‘I will show you how safe Telegram is’ (twitter.com/jsrailton)
225 points by rzk on Feb 16, 2023 | hide | past | favorite | 195 comments


Better to link to the actual story (Guardian in partnership with a few others):

https://www.theguardian.com/world/2023/feb/15/revealed-disin...

Covered already on HN:

https://news.ycombinator.com/item?id=34800157

https://news.ycombinator.com/item?id=34803779

Etc


Speaking of the actual story, how come they put all of this effort into unmasking this guy, discussing the consequences of his actions to the integrity of democracies... but then he says "I hack into Telegram accounts by using an SS7 vulnerability", and they just copy and paste that verbatim into the story, not even bothering to explain it in the slightest?

Obviously it's because they themselves don't know what it means, so it just gets filtered by their brain as nonsense tech words. But is it really that hard for them to reach out to a tech person and ask them "hey, what does it mean that they use an SS7 vulnerability to hack into Telegram accounts?", so that they can explain "Oh, that means they're impersonating your phone number, so that when Telegram sends you an SMS to verify that it's you, they receive that SMS on your behalf and can log in to your Telegram account"?

It baffles me, because it would take so little effort for them to provide this additional context into how the actual hacking is done, in a way that is understandable and interesting for the average non-tech person, and yet... they just don't bother to?

Somehow this seems to only be acceptable for tech stuff. If when they found out that this guy was involved in the Nigerian elections, the reporter shrugged and said "Huh, Nigeria. I wonder what a Nigeria is. Anyway, not worth Googling it or checking whether it has any relevance to the story whatsoever" then everyone would agree he's doing a disservice to the story and to the public. Yet somehow this is routinely done with technical terms, the public is worse off because basic things are hidden to them behind inscrutable acronyms by lazy reporters, and no one bats an eye.


Ironically the Guardian could have even looked up a past article published by them - "SS7 hack explained: what can you do about it?"

https://www.theguardian.com/technology/2016/apr/19/ss7-hack-...


The SS7 claim might just be a cover for a basic spear-phishing attack. Otherwise I'm not sure why, to demo it, you wouldn't hack the actual politician's phone instead of just his assistant's.


I monitor Russian war channels, and some people there insist that Russian military personnel should use only Telegram. The claim is that if you use WhatsApp, Ukrainian officers will get all your chats from NATO.

Telegram accounts of opposition figures were hacked by the Belarusian police as well. It's known and documented.

My takeaway is that for truly private chat one should write his own software using simple crypto without all those fancy clients. Ideally just use one time keys and xor everything. Can do it with pen and paper.

Signal might be safe, but I think it's a honeypot.


> My takeaway is that for truly private chat one should write his own software using simple crypto without all those fancy clients.

That's actually pretty secure in practice, because you won't be communicating with anybody.

> Ideally just use one time keys and xor everything.

How do you generate the keys? How do you share them? And you only care about encryption, authentication does not matter to you at all?

The chance of getting this right as an individual developer, especially given this level of understanding of cryptography, is next to zero.


One-time pads (OTPs) are uncrackable if the pad is random enough. The hard part is transporting the pad to the other party.


Textbook one-time pads are trivially malleable. Sometimes non-malleability matters just as much or more than privacy.

("This is POTUS, do$ÿ} launch the nukes!!!")


If you are a developer working for the government and in charge of this, please do not make this mistake. If you are the President of the United States and need to send a message like this, please make it completely unambiguous. Maybe repeat it several times and provide a justification.


I'm fairly confident that the US government, given that it runs the NSA, has access to a textbook or two on applied cryptography and cryptanalysis and does not need to source input on their cryptographic designs from HN :)


> That's actually pretty secure in practice, because you won't be communicating with anybody.

Nice. In all seriousness though, there has to be a way to make a hand-rolled but secure channel. E.g. use standard crypto libraries, hand-exchange keys or key phrases, etc?


> How do you generate the keys? How do you share them?

https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...
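
A toy sketch of the shape of it, with deliberately undersized parameters (real use needs a 2048-bit+ group or X25519):

    import secrets

    p, g = 0xFFFFFFFB, 5                 # 32-bit toy prime; far too small for real use

    a = secrets.randbelow(p - 2) + 2     # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 2     # Bob's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)    # exchanged in the clear

    assert pow(B, a, p) == pow(A, b, p)  # both sides now share g^(ab) mod p

Note that plain Diffie-Hellman answers key generation and sharing but not authentication: a man in the middle can simply run one exchange with each party.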


Reasonable choice, but then you're doing something completely different from what GP suggested already:

> Ideally just use one time keys and xor everything.


> Signal might be safe, but I think it's a honeypot.

Telegram smells a lot more like a honeypot than Signal


If you use group chats or unencrypted individual chats, then it's absolutely not secure. I don't use those for the security though. It's just a really nice messenger that's not Messenger. But it does also have E2E encrypted chats, and that's what you should use if you're trying to keep your conversation secret. Unfortunately many people aren't aware of that, and just assume Telegram is secure by default.


Even Telegram's encrypted chats have not had nearly enough analysis to be trusted. They used to use some bad home-grown techniques, now it's just "probably secure enough".

Their client code also isn't really open source, even though they claim it is. Signal, meanwhile, has reproducible builds.


Huh? Their client code isn't just fully open-source, it also has reproducible builds both on Android and iOS, has been for a while. Doesn't Signal only have them on Android? It's a great app either way but if RBs matter, that's a little knock against it.


Where did you get that idea from?

Their client code is not usable because they don't provide working instructions for compiling it. If you can't compile it, they can't have reproducible builds.

For example, see this issue from 2019 which the developers still haven't replied to:

https://github.com/TelegramMessenger/Telegram-iOS/issues/97

There are a lot more issues about people not being able to use the code, none of which get any replies from the owners.


GitHub builds directly from their source code just fine: https://github.com/TelegramMessenger/Telegram-iOS/actions/ru...

(uses this script: https://github.com/TelegramMessenger/Telegram-iOS/blob/maste...)

And here are a few recent examples of them helping people to solve build issues: https://github.com/TelegramMessenger/Telegram-iOS/issues/968... https://github.com/TelegramMessenger/Telegram-iOS/issues/986... https://github.com/TelegramMessenger/Telegram-iOS/issues/992...

How were you digging to find an irrelevant issue yet omitted all of this? I really don't see the picture you're trying to paint.


It certainly is usable. I'd used Telegram FOSS from F-Droid for quite a long time before switching to Nekogram (also on F-Droid), a third-party Telegram client with some extra features.

Incidentally, Signal is still not available on F-Droid, and Signal developers are known to be hostile towards third-party clients (for "security reasons." Almost like I've heard that argument before from a certain smartphone manufacturer...)


Yep, I also use Nekogram. I like how open the client aspect is of Telegram.


From their official announcement of reproducible builds. And these seem to be the instructions: https://core.telegram.org/reproducible-builds


Both could be, just from different queen bees


Moxie Marlinspike is a lot of things but I'm pretty confident the queen bee of a honeypot is not one of them.


Moxie Marlinspike - and Whisper Systems - have assumed a variety of concerning positions over the years. Concerning, because their flaws are obvious, yet Moxie - a very smart guy - pretends not to notice them.

For example, Moxie defended discontinuing encrypted SMS on the grounds that it leaked metadata to telcos - yet failed to emphasize that this was merely the same metadata leaked to Whisper Systems, or justify why we should trust Whisper Systems more. "Just trust us" policies are worrying.

Moxie's defense of failing to provide any alternative to downloading Signal/TextSecure from Google Play also contained a number of very eyebrow-raising frank admissions, including that Whisper Systems was motivated by the ability to silently push updates and monitor its users (there's that "just trust us" again). Followed by the very weird assertion that "Avoiding Play alone is not a privacy win", which is a bit like saying there's no point wearing a seatbelt because you might get injured in other ways.

Whisper Systems is notoriously hostile to any use of Signal that doesn't involve their official client, going through servers they administrate (servers which run code they release infrequently if at all). Signal has made no attempt to bootstrap a federated system, even though this would save them money. They are extremely keen to maintain ironclad control over both the app and the server, and are willing to take unusual security postures in service of this.

I've read enough stuff from Moxie that makes me say "what, that doesn't make sense" to make me very suspicious.


I'm not necessarily agreeing with the positions you mentioned, except to note that his focus with Signal was making an encrypted messenger that was easy enough to use that normal people would actually use it, which is a different goal than being the most secure possible, and that does inevitably involve compromises and tradeoffs. He has written extensively on the problems with decentralized systems and the added complexity that makes people not adopt them, and as someone who's done extensive work in the dweb space, his arguments are quite sound and not suspicious to me in the slightest. https://moxie.org/2022/01/07/web3-first-impressions.html

If someone wants to send encrypted messages in a decentralized way I suppose they can use PGP with Tor (a project that has also received US government funding btw) on top of a P2P network or something similarly strange, but good luck getting the non ultra tech literate friends to use any of it. They will just resort to going back to SMS because they can't figure it out, or because it's slow and doesn't work, or because a malicious actor figures out how to blast the entire network with spam messages, and then we're right back to the original problem with no improvements.

As a thought experiment here to make this conversation more constructive, what is the actually most secure way to send messages between two parties, and how many of the non tech friends has someone convinced to actually use it?


The SMS support that was discontinued was for plain SMS to/from other plain SMS numbers from within the Signal app - it was not encrypted.



Ah, right. I thought OP was referring to https://www.signal.org/blog/sms-removal-android/ , my bad.


Signal is funded by the US State Department[1]. I'm sure you can trust it to send messages to your drug dealer, your mistress, or the competitor you are selling your company's secrets to. I wouldn't trust it if I wanted to keep secrets from American three-letter agencies, though.

[1] https://www.mintpressnews.com/the-open-technology-fund-makes...


DARPA was going to contribute money to the OpenBSD project (which also maintains OpenSSH) before Theo said some things critical of the Iraq war and they retracted it. I wonder how many people would have accused them of being CIA plants if they took the grant money. Regardless, there are many competing interests and bureaucracies in the US government and it's not a safe assumption that they are in cahoots with each other on encryption they can break on demand. It's usually a more complicated picture than just "the government". Some of this funding is likely with the well meaning intention and goal of strengthening the security and privacy of communication between Americans.


Also see the history of "window" (chaff) in World War II. R&D people for both the Allies and the Germans realised, as improvements of the new "radar" continued, that radar doesn't see a difference between an aeroplane and a suitably sized radio-reflecting object, say a strip of foil. So, if you chuck a bunch of these foil strips out of a plane, now the enemy radar is full of "planes" that don't really exist.

Both sides stalled deployment of this trivial yet effective countermeasure because they believed once they used it their opponents would immediately understand how it was done ("Gee, immediately after the German bombers did that trick which messed up our radar we found loads of metal strips in trees all over the area they attacked...") and so copy it - and both had "official" estimates made which said their opponents would surely benefit more than they would once it came into use.


Haha, I'm sorry, but a name like Moxie Marlinspike seems designed by professionals to tickle the "so quirky he must be safe" button in the nerd community.


Despite the "trustless" credo that gets passed around in cryptography, it's actually often very important to know the people that work in this space and I invite you to read more about this particular person, his background and roots and make a determination as to his intentions in this space and the level of trust you are willing to put in his work (and/or the stuff he previously worked on) and not just make a base judgement on his name alone. FWIW, I've been familiar with his work since before I even used or cared about cryptography.



Yeah, Moxie is diametrically opposed to me. This is why I don't promote Signal and only use it through Matrix for those two people who don't use anything else. Which is also in contravention of his highness's wishes, because he hates third-party clients.

Moxie believes in security above everything even if it takes choices away from the user and forces you to trust a third party (in this case him, but also the mobile vendor and Google because he doesn't trust custom firmware either). Basically what he calls the mobile security model. And the reason I hate mobile devices with their closed model and attestation crap to make sure I play by the vendor's rules.

I believe a user should always have the final say in everything. If the user makes it insecure that's their business. Basically the desktop security model. And the reason I don't like working on mobile devices if I can avoid it.

I'm as principled as he (and other people like him) is, so I wouldn't even enter an argument; there's no point.


Confused by this; Moxie did eventually concede and provide a signed APK.


Drew DeVault took the time to write it up: https://drewdevault.com/2018/08/08/Signal.html


> Truly secure systems don’t require trust.

This blatantly absolutist statement is completely false. Taken literally, it means that no computer today is secure. It also means that it's pointless to care about security on any phone in existence, because by that standard security cannot currently be achieved.


Even assuming Moxie is beyond doubt, that won't help much today, as he resigned as Signal's CEO.

"https://www.bbc.com/news/technology-59937614"

(He is still part of Signal's board, though).


> Telegram accounts of opposition figures were hacked by the Belarusian police as well. It's known and documented.

No, not really _hacked_. You give your phone unlocked to the police, and they access your Telegram account. You can't refuse, and you probably can imagine why.


Actually really hacked. What you said also happens of course, but hacks are routine as well. Happened to 2 people I know personally. They do it by intercepting the sms code at the cell operator level.


I do not use Telegram: what SMS code?


The default authentication method is a 5-digit login code received via SMS. You can enable 2FA with a password as the second method, but you have to know how to do it. Worse, there are two things called "password" there: the "cloud password", which is the 2FA password, and the "password code" (that's what it's called in Russian; maybe the English interface uses a better name), which is merely a PIN for locking the Telegram app on your device. So some less tech-savvy people, after hearing they need to set up a password, set the latter and falsely assume they are now safe.


Only if you have one device: once you have authenticated on two devices the code is sent over Telegram to other devices you have. In that case there is no option to fall back to SMS when authenticating. I think if you lose all your devices then you're locked out, unless you have a recovery email set up.

In addition, if login succeeds, an authentication warning that a new device has connected is sent to every other device, and contrary to other Telegram chat messages it cannot be deleted from another device, even a previously authenticated one. The message does not show up on the new device, and it has to be deleted manually, once per device. So an attacker who gains access cannot hide their tracks unless they have access to all Telegram-known devices simultaneously.


> In that case there is no option to fall back to SMS when authenticating.

That's not true. The option is there. (Edit: at least for now, some changes in that area are upcoming)

> So an attacker who gains access cannot hide their tracks unless they have access to all Telegram-known devices simultaneously.

The attack usually happens at night. By the time you wake up, all your chat histories are already downloaded and the secret police is on its way.


> That's not true. The option is there.

I tried it before I wrote the comment to be 100% sure, the option was not there, and it still is not.

I am not sure why that is, but I am quite sure it was also the case before, as I was checking this back then.


The two-factor auth code that validates it's really you, after someone has guessed or learned your password from a previous leak.


It's worse. 2FA is optional and SMS code is the first (and default) auth method.


> It's worse. 2FA is optional and SMS code is the first (and default) auth method.

This will be changed on Saturday.

Got the following message about API changes last week: https://telegra.ph/Telegram-API-Changes-02-16


But given the choice, isn't SMS a stronger first authentication factor? It's temporary and randomized, whereas users don't change passwords that frequently. So a password is much more susceptible to keyloggers/malware/brute-force than an SMS code.


That announcement seems to be concerned with unofficial clients only. But it does sound like the change is a part of some bigger package.



I actually wrote to Telegram’s support team to get more info about this, and it seems the article has a lot of errors. The support rep linked me to this, https://telegra.ph/Wired-Errors which is Telegram’s response to all of it.


> as no app can defend against direct access to a device.

Of course you can't protect the local data from all the possible attacks but there are many ways to prevent some attacks and complicate others. So this article isn't honest.

I kinda don't understand why people trust Telegram. Regular messages aren't even encrypted. SMS authentication. Contact harvesting. Mandatory background-run permissions (the app constantly complains if it can't run its background tasks). Then some strange people selling users' metadata. Once I purchased my own location history for ~30 euros in TRX, and it was accurate.


> No, not really _hacked_.

I see a number of comments similar to yours on HN, and I am legitimately curious and do not mean to offend: Why did you think Belarusian police do not hack phones? Was your comment based on any source or personal experience, or did it feel right?

I ask because I have observed a number of times where HN comments are wrong on things (that I have personal or professional experience in) but get upvoted because the comments look reasonable, while the correct comment gets downvoted.

It's an interesting failure mode of HN discussions. I thought I'd ask you because the topic is not contentious, and hopefully this comment does not come across as underhanded.


Because to beat up someone and order them to unlock their phone is faster, does not require trained staff, can be done anywhere and is fun for the cops.

In some cases they use Israeli hardware (I forget the name of the company) if you are fooled into giving your phone away for some period of time, e.g. when entering a police building where they have a "no-phone zone" for visitors.


> Because to beat up someone and order them to unlock their phone is faster, does not require trained staff, can be done anywhere and is fun for the cops.

The implicit assumption is that the states always go for the faster solution. I'd argue that intercepting an SMS/call is easier and faster for nation-states than arrest and torture; most carriers worldwide have interception capabilities mandated by law: using the secret police to subvert those capabilities is not rocket science - especially when coordinating with hacking-as-a-service companies like "Team Jorge".

What really grinds my gears is when someone confidently states their thought experiment/"derivation from (idealized) first principles" as a rebuttal to empirical evidence. Incidentally, the wording is sometimes absolute, which cloaks the speculative (and wrong) nature of such comments ("No, not really hacked", implying hacking doesn't happen, and yet it does).

I've seen such 'truthy' but wrong comments voted to the top on HN, because the up-voters share the same blindspot and think "Yeah, that sounds right"


> The implicit assumption is that the states always go for the faster solution.

That's what cops in Belarus did to me twice since 2020 when I declined to unlock my phone. That's what they did to many of my friends and colleagues. Hacking happens, but it is more of an exception, as it requires planning and effort, which is hard. Blackmail and threats are much easier.


Just because B happened to you personally doesn't mean A never happens to anyone.

When a person says "A happens", and another responds by saying "A doesn't really happen - rather B happens", their assertion is insufficient to refute the existence of A, whether based on a cost analysis thought experiment or personal experience with B. It is far more accurate for the second person to make an addition rather than a refutation by saying "B also happens more frequently than A"


Really cool


Obligatory XKCD: https://xkcd.com/538/

Cellebrite and GrayKey are prominent providers of cellular DFIR hardware used by government agencies.


"Signal might be safe, but I think it's a honeypot."

Based on what?

Signal documents its own encryption process, and you can check the app source code to verify it. https://signal.org/docs/specifications/doubleratchet/

Signal is the best choice I know of when I'm looking for the union of 1. True e2e encryption, and 2. Ease of use by non-technical people.


The fact that it locks you into using their servers, does not distribute on F-Droid (only Google Play or an APK with an insecure update mechanism), and has a completely closed-source server-side "abusive message filter" module that could functionally be used for censorship, storing messages for future decryption, or any number of other nefarious purposes - we have no idea, since it's not open source (https://github.com/signalapp/Signal-Server/blob/main/.gitmod...).

Additionally, you cannot distribute branded forks of Signal, and if you do fork it, your fork is not allowed to connect to Signal's "official" OWS (Open Whisper Systems) servers - hostility to federation should be viewed with prejudice and suspicion at the very least; it suggests a vested interest in a single point of failure (or control), which goes against user interests.

Further reading: https://drewdevault.com/2018/08/08/Signal.html


Signal uses Curve25519, AES-256, and HMAC-SHA256 for its e2e encryption. So unless you believe those algorithms are insecure, there's no reason to think that their server setup is a compromise on your messages' security.
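
For flavor, here is roughly how those primitives compose - a toy encrypt-then-MAC sketch with the Python "cryptography" package, not Signal's actual Double Ratchet (see their published specs for that):

    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes, hmac, padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())           # Curve25519 agreement

    keys = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
                info=b"demo").derive(shared)
    enc_key, mac_key = keys[:32], keys[32:]             # AES-256 key + MAC key

    padder = padding.PKCS7(128).padder()
    pt = padder.update(b"hello") + padder.finalize()
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ct = enc.update(pt) + enc.finalize()

    mac = hmac.HMAC(mac_key, hashes.SHA256())
    mac.update(iv + ct)
    tag = mac.finalize()                                # encrypt-then-MAC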

Fear of "future decryption" applies equally to all forms of encrypted communication, regardless of which servers the messages go through. And since AES-256 is known to resist quantum computing decryption, there's no actual reason to think future decryption will be an issue.

Signal has actually been approached by governments for whatever user data they have, and exactly none of it included messages - encrypted or otherwise. https://www.dailydot.com/debug/signal-grand-jury-subpoena-da...

As for everything you said about F-Droid and forking, see https://molly.im/ . It's not a "branded" fork but it does connect to Signal's servers.


You don't need to break encryption to engage in censorship based on unencrypted metadata.

OWS is based in San Francisco. The US federal government has compelled providers to introduce backdoors or start logging information that was not being logged before for certain IP addresses or user identifiers - phone numbers in this case, and done so under gag orders that prevent companies from disclosing it. Judges can rule that use of a warrant canary as intended can violate these gag orders as well.

Just because Signal can share 1 or 2 instances of cases where information was requested and they did not comply does not mean they never have, weren't able to in the past, or aren't able to in the future.

As others have stated, using Signal is putting a lot of trust into the OWS legal entity, and proper cryptosystems should not rely on trust.

I hadn't heard of Molly and am checking it out now, thank you for sharing.


There’s no need to “hack” anything in Belarus. They simply resort to torture.

https://www.themoscowtimes.com/2022/12/23/critics-slam-16-ye...


They do both. Source: am Belarusian, both things happened to my friends (specifically with respect to Telegram account access).


Did your friends that were hacked have the PIN set for their Telegram accounts? I'm curious whether there is actually a Telegram "hack" or yet another SMS hijacking (via SS7 or by just directly co-opting the telco).


Both hacking victims I personally know didn't have the 2FA password set and were hacked via SMS hijacking. I'm following this pretty closely and so far haven't heard of successful attacks of any other type. One semi-exception is [1], but here the victim's device was captured and most likely used to receive the password reset email.

[1] https://habr.com/ru/post/598939/


I’d prefer to be hacked, given a choice.


> My takeaway is that for truly private chat one should write his own software

That's the only way to make sure you're using software you trust, but rolling your own crypto implementations is often not so secure (because of the many pitfalls).


You don't have to roll your own crypto. You can e.g. just use libsodium which is designed to be easy-to-use and not possible to use incorrectly.
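
For example, via the PyNaCl bindings (a minimal sketch - key generation, nonce handling, and authentication are all handled for you):

    import nacl.secret, nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32 random bytes
    box = nacl.secret.SecretBox(key)

    ct = box.encrypt(b"attack at dawn")  # random nonce chosen and prepended
    print(box.decrypt(ct))               # b'attack at dawn'; raises if tampered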


Using libsodium is good advice, but all crypto libraries can be used incorrectly.


I'd have no idea where to start securing an app against side channel hacks.

Is the memory encrypted? Should it be? Can other apps access it?

E2E might be working fine, it's the ends I'd be concerned about.


Narcos did this, and alas, the FBI ended up turning the guy who wrote their software against them all. It's hard for a criminal mastermind to also be a good programmer, I think.


you could run your own xmpp or matrix instances.


If you and your group chat friends can meet up in person once to input an agreed-upon ~1 GB one-time pad, then you can exchange uncrackable text messages for years on any insecure channel. I've long felt that this is the ideal solution for anything super, super secret.
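
Back-of-the-envelope, with the rates being pure assumptions:

    pad = 10**9                 # 1 GB of shared pad
    per_day = 200 * 160         # 200 SMS-sized (160-byte) messages a day
    print(pad / per_day / 365)  # ~85 -- years of messages from one meetup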


Everyone in that group also has to manage to keep that 1 GB secret.


I mean if a Roman general could…


1 GB?


The size is fairly irrelevant. 1 GB fits on a microSD card many times smaller than a Roman General's hand-written cypher.


How do you handle that SD card in practice to do your communication?

My point is, if anyone is on to you and can physically get to you (or anyone else in the group), there is a high risk they can get hold of the data.


That's why you put it behind a secret menu in an iPod https://tidbits.com/2020/08/17/the-case-of-the-top-secret-ip...


True, but it can be copied and returned quickly without the victim knowing, and you have to trust every device you put that SD card into to read the pad.


also how do you ensure the messages haven't been corrupted in transit


1. For each letter of the plaintext, concatenate "dontcorruptmebro"

2. Grab 17 letters from the pad (I'm assuming pad means the randomly chosen letters that make up the 1 gigabyte of shared secret among the participants)

3. Convert the first letter of the pad to a number (let's say 1-26)

4. Right-shift the message by that amount and let the letters wrap around: e.g., if the plaintext letter is g and we're shifting by 1 the plaintext becomes "ogdontcorruptmebr"

5. Encrypt with pad (not a cryptographer, but I'm assuming this means adding a character of the shared secret with a character of the plaintext, and wrapping around if it goes past the end)

6. Send

7. Decrypt with pad

8. Left-shift and wrap by $first_pad_letter_value (i.e., the same value we used to right-shift and wrap above)

9. Remove the "dontcorruptmebro" part to concatenate the plaintext letters to reveal the original message

10. Rinse and repeat, and do whatever secure messengers do if/when you receive a part of the "dontcorruptmebro" that isn't "dontcorruptmebro"

I only thought about this for a few minutes, and I'm not a cryptographer. So I'm honestly asking-- did I solve the corruption-in-transit problem without screwing up anything?

Edit: just to be clear-- I'm not trying to be efficient, or to authenticate, or do anything whatsoever other than a dead simple thing that can be implemented easily.

Edit 2: clarification


not sure; i think this just means the attacker has to guess one letter of the pad in addition to the crib, so (with your example alphabet) they need 26× as many tries to get a corrupted message through


You can never ensure that your communication channel isn't corrupted. But with a one-time pad you'll just receive unreadable garbage then. You're also free to sign messages before encryption.


if an adversary xors their guess of part of your message (a crib) and what they want it to say with the one-time-pad ciphertext, they can make it say what they want if the guess is right; this is a general vulnerability with unauthenticated stream ciphers

if they guess wrong, there's a little unreadable garbage in the message, but in many scenarios they only have to guess right once

so you need some kind of message authenticator like hmac or poly1305, but as i understand it, these lack the absolute security guarantee of a one-time pad
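
a sketch of what that might look like, spending 32 extra pad bytes per message on a one-time hmac key (so the integrity part is computational, not information-theoretic):

    import hashlib, hmac

    def seal(msg: bytes, pad: bytes) -> bytes:
        key, stream = pad[:32], pad[32:32 + len(msg)]   # fresh pad bytes
        ct = bytes(m ^ k for m, k in zip(msg, stream))
        return hmac.new(key, ct, hashlib.sha256).digest() + ct

    def unseal(blob: bytes, pad: bytes) -> bytes:
        tag, ct = blob[:32], blob[32:]
        key, stream = pad[:32], pad[32:32 + len(ct)]
        expect = hmac.new(key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("tampered or corrupted")
        return bytes(c ^ k for c, k in zip(ct, stream))

both sides must discard the consumed pad bytes after every message, or the guarantees evaporate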


Good point. I guess that's the same as the "heil h**er" vulnerability.

Maybe some kind of low density salting would make messages hard to guess but not too hard to read


That actually makes a lot of sense. Matrix should add it as a feature.


There are too many practical drawbacks. For example, you have to send the OTP without getting it intercepted. How do you do that? It also has to be truly random. You can't pick a page from a book or anything; that undermines the whole model and makes cryptanalysis possible.

It's useful but only in very specific cases. Mostly it's just a paradigm in cryptography.

Note that, for all its fame as unbreakable, it does lack perfect forward secrecy, so anyone getting hold of the pad from any of the participants can read their intercepts from all the way back.


Sitting in a cafe almost an entire week later: oh, but it does have perfect forward secrecy, as long as you discard the beginning of the pad as soon as you use it (which is actually strictly better than passing a temporary key over a potentially decryptable channel, and then discarding it after session ends). If I correctly understand what perfect forward secrecy is.


> Telegram accounts of opposition figures were hacked by the Belarusian police as well. It's known and documented.

These were cases where cops either were able to access the device or were able to intercept SMS messages. Notice that this did not happen to those Belarus-related channels whose admins left the country. For example, there is an ongoing case [1][2] with one of the admins who left the country: his identity was found out, and now cops are using his brother, who was still in the country, as a lever to make him delete the channel.

They can't delete/hack your channel unless you let them.

[1] https://mediazona.by/article/2023/02/15/blackmail

[2] https://mediazona.by/news/2023/01/22/belzd


telegram isn't e2e encrypted (unless you use secret chats which nobody does (and those do not support more than 2 participants))


Not only that, but they've formed suspicious ties with the Kremlin that resulted in it being unblocked in Russia, and they have a mysterious source of funding after TON collapsed.


I'd love to hear more about this.

I thought Pavel Durov (the founder) isn't really welcome in Russia. He seems to have left a lot of money on the table with his previous company (VK) to get out of there.

The interpretation of events with telegram I've always heard has been that Russia tried to block telegram but doing so blocked most of the rest of useful services as telegram was hosted on e.g. AWS. This meant that real work was slowed. In the end, they gave up because continuing the block would mean crippling their access to the internet.

I have found it weird that pro-Putin Russians still use Telegram. But it seems like a lack of discipline, which is quite the plague of the Russian military.

I'd genuinely love to get good information on this, as I use Telegram because it is a great piece of technology, and I wouldn't if it were collaborating with the Russian government. I think a lot of rumors do not distinguish between the Russian people and the Russian government. And that's pretty damaging.



While very long, it doesn't look like that article provides anything but speculation. What would you say is the key point? Where's the meat?

I know that it would be hard to get good evidence here but I still think the burden of proof lies on the more extreme claim. Which to me is that a company founded by someone fleeing oppression would work directly with the oppressor.

The example of messages appearing as 'read' does not imply to me that Telegram is working with them - rather that her app was compromised by other means. Otherwise, why create such a flaw in your monitoring?

Meduza - a news organization criminally charged in Russia, operating from Latvia - is mostly accessed in Russia via Telegram. It's hard but not impossible to imagine that the Russian government would allow it to remain uncensored.


The agreement with the Russian Government over "Terrorism" cases is not just speculation, but announced by both parties.


That would be damning if I could find a statement by Telegram about how they are working with the Russian government to deal with 'terrorism'.

I'm seeing quoted statements by the Russian government in the article that I cannot verify elsewhere, but I'm unable to see Telegram making a statement from their side about working with the Russian government.

I'm truly interested in getting to the bottom of this. This Wired piece doesn't inspire a lot of confidence in its trustworthiness, but I'm unable to find something with more concrete sources or at least a better reputation.


As a counterpoint, here's Telegram's answer to this article: https://telegra.ph/Wired-Errors


Interesting. Where can we read more?


Can you read Russian, or translate it via Google Translate?

https://medium.com/@anton.rozenberg/friendship-betrayal-clai...

While the whole story is about how Durov tried to sleep with the author's wife and tried to put pressure on him using his work position, the author did expose some facts which are very interesting:

- Telegram's office in Saint Petersburg shares the same building as Mail.ru (and Mail.ru is basically a pro-government org)

- The Telegram team still contributes to VKontakte, and they do share some software components connected to online messaging

- Most interesting: Russian oligarchs have a very big influence on the products (VKontakte and Telegram). While both organizations try to say it's not true, behind closed doors they do have a well-established process of handing over data when certain criteria are met


Curious how good ChatGPT is at translation. Tried it on Russian?


Have you tried?


Not yet, but I will next time I log in.


I wouldn't say nobody. My parents, SO, and some of my friends all use the encrypted chats. It would be nice if the client made the differences more clear though. Although if it is a honeypot, then they obviously wouldn't want to advertise the workaround.


Is there an e2e chat program that actually supports encrypted chats with more than 2 people?

We've used Telegram e2e, but it's frustrating since you can't see those chats on desktop at all (for obvious reasons).


I say again: Jitsi Meet (open source, self-hosted) has voice/video E2EE (it has text chat, but that is not yet encrypted).

No login. No password. No app. All WebRTC. Chrome required since Firefox does not yet seem to support encryption over WebRTC (I think).

I set it up from scratch with Debian Linux and Virtualbox in about 30 minutes here: https://www.youtube.com/watch?v=4rlffwHUchk


Yes, WhatsApp and Signal. Both use the Signal Protocol.


Before doing anything related to security, you should always understand what your threat profile is. If you don't have a real probability that a state actor will go after you, most likely your threat model will include scams and criminals. And unless you're a wealthy individual, most of these scams will be automated, not hand-tailored.


Good old PGP (in the form of using GnuPG to send encrypted emails) is safe. Make sure that you trust the correct keys, though.

Non standard solutions are a recipe for disaster.


I don't think anyone can hack a WhatsApp chat. Somehow, WhatsApp is the most reliable end-to-end encrypted messaging service today, and many just don't know.


WhatsApp is closed source. For what it's worth, the protocol itself might be impenetrable, but the client itself has access to your decrypted messages and can still decide to send them back to Facebook without your knowledge.

A system that depends on good will and "trust me bro" is not secure by default, even if Meta/Facebook were the most trustworthy company in the world under the most honest legislation.


Software can be reverse engineered even if it doesn't have source available (and WhatsApp has been extensively reverse engineered). Similarly, software might be impractical to audit even if it does have source available.


Open vs. closed source is a completely moot point in the context of iOS and Android if your threat model includes vendor/supply chain attacks.


If you have reproducible builds and you can verify the digest of the app on your phone, the difference between open-source and closed-source projects is like night and day.
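
A sketch of the verification step (file paths are hypothetical; real tooling, such as Signal's apkdiff, also strips the signing block before comparing):

    import hashlib

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # compare your own build against the binary pulled off the device
    assert sha256("local-build/app.apk") == sha256("from-device/app.apk")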


But you can’t have reproducible builds at least in the App Store.


Apple could implement showing the pre-encryption hash if they wanted. I cannot see any negative impact for them if we just apply some pressure.


Oh, don't get me wrong, I think it would be phenomenal if Apple were to allow reproducible builds on the App Store!

But given both the existing encryption as you mention, and Bitcode (mandatory for at least the Apple Watch for now), I think it's unfortunately pretty unlikely to happen.


Isn't Signal open source? How could it be a honeypot? The community should be able to verify e2e encryption fairly easily.


Bugs and backdoors are indistinguishable. No software is bug-free. App stores make repeatable builds hard to achieve. Another vector to introduce changes. Might just be convenient target to attack via other means (e.g. rogue android update targeted for this special app).

I'm just a bit paranoid recently; I don't have any proof or anything like that. I, personally, would use Signal today as an off-the-shelf, easy-to-use solution. But for something that absolutely must not be disclosed, I'd use age (write the message in a text file, encrypt it via age, send base64). It supports symmetric and asymmetric encryption, the schemes are simple enough, and the software is popular enough and super-focused.


Just use an open protocol that doesn't lock you into a specific server or client, like XMPP with OMEMO.


Look into Briar


So, the Twitter post alludes to SS7, but it is not clear how it is (ab)used to do the Telegram-related exploitation.

Presumably, SS7's design flaws are being used to intercept Telegram's registration verification messages, placing the resulting Telegram accounts under the control of the bad actors while appearing to be real, independent users (and so aiding in establishing their credibility, which leads to other things), but that is a bit... handwave-y.


Telegram allows logins per SMS code (they will be rolling out changes in two days). So as long as you knew the number of your victim and have the ability to re-route SMS, you were able to login to other people’s accounts.

Of course this can be easily mitigated by setting a “cloud password”, but I guess most people don’t do that.


If this is true, then Signal should be subject to the same weakness, no?


Yes it is. It’s worse for Telegram though because it gives the attacker access to the chat history too. Telegram does however send a message to all devices when a new device is logging in, so at least you would know. Signal does not do that but your contacts will get a message that your security code changed if they have the option for that enabled, but people generally ignore this message.

Both services offer to set an additional pin or password to protect your account.


> people generally ignore this message.

Because it's noisy. You get that message whenever your conversation partner switches phones or re-installs Signal. The reaction on seeing that message is more "enjoy the new phone, what did you get?" than "have you been hacked?"


For most people, sure. If you're conspiring with others to protest against the oppressive Russian government, you may want to pay closer attention, though. You may even want to do physical verification of the keys.


Signal provides a Registration Lock that is available in the settings for preventing this from happening to some degree, it's however opt-in and you need to set and remember a PIN.


Telegram too.


Nope.

First of all, everything's end-to-end encrypted and no chat logs are stored, so even if someone did do that they wouldn't have access to your chat history.

Second of all, if someone tries to impersonate you, your contact gets a notification that your encryption keys have changed, ideally making the recipient slightly more vigilant.

Finally, Signal has built-in (optional and not on by default) protection against these types of attacks, which requires a PIN after activating the number on a new device, making SIM-swap attacks useless without the PIN as the second factor: https://support.signal.org/hc/en-us/articles/360007059792-Si...


Indeed, you're right: the worst that can happen is that chat recipients would see a "your security number has changed". But nothing will happen when talking to a whole new person, or when you join a group you've never been in before.


But notice that these are all things you could do anyway. Why are we even bothering to impersonate someone else?

If the local group "Superb Fun for a Superb Owl" is letting in anybody who wants to join, then rather than pretending to be party animal Steve by using SMS interception to impersonate him, why wouldn't a cop named Bill just join the group as Bill? And if they don't want cops, perhaps because they suspect that "Set fire to the mall" will be considered a crime rather than a fun celebration of NFL Glory, they're not going to wave Steve through; they're going to want to check, and they'll discover that's an impersonator, not the real Steve.


The impersonator, by definition, isn't known. Only the profile being impersonated is.

If an activist is part of a private group, and a cop impersonates the activist, they can get all the new messages without raising any doubt. In a lively conversation, the cop can even send messages and expect the activist not to see them. Disappearing messages will help here.


To make things easier, let's suppose our Activist is named Alice, and our Cop is named Charlie.

> The impersonator, by definition isn't known. Only the profile being impersonated is.

Charlie isn't able to impersonate Alice's profile because that's encrypted and Charlie doesn't have the key. Charlie can make a fresh profile, but it's not Alice's, although of course Charlie is able to use a stock photo of Alice (if he has one) and name it "Alice". Let's label this Alice2.

> If an activist is part of a private group, and a cop impersonates the activist, they can get all the new messages without raising any doubt.

Although Alice may be part of a group, Charlie isn't and Alice2 isn't either. Messages sent to Alice (as a group member) are not received by Alice2, who wouldn't be able to decrypt them anyway. Charlie will need to (as Alice2) ask to have Alice2 admitted to the group. Perhaps he can pretend Alice dropped her phone and bought a new one. Depending on what sort of "activist group" this is and the threat level, this may be quite easy or involve an in-person meeting which will be difficult to fake.

Of course since apparently you're imagining Alice is actually still around, she presumably tells people Alice2 is an imposter and it's likely this goes very badly for Charlie even if it's not an in-person meeting.

> In a lively conversation, the cop can even send messages and expect the activist not to see them. Disappearing messages will help here.

If Alice isn't knocked off the network, none of this "impersonation" works. You seem to be imagining Signal is some toy message board system where you can log in as Alice - just need an SMS, but it's nothing like that, that's why subpoenas don't bring anything meaningful back when they ask Signal about phone numbers, Signal doesn't know anything.


In Telegram the first is available opt-in, an equivalent to the second is available opt-in (key change = shows up as a different chat), and the third is equally true.


We know from experience that most users never change defaults. For that reason alone, Telegram's "secure chat" is anything but.


I agree Telegram is a worse choice for security, but I think it’s important to know specific differences and not just simplify to “not secure”, because that approach’s flip side - believing simply that “Signal is good and secure” - leads to mistakes like applying the law of defaults only to Telegram, not Signal.


> For that reason alone, Telegram's "secure chat" is anything but.

I didn't think that particular Telegram feature was affected in any way by any option, only that they needed to be started explicitly but were always secure no matter what.

So I immediately went looking through my settings and found an option that leaks tracking info by default, even in Secure Chats.

Signal, by default, routes all calls through their server. The option of using P2P connections for better latency and call quality is opt-in.

Telegram? See [1]. I found no evidence that Secure Chat calls are any different from regular calls.

1: https://www.bleepingcomputer.com/news/security/telegram-leak...


Huh? Signal uses P2P call routing by default. The "always relay" option is available for users who do not want to reveal their IP to anyone but the OWS service, and is found (defaulted to off) in Settings>Privacy>Advanced>Always Relay Calls.

Of course, if P2P is impossible your call gets routed through the server no matter what, but I don't think there's a fix for that -- there's ~always going to be a need to relay connections in case of NAT issues, firewalls, etc.


it's worse than that. secure chat in telegram is very limited. you can't save anything shared in them. for most practical applications, like talking to friends and family you don't even want to use secure chat. if it were default you'd have to turn it off most of the time


Cool. But are they still closed-source?


How out of date are you? Both server and client apps have been on GitHub for half a decade.


Signal did notoriously not update their repositories for a while when they implemented their cryptocurrency scheme into the app.

If you're basing your trust on their open source software, you should also run a client you've compiled yourself (after auditing the code, of course).


I'm aware that there was at a certain point over a year without server code updates. I'm also aware that Signal went out of their way to discourage forks of the official clients, but I don't know if that was just Moxie or does the new CEO share the same opinion.

In any case, I don't think either of those things = Signal not being open source. Not free software? Sure, who gives a shit. But the source is there, and a long pause doesn't mean much to me. While it was still happening, it certainly raised my eyebrows. Now that we know it was a pause and not a full stop? Doesn't mean a thing to me.


> Of course this can be easily mitigated by setting a “cloud password”, but I guess most people don’t do that.

I have a faint memory of being forced to set said cloud password, or at least not finding a way to skip the set password screen. So I've always assumed it wasn't entirely uncommon.


Isn't this a weakness in all SMS based verification?

If you can reroute SMS auth codes, it's game over.

It's too bad that most 2FA rely on this method (or use it as a fallback).

I don't see how it is directly related to telegram, though.


> If you can reroute SMS auth codes, it's game over.

Except it's absolutely trivial to do so: just bribe a low-ranking employee of the phone company, and it's done. This has been done thousands/millions of times, usually targeting Bitcoin holders. Just google "SIM jacking".

I absolutely loathe when companies make me use SMS as 2FA. I flat out refuse to use the service if they force SMS for account recovery, because at that point you might as well just be sending plaintext passwords over the internet, because you clearly don't care about your customers' safety.

Oh, and the amount of hoops you have to jump through to make Gmail NOT use SMS for account recovery is insane.


Telegram actually uses SMS as the first factor. You can set a password as well, but that's an optional second factor.

That said, Telegram isn't very secure at all. You can make it secure by sacrificing all kinds of conveniences (i.e. not taking part in group chats) but the platform is just too unreliable.

It's a shame their apps work so well, because the underlying protocol and security are behind all of their competitors'. From iMessage to WhatsApp and from FB Messenger to XMPP, encrypted group chats can be enabled easily. Only SMS/MMS is a less secure way to group chat.


> Oh, and the amount of hoops you have to jump through to make Gmail NOT use SMS for account recovery is insane.

Gmail is, as I understand it, just your Google account, and so simply telling Google not to use the phone for account recovery - by deleting that option if present, or not adding it when Google suggests you might want it - ought to be enough. Does that not work?


millions of times?


It's a routine operation for even small-time groups.

One million requires 10,000 contacts spread across phone operators around the whole world, each enabling 100 SIM swaps over time. I suspect this doesn't take so long to fulfill.


From what I've heard from my Russian colleagues, quite a lot of people have an alternative number they register Telegram with. The SIM card is never stored in the main phone. Also, it was popular to buy a phone number in countries like Finland and keep it alive by making small payments. Obviously you never use this number directly in Russia. There was a method to read the SMS messages online.


The only thing worse than SMS-2FA is SMS-1FA, which I believe is Telegram's default.


It's the default but you can set up a password as your second factor. You should really do that if your Telegram account is important to you.


Well, what is WhatsApp's default then?


What does WhatsApp's authentication model have to do with whether Telegram's is secure or not?

Additionally, if you take over somebody's WhatsApp account, you can send and receive new messages in their name and very visibly kick them out of their account themselves.

With Telegram, the legitimate owner stays logged in (so you can see what they write and read in addition to sending your own messages), and you get their entire chat history on top of that.


How can you not see that Telegram's bad security practice is directly related to Telegram?

Do not let SMS 2FA slide for anyone.


For some months now, it's been more the users' bad sec practice: Telegram offers virtual numbers for sale[1] with a crypto dumbery; you can buy a virtual number for 40-something dollars and then log in with their wallet. It should be safer than having the SMS code rerouted, but I can't say how much safer that way would be.

[1]: https://fragment.com/numbers?sort=price_asc&filter=sale


The biggest risk of any encrypted chat is that the op-sec of the recipient isn't as good as yours. No matter what steps you take, you can't prevent the other person from being dumb.


Smoke and mirrors, this post.

Maybe replace Telegram with SS7 and it would make more sense.


If a service uses a known insecure authentication method, how is that the fault of the authentication and not the service?


Most services use insecure authentication methods, especially in the messenger space. They're almost exclusively built around your phone number being the guiding proof of your identity.


True, but in most of them, you can't recover past conversations using SMS-OTP alone.

Authentication should be proportional to what it's protecting.


"Here he is demoing access to the #Gmail of a purported key political insider in #Kenya just days before the election."

Odd choice of title when the subject of the thread is exposing compromise of elections using vulnerabilities not necessarily native to Gmail or Telegram.


For me personally, the main argument against Telegram is that its development and operations team is physically located in Russia. This means that they can very easily be bribed and/or intimidated into any type of collaboration with their state.


You can put a password on your Telegram account.


.. can you reset that password? Using a text message?


You can but it takes 7 days and it's really noisy

Steps to pwn assuming you don't have access to their email:

1) insert phone number (victim receives code via telegram)

2) ask code via sms

3) intercept sms and login

4) click "reset password" on the 2fa prompt (victim receives message stating that they requested a password reset and they have 7 days to confirm their phone number (via sms) or their account will be reset)

5) wait 7 days

6) repeat steps 1-3

7) click "reset password"

8) confirm reset

9) yay, you're logged in as the victim; all the chat history is permanently deleted and the old sessions are invalidated, kicking out the victim


Technically yes but it takes 1 more step.

AFAIK (maybe I'm wrong) you can only reset a Telegram cloud password with a recovery email address. And you can maybe access that email address through an SMS-based recovery.

Having your Telegram account with a cloud password and the recovery email locked with a safe 2FA method "should be safe".


The other answer to your comment says that you can't, so it's possible that the feature is broken or something? But you can configure a recovery email, which is supposed to be used if you forget your password. I have not used it yet, though.


You actually can't: if you forget the password, the account remains indefinitely locked (and is deleted after 6 months of inactivity if you haven't changed the default time).


No, you can only reset it with an email (that you add when setting up the password) or using an already-active device.


There's a broader question I've been raising for a number of years now, about how major online service providers address the brownshirt threat. I'd first raised that in 2016 on the now-defunct Google+, entirely coincidentally on the anniversary of Kristallnacht:

<https://web.archive.org/web/20170604101018/https://plus.goog...>

Telegram seems to either have turned or been compromised from the start. Given transitions closer to HN's home, Twitter's userpation by an alt-right zottanaire would be another case in point. Ironically, Yonatan Zunger and Lea Kissner (to whom I'd addressed much of that post's message) were both at Twitter when Musk acquired it, though both have since left. (Zunger was G+'s chief architect; Kissner led a security team there. For all its various faults, G+ had relatively little co-option by fascists, something I had an opportunity to assess during the site's shutdown by way of the 8-million-odd Communities that existed, some with clearly white nationalist / antisemitic, or other, bents, virtually all of which had been inactive for years by the time I looked at them (late 2018 / early 2019), whilst at the same time legitimate use of terms such as "Aryan" in an Indian/Hindu context was generally active. Google+ managed to avoid the Scunthorpe Problem.)

Mediated communications, particularly the electronic / digital / AI variants, are seeming increasingly fraught. The Telegram story is a bump on that node.

A thought as I write this: Telegram's namesake, the original telegraph, was itself used to intercept and alter communications back in the day, notably news of the outcome at Waterloo, and by agents of Standard Oil.


All fine and dandy, but so far I have lost entire conversations on Signal, WhatsApp, and Matrix, while this has never happened to me on Telegram - and that is the number one thing that matters to me.


Retrieving the victim's entire chat history is also the number one thing that matters to hackers.


I'm in China, and I just see people learning about this the very hard way. There has been a lot of unusual unrest and many crackdowns recently.

SMS is definitely a weak spot, without a second thought. The state actor can easily analyse and reroute messages, then pull a massive list of names and send them straight to the gulag.


Ironic to see a thread on Twitter (of all platforms) complaining that "political activity happens on a handful of platforms [sic] makes the tooling for political manipulation really interoperable."


The guy mentioned SS7 and SMS hacks; coincidentally, I got a message from Telegram yesterday saying that they are deprecating SMS confirmation.


By default Telegram is not E2E encrypted.


Telegram's idea of binding accounts to phone numbers is really strange. Perhaps it's a measure against spam?


Now you can buy phone numbers as NFTs, no SMS authentication required.


Anyone who shares Twitter links should be kindly asked not to.


It's scary


[flagged]


"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting." - https://news.ycombinator.com/newsguidelines.html


It appears that frustration with twitter threads is too common to be interesting. Not unexpected.


That's correct. The reason we don't want such comments isn't that they're unjustified, it's that they're repetitive.


I equally can't be arsed to read threads titled like this. the titling of articles is probably the thing that bothers me most on this website. if it isn't "why [high entropy word] and [high entropy word] are [high entropy word]"[1], then it's this. if Telegram is insecure, say that. if it's secure, say that. healthy titles don't make things less clear to make you want to clarify them.


For this Twitter thread at least, you can comment "unroll" on Twitter to get one of these: https://threadreaderapp.com/thread/1625719023081082880.html


I don’t have a twitter account. I just followed a link because the linked content seems interesting.

But thank you for the tip!




