ISMG Editors: How Arrest of Telegram CEO Affects Encryption
Also: AI's Role in Cybersecurity; New Fraud Prevention Rules
Anna Delaney (annamadeline) • September 6, 2024
In the latest weekly update, Information Security Media Group editors discussed the implications of the recent arrest of Telegram's CEO in Paris for encrypted messaging services, the transformative impact of artificial intelligence in cybersecurity, and the latest regulations designed to curb fraud in electronic payments.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Tom Field, senior vice president, editorial; and Suparna Goswami, associate editor, ISMG Asia - discussed:
- Key takeaways from an interview with CISO Sam Curry of Zscaler on why AI is simultaneously the most overhyped and underhyped technology in history;
- The implications of new rules rolled out by the National Automated Clearing House Association aimed at reducing fraud, with a particular focus on combating credit push fraud;
- What the indictment of Telegram CEO Pavel Durov by the Paris Prosecutor's Office could signal about the future of encrypted communications in France and potentially other countries.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Aug. 23 edition on changing CISO disclosure rules post-SolarWinds and the Aug. 30 edition on how CrowdStrike's competitors are responding to its recent outage.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. Today, we'll explore the transformative impact of AI, the latest regulations designed to curb fraud in electronic payments and the implications of the recent arrest of Telegram's CEO in Paris for encrypted messaging services. Joining me today are Tom Field, senior vice president of editorial; Suparna Goswami, associate editor for ISMG Asia; and Tony Morbin, executive news editor for the EU. Great to see you all.
Tom Field: Thanks for having us.
Delaney: So Tom, a few weeks ago at the Black Hat conference, you had the opportunity to sit down with our very good friend Sam Curry, CISO of Zscaler, to talk about the current and future state of AI and whether it's the most or least hyped technology in history. What did you learn from that conversation?
Field: What's interesting is that I have sat with Sam multiple times. We've recorded hours of interviews about the adoption, use cases, weaponization and regulation of AI. In preparation for the Black Hat discussion, I had a conversation with him where he said to me, "I ask myself sometimes whether this is the most overhyped technology in history or if it's the least." And I said, "You know what, that is going to be the kernel of our conversation right there." So, we finally had the chance to meet at Black Hat and sit down in the ISMG Studio. That was the very first question I posed to him, and I will share with you the response that he gave me then.
Sam Curry: "That's a little leading." I stole it from I think Sir Tim Berners-Lee, who said it about the internet. It was simultaneously the most overhyped and underhyped technology at once in history. It's possible that this is a contender for the number one slot honestly. But I wouldn't say Gen AI. I would say AI, which gen AI is a part of. A lot of the attention, certainly since early 2023, was around LLMs, and ChatGPT took the world by storm, in particular the way it grew so fast. But there's much more to it than that. And there are other advances coming in. By the way, one of the reasons it did is it crossed the uncanny valley. So, people started to see it and even personify it or anthropomorphize it. That's fairly significant.
Field: What's interesting is that Sam and I have been talking about AI for years now. We did a series of roundtables, probably going back to 2016 or 2017. And guess what? Even then, every vendor booth had an AI component to it. He would go up and say, "Well, how does this work?" And they would say, "Well, AI." And he would say, "Well, no, how does it really work?" And his advice that came out of that was: double-click every time you hear or see the term AI. So, we went on the road and did a conversation about what it really meant. We were talking about predictive AI back then, and analytics, particularly in taking a look at Suparna's favorite topic of fraud and how to look at some of the data from fraud incidents. Now, gen AI has crossed the uncanny valley, which sounds like a term taken from one of the U.S. Christmas shows - something that Rudolph the Red-Nosed Reindeer might have crossed. Having gone across the uncanny valley, you've got such broad acknowledgement and use of it, and yet we are still in the early days. He likened it, most recently, to the use of drones in the Ukraine-Russia conflict, which has matured over the past two years. Go back even to World War I, when tanks were introduced to combat: initially, they were used very differently than they ultimately came to be used. We're in early days with gen AI as well in terms of trying to figure out not just what it can do to automate the manual, but how it's being weaponized by adversaries and what we can do to bolster our own cybersecurity. We don't know the true power of it yet, and that's why he talks about this being both the least and most hyped technology in history.
Delaney: We certainly don't know the true power of it. But are you getting a sense through the conversations you're having with CISOs and various experts of how AI will redefine cybersecurity in the next decade or so as it evolves?
Field: Again, early days. But what encourages me is that you've got fewer organizations now talking about needing a policy. I don't hear anybody saying we have to outright block the use of this. It's about enabling people within an organization to use AI responsibly and finding ways to not just automate the manual and do the things that we know it can do in terms of analyzing statistics and turning out reports, but to make the tools more responsive, to improve visibility into the telemetry and to speed up detection and response. We don't yet know the full power of this, but I'm encouraged that organizations are trying to get there, as opposed to trying to put a manhole cover over a tunnel that's not going to be held back.
Delaney: Well, I love the angle of this interview, and it's always fascinating to hear.
Field: There's so much more to hear. It's a terrific conversation. We have a great time talking with Sam. He always gives us some excellent imagery and some thoughtful discussion. And we've just touched the tip of the iceberg here, much like AI.
Suparna Goswami: But Tom, during the roundtables here in India, the conversation surprisingly is still around policies - we need better policies to help employees know where we need to stop, where we need not stop and where we can go ahead. So, it's good to hear that the conversations in your region have moved beyond policies. Here, it's still more or less about needing policies. During the last roundtable I attended, the discussion was more or less about needing strong policies, and organizations are still struggling to come up with them.
Field: You make a terrific point, Suparna. The conversations that I'm having in North America are likely very different from the ones you're having in Asia and different from the ones that Tony and Anna are having in the U.K. and Europe. And Tony, you're off to Dubai this week. I know this is going to be a topic of conversation, because guess what? You can't spell Dubai without AI.
Tony Morbin: Very clever. Very good. Yeah. I was just going to mention that, since I'll be talking about regulation a bit later on. Elon Musk was able to get knocked out of the California AI moderation regulations. So, the problem is we've got regulations coming in, but there's no general understanding or agreement on what regulations are appropriate.
Delaney: Huge topic, and a lot will shift in the next few years. But for now, I implore everybody to go and watch your interview, Tom.
Field: I predict we will discuss this again on this panel someday.
Delaney: Suparna, the National Automated Clearing House Association has rolled out new rules aimed at reducing fraud, with a particular focus on combating credit push fraud. Can you tell us about these changes, and what are the implications for ACH transactions - automated clearing house transactions?
Goswami: I had an in-depth conversation with Devon Marsh from Nacha, and it dove into the complexities and opportunities surrounding the payment risk management space, especially with Nacha's new rules, which are set to be implemented starting in October in a phased manner. This is important because it directly addresses the increasing threats posed by credit push fraud, like you mentioned, particularly in B2B transactions. I'm assuming all our readers are aware of credit push fraud, but it is a type of fraud where a payer's money is pushed into a fraudster's account. One of the crucial aspects that he highlighted was the growing sophistication of fraud tactics, and this includes the ones we speak about during our interviews, such as business email compromise and even AI-driven scams such as deepfakes that trick businesses into making unauthorized payments. But what stood out is that Nacha's approach to mitigating these risks isn't simply about deploying new technology. They are not solely dependent on technology. Instead, it is about encouraging financial institutions to refine and leverage their existing tools and processes, such as their AML protocols. Another key takeaway from the discussion, which I found interesting, is the phased implementation of these rules. For example, the first wave of requirements, taking effect in October, is relatively easier to achieve - it requires organizations to have processes and procedures in place. The more complex monitoring rules come into play in the next few years, which gives smaller banks breathing space to prepare for them. More or less, these rules demand very proactive detection efforts, especially given that fraudsters are evolving at an alarming pace. So, Devon said that it is clear that being prepared is not just about compliance but also about staying ahead of these growing risks and adapting swiftly. What is also relevant is the timing, because we are on the verge of significant regulatory changes, which we are seeing in the U.K., the U.S. and Australia. The idea is that institutions must move beyond checking the box for compliance and instead focus on enhancing their fraud detection capabilities. So yes, it puts a lot of emphasis on the human processes of strategic monitoring and cross-silo collaboration. As much as we talk about how AML teams should collaborate with fraud teams, they often don't, so there is a lot of emphasis on that collaboration. He also emphasized fortifying the entire payment ecosystem to create a more resilient future. One other thing is that these rules are mainly meant for the receiving banks. They want the receiving banks to have more controls to ensure that whatever amount they're receiving is not going into a mule account. That's the main aim.
Delaney: Suparna, what are the specific challenges that financial institutions have faced in trying to implement these changes? I'm thinking more of smaller institutions that might have fewer resources.
Goswami: We talked a lot about the smaller banks, which have limited resources, and how they are implementing the new Nacha rules. Many smaller banks, he said, surprisingly already have the tools and the people in place to monitor transactions. But unfortunately, they often lack the formalized, documented policies that are required under the new rules. So, he mentions that one challenge is conducting a gap analysis to identify what practices these banks currently have and where they need to improve, because nothing is documented. He additionally says that there is a misconception that implementing these rules would require very expensive technical solutions, whereas, like I said, these rules are more about having human-based processes in place, not necessarily sophisticated systems - which is something small institutions might not always realize. So yes, this is about having the documentation and procedures in place and leveraging the existing technologies.
Delaney: Nice overview of the changes there. Tony, Telegram CEO Pavel Durov has been indicted by the Paris prosecutor's office, raising significant concerns about the future of encrypted communications in France, and also beyond. So, given the implications, what are your views on this case? What broader impacts do you foresee?
Morbin: This story covers freedom of speech and law enforcement, with encryption as the enabler. When I was a journalist in the Middle East, I once saw one of my articles copied into Arabic in a local publication, and they even had the cheek to publish it with a photograph of me interviewing the Shaikh. At the time, there was no effective copyright in the country because the government in question didn't want to limit the dissemination of information, which sounds like a laudable aim - until it's information that you created and you didn't get paid for it. Then, with the advent of social media, search engines and now generative AI, traditional media are crying foul, as the tech giants declare their platforms are simply communication tools, like a telephone, and that they can't be held legally responsible for the content sent over them. At the same time, they increasingly plunder the content created by others and monetize it to the detriment of the creators, on the basis that it resides on their platform. The current state of affairs is not good for media companies. So, even though I support the use of end-to-end encryption for secure communication, I'm a proponent of free speech, and I recognize that Telegram is essentially a chat-social media mashup, you won't be surprised that I'm shedding no tears over the arrest of the Dubai-based, Russian-born Telegram CEO Pavel Durov - reported on ISMG by my colleague Akshaya Asokan. And that's because I also support the rule of law and democracy. Durov left Russia in 2014 after refusing to shut down opposition groups on the VK social media network that he founded when he was 22. But Durov isn't some attractive poster boy for free speech. His Telegram service now has nearly a billion users worldwide, ranging from pro-democracy campaigners to organized crime and far-right extremists. It's Telegram's absence of moderation that led Durov to be accused of facilitating illicit transactions, child sexual abuse and fraud. He also refused to communicate information to authorities and used encryption without having obtained the requisite permission required under France's Online Communications Freedom Act. Now, free speech absolutists will definitely disagree with me, but freedom of speech is not absolute: incitement to violence, especially insurrection, is pretty universally banned, and there are very few places that don't ban child abuse images. So, the principle that restrictions should apply where it can be shown that their absence could cause harm is pretty widely accepted. However, there are others who will argue that restrictions violate individual privacy rights and quash free speech, particularly as the decision on where the line is drawn, and who draws it, will be different in different jurisdictions, as opposed to a tech CEO deciding. Going to extremes, morningstar.com said that if Telegram were forced to compromise its privacy policies as a result of this legal action, it would lead to a domino effect of increased surveillance and censorship across various platforms, and if these platforms are compromised, it'll have far-reaching consequences for political dissent, journalism and civil liberties worldwide. Then there's the cryptocurrency community, where Telegram has become the go-to platform; Durov's arrest has fueled fears there about government overreach and a potential crackdown on privacy-oriented technologies that are often essential for crypto operations.
Already, we've seen Chris Pavlovski, the Canadian founder and CEO of video-sharing platform Rumble, announce his departure from Europe following Durov's arrest. Online democracy advocate Eli Pariser, in a TED talk on freedom and responsibility, noted how the arrest of the CEO of a global tech company is unprecedented. But along with Brazil's clampdown on X, it is part of hashing out who controls tech spaces - a tech CEO, governments or societies. For a start, those profiting from content must take some responsibility for it. I see Durov's arrest as being not about free speech but about breaking the law - complicity in allowing criminal activity and not cooperating with law enforcement. But how moderation of truly encrypted services can be achieved presents many technical and ethical challenges, as encryption itself is effectively a dual-use technology that can be used for good or ill. French law criminalizes any use of encryption technology that could obstruct law enforcement, and permission is required to operate encryption in the country. The European Commission has distanced itself from the Durov case, saying that it's purely a criminal investigation at the national level and has nothing to do with the Digital Services Act. Ironically, many in the security community have criticized Telegram for touting encryption while simultaneously making it difficult to access, as well as for being susceptible to fairly efficient algorithm substitution attacks. Then there are the other end-to-end encrypted services, such as WhatsApp and Signal. These platforms ensure compliance under French law but could still potentially be targeted, since the application of end-to-end encryption also means the content can't be moderated. The big difference from Telegram is that WhatsApp, Signal and Apple's iMessage are built with end-to-end encryption to prevent anyone other than the intended recipient from reading the shared content on the services - and that includes the companies that run the platforms, as well as any law enforcement that might require their help. By default, Telegram conversations are encrypted only insofar as they can't be read by someone connected to your Wi-Fi network. Unless you manually enable the opt-in secret chat, every message - including every group chat message and comment on your broadcast channels - is encrypted only during transit and then stored on Telegram's servers, so effectively in the clear for Telegram itself. What that means in most cases is that Telegram is choosing not to cooperate with law enforcement rather than being unable to help. In fact, Durov has said that Telegram's commitment to privacy is more important than fears of bad things happening, such as terrorism. Now, the push for law enforcement access to encrypted traffic isn't new. It comes up regularly here in the U.K., where the police want back doors, while cybersecurity experts at the NCSC advise against them on the basis that if they exist, they'll eventually fall into the wrong hands, impacting both privacy and security. Today, there are very few authorities demanding that end-to-end encryption be outlawed; regulators and critics are instead calling for approaches such as client-side scanning to try to police messaging services another way. And it's not just law enforcement that has problems with encryption - the companies that use encryption themselves have concerns about their ability to inspect encrypted traffic.
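To make the distinction concrete, here is a minimal sketch of the end-to-end model Tony describes, using the open-source PyNaCl library purely for illustration (the library choice and the "relay_server_storage" stand-in are assumptions, not how Telegram, WhatsApp or Signal are actually implemented). The point is simply that with end-to-end encryption, a relaying platform only ever holds ciphertext it cannot read, whereas a server that stores messages it can decrypt is choosing whether or not to hand them over.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only - not the protocol used by Telegram, WhatsApp or Signal.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the private keys
# never leave the device, so the platform operator never holds them.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 10")

# The relaying platform only ever sees and stores ciphertext...
relay_server_storage = [ciphertext]

# ...and only Bob, holding his private key, can recover the plaintext.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(relay_server_storage[0]) == b"meet at 10"
```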
A recent Gigamon survey reports that most malware hides in the SSL and TLS encryption used by secure websites and that 65% of lateral, east-west network traffic is encrypted. Some organizations simply let this encrypted traffic flow without inspection, others decrypt certain applications, and others use technologies designed to preemptively get plaintext visibility into encrypted data. But ultimately, encryption has to provide secrecy to be effective, and the extent to which it can be circumvented is the extent to which it's weakened. It may be that we want to weaken it for certain purposes, such as policing, and strengthen it for others, such as defense, but in both cases, criminals will exploit its weaknesses and utilize its strengths. We can't uninvent strong encryption, even if we wanted to, and while we won't stop misuse either, we can regulate - which then requires enforcement, including, where necessary, arrests. Of course, there are unintended consequences of such action. Interestingly, Reuben Kirkham, director of the Free Speech Union of Australia, said, "If anything, people should be even more worried about platforms where their leaders are not being arrested, as they will likely be quietly doing what the government wishes." The arrest of Pavel Durov has reportedly made Telegram the most downloaded app in France.
Delaney: Fantastic perspective, Tony. I know you can't generalize here, but do you think this indictment could set a global precedent that might pressure other tech companies to rethink their encryption policies?
Morbin: Absolutely. When you arrest the CEO, it makes people think, and it works on two levels. First, what are other countries going to do? Because it's not just France. There are loads of countries that will take the same view of his activities - some of them worried about crime and fraud in their territory, others worried about people who oppose the government, such as pro-democracy activists. So, you've got that, and then you've got the pressure on other big services. In Europe, the EU says that platforms with more than 45 million monthly active users count as very large platforms, and that's where Twitter and Meta come into play. But Telegram says it's under that threshold, with 41 million active users in the EU, so it doesn't get included. There's no doubt, though, that people will be looking at this, and, as I say, on a number of scores. One is content moderation. Second, how do you do that? What's your attitude toward encryption? And even though Telegram wasn't properly encrypted for most of its messages, what about end-to-end encryption, where the messages are out of the control of the social media platform?
Delaney: We'll have to see what shifts there are in user behavior as well - where users flock to now. But thank you so much, Tony. That was a great topic. Excellent work, team, and excellent commentary all round.
Goswami: Thank you.
Field: Until we do this again.
Delaney: Until next time. Thank you so much for watching.