Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. Today, we'll be covering the U.S. investigation into Chinese hackers targeting telecom wiretap systems, the catastrophic risks of AI and the vetoed safety bill in the U.S., and new global ransomware response guidance. The brilliant minds joining me today are Mathew Schwartz, executive editor - DataBreachToday and Europe; Rashmi Ramesh, assistant editor - global news desk; and Tony Morbin, executive news editor for the EU. Great to see you all.

Tony Morbin: Good to be here.

Rashmi Ramesh: Good to be here.

Mathew Schwartz: Yeah. Thanks, Anna.

Anna Delaney: Rashmi, you're in the desert. That's beautiful.

Rashmi Ramesh: Yeah. This is actually from a summit we hosted in Rajasthan last weekend. Rajasthan is a desert state in India. The summit was an offsite one, over a couple of days - two to three days. And we had about 100 CISOs, CIOs and security practitioners, all of whom were in the resort, looking at the sand dunes that are behind me. So, it was brilliant, especially because we actually got to interact and connect with people during and outside of work, instead of just, you know, rushing through an agenda - and that's the feedback we got from the attendees as well. So, it was excellent.

Anna Delaney: Yeah, it makes such a difference. Did you actually ride a camel?

Rashmi Ramesh: No, I didn't, but a lot of people did. I think about 50 of them did at some point during the event.

Anna Delaney: Excellent. Not an experience I relish. But anyway. Tony, what a sight.

Tony Morbin: Yeah, I thought I'd be really cheerful and, you know, compare ransomware operators to kidnappers - which is not very nice. But then, nor is ransomware.

Anna Delaney: Mat, gorgeous view. We've seen this before in the Editors' Panel, but this is an autumnal shade.

Mathew Schwartz: Yes, it's ripped from the headlines - from earlier this week, a tranquil moment at, as you say, a site you've seen before: the local park here in Dundee, Scotland, called Magdalen Green.

Anna Delaney: Beautiful.
And here's a view of some vineyards near the Black Forest in Germany, where I was a few weeks ago enjoying some of the excellent local wines. Take me back, I say. But for now, Mat, you start us off. This week, you reported that the U.S. government is investigating Chinese state-sponsored hackers for breaching major telecom providers' lawful wiretap systems in an espionage operation - not for the first time, of course, targeting U.S. surveillance efforts. So, what key details should we be aware of?

Mathew Schwartz: Well, there's some irony here, because, as you say, this espionage operation - which has been attributed to China not by the U.S. government but by researchers - looks like it has hacked into lawful wiretap equipment, or intercepts. So, for ages and ages in the U.S., telecommunications providers have had to comply with court-ordered wiretaps, what's known as lawful intercept - as opposed to illegal intercept, I suppose. But, it's lawful because the court said you must do this. So, the scale at which this sort of thing can happen, I think, is a bit eye-watering. You have seen a lot of reports in the past talking about the almost industrialized nature of law enforcement taps, and questions about whether this is violating people's privacy rights - not that you really have those per se in the U.S. to the extent that you do in Europe, of course - but there have been some big questions. And crashing that party, as it were, appears to be a bunch of Chinese spies who are asking, "Well, we'd like to know some of this information as well. What's the easiest way to do it? Should we infect endpoints? No, why don't we just hack into the system they've built for police and other law enforcement agencies to use to keep tabs on people." So, the supposition with this particular espionage operation is that Chinese spies are interested in Chinese operations that the U.S. government might be attempting to probe. So basically, what does the government know about what China is trying to do? That's the guess. But, it looks like they've also vacuumed up a lot of more general information as well. It's not always clear, of course, what their goal will be, but here's what we know so far.
Very recently, the Wall Street Journal started reporting that there was a national security probe involving some very large service or broadband providers. And in recent days, that reporting has been expanded to say that Verizon Communications, AT&T and Lumen Technologies are among - i.e., there might be more - the broadband providers breached in this apparent espionage operation that I've just been discussing, tied to China - specifically, tied to a group that Microsoft has codenamed Salt Typhoon. Neither of those words means anything, except that Typhoon is what Microsoft uses to refer to Chinese or suspected Chinese groups. This one looks like it's part of the Ministry of State Security, which is China's foreign intelligence agency. So, if all of this sounds familiar - as you alluded to in your intro, Anna - China's been hacking the U.S. and its allies for a long time. This is just the latest in a very, very, very long series of attempted or effective espionage operations. Things appear to have been getting worse. FBI Director Christopher Wray, earlier this year at a conference, said that the threat posed by the Chinese government is massive. He characterized China's hacking program as being larger than that of every other major nation combined. So basically, China's got an army of individuals believed to number about 600,000 people, and that includes not just employees but also private contractors. And we've seen leaks, earlier this year and before that even, of Chinese groups that appear to be hired and run by the espionage agencies but are ostensibly private firms. So, a huge industrialized hacking operation lately, as I said, including this intrusion into what looks like lawful intercept. So, what else does this portend? Well, like I said, there's the lawful intercept side of things. We've also seen a lot of other Chinese hacking come to light this year. For example, in recent months, there was a big hack of Versa Networks' Versa Director software, used by service providers to help provision their services. This was also tied to China. So, they've had a long-standing focus on service providers, I think because if you can break into the likes of Verizon and AT&T, you can get a view of so much of the internet traffic in the United States.
We've also been seeing attempts - successful ones - to hack into hardware. Volt Typhoon, another group codenamed by Microsoft, has been tied to the targeting of outdated routers often used in homes and small businesses - home routers, some of which have gotten to the point where they're no longer supported, and yet they're still great launching points for hackers who want to hit other parts of the United States. And so you had the FBI, working with service providers, forcibly retiring some of those routers. One last thing to mention: Last month, Lumen Technologies' threat intelligence group was warning about a modified version of the Mirai malware - the Internet of Things-infecting malware that came out years ago now. It looked like it had been modified by, again, Chinese intelligence to exploit a number of devices for years now, hitting a peak in July of last year of 60,000 infected devices. It's less now, but these devices come and go, and it still looks like the botnet that this espionage group is using is composed of tens of thousands of endpoints. So, just massive hacking going on here as China attempts to run what appear to be a number of espionage operations.

Anna Delaney: Do we know, or have there been any signs, that the stolen wiretapped communications or other data are being used for further espionage or intelligence gathering?

Mathew Schwartz: Too soon to say. That's a great question. I mean, they're running these operations for a reason, obviously, and the use of espionage is to give government planners and diplomats information about what their adversaries - or sometimes allies, of course - are thinking and what they're planning to do. So, no is the short answer. We don't know what exactly has been stolen. We don't know whose communications may have been intercepted. As I mentioned, the U.S. government hasn't attributed these attacks. And the government typically won't attribute the attacks unless there is a politically expedient reason to do so - some geopolitical point they're trying to make or pressure they're trying to bring to bear. Until that happens, we might not get many more details about what exactly went down here.
Anna Delaney: And Mat, just a final question - given the scale of this breach, what immediate steps do you think CISOs and their security teams, especially in telecoms as critical infrastructure, should be taking right now to assess their defenses and really prevent this happening to them?

Mathew Schwartz: I think this needs to spark some conversations between service providers and the government about expectations for this lawful intercept type of stuff. Whether anything comes of this remains to be seen, because law enforcement's appetite for using court orders to get these communications seems voracious. And so, if you're going to give law enforcement an easy way to access the communications of so many individuals inside the States, what mechanism do you really have to secure that against an intelligence agency rocking up, looking like one of these law enforcement agencies and getting all-you-can-eat access to Americans' communications? There are some big unanswered questions here. Hopefully, we'll see Congress look into this and maybe take a stand and demand greater safeguards. Whether those discussions ever come to light publicly, though - I am not optimistic that we'll ever hear much detail.

Tony Morbin: The other thing this makes me think of is that it kind of adds to the argument that if you have backdoors for law enforcement, they're not always going to stay with law enforcement. And I mean, I still can't get my head around the idea of 600,000 hackers for China. I think they'd find a way.

Mathew Schwartz: Absolutely. And I mean, this has huge relevance for the crypto debate. Governments keep getting told by computer scientists that you can either have strong, unbreakable encryption or everyone can listen in, but you still have Western governments saying we want backdoors - and that means weak encryption - and this is a case study in exactly what will happen with weak systems.

Anna Delaney: Well, excellent analysis. Thank you so much. Speaking of evolving threats, Rashmi, one of the pressing discussions right now is around the catastrophic risks posed by AI. And I know there's lots of debate on what catastrophic really means.
Who gets to define these risks, and how immediate or realistic are they? What are your thoughts? I know it's complex, but it's a critical area. I'm keen to hear what you think.

Rashmi Ramesh: Yeah, you're right about that. You know, we've been talking about the catastrophic risks of AI for quite some time now. I don't know if you recall, but there was the OpenAI Superalignment team, which was about mitigating AI's catastrophic risk. Stephen Hawking warned about it. But, you know, despite all of this noise and so many attempts, we have not really seen much regulation around it become law, or even had companies or organizations take concrete steps to address it. And I spoke to some experts to figure out why that may be the case. One of the interesting aspects of those conversations was that AI's catastrophic risk has broad themes, but it does not actually have a definition. Actually, that's not completely right: AI's catastrophic risk has too many definitions, and the explanation changes depending on who you ask. And why is that? One explanation is that the technology and its use cases are complex and its behavior is unpredictable. So, the base definition is that catastrophic risk is anything that causes a failure of the system. But, these risks depend on the type of system in question and who it affects. The impact of a failure of AI systems can range from, say, endangering civilization and affecting humanity to, you know, more localized risks, like ones that impact enterprise customers. So, how do you comprehensively legislate or curb the risk of something whose definition itself is so shaky? It takes attacking the problem from dozens of directions, and that's possibly time-consuming - and possibly why we don't have any concrete legislation yet. It's no excuse though, because, I mean, there's the EU AI Act for inspiration. Its deployment may take a few years, but you need to start somewhere. And this also sort of brings up a tangential discussion about how realistic these risks are and how likely they are to affect us in the immediate future.
Surprisingly or not surprisingly, most of the AI and security experts I spoke to said that the extinction-level risks that are part of the catastrophic risk conversation are far-fetched, and that we should be focusing on the risks that are already in motion, you know - like deepfakes, AI fraud and malware development - not to mention that every old trick in the book now has an AI upgrade. And that's what we should be focusing on.

Anna Delaney: Rashmi, how do you think legislation like the AI safety bill that was recently vetoed in California fits into balancing this push for innovation with the need to manage these catastrophic risks?

Rashmi Ramesh: ... are they harmful? Who is liable? And what are the consequences for anybody who's prioritizing profit over safety? And you know, basically, it gives you a sense of what is okay and what is not. And more importantly than anything else, it's not left up to the AI-developing companies to figure out, you know, what the rules are and which ones they can choose to follow and ignore.

Anna Delaney: Excellent. Well, lots to think about there. Thank you, Rashmi. Shifting gears slightly: Tony, there's been new ransomware guidance released as part of the International Counter Ransomware Initiative, pushing for faster reporting, expert involvement and discouraging ransom payments. So Tony, how do you see this changing how organizations tackle ransomware?

Tony Morbin: Well yeah, based on, you know, the image behind me there - essentially, ransomware groups stole their business model from kidnappers. So, in many ways, they should be treated the same, except, of course, they are more prevalent, they're online, and they're far more successful. I mean, ransomware payments exceeded a billion dollars in 2023, according to Chainalysis. Now, conventional wisdom says that ransomware should never be paid, as it not only funds the criminals, it fuels further crime, and it also identifies you as a payer. So, you're more likely to be attacked again.
And then comes the issue of whether you actually get access to your data, because the attackers' decrypters often don't work - assuming they provide them. Plus, whatever they say, the likelihood is that they'll not hold up their end: they will hold on to your data, and then they'll resell it, whether you pay or not. So yeah, criminal behavior. But what do you expect from criminals? The obvious conclusion is that we should ban the paying of ransoms and kill the business model dead. If no one paid, they'd stop. Now, while that works in theory, the collateral damage could be huge. And as with kidnappers, when it's your child that's held - or, in ransomware cases, your patients or organizations that are at risk - and paying appears to be the only way to save them, the pressure to pay can be immense. I think everybody really understands this. So, with the exception of banning payments to sanctioned entities, there's no real proposed ban on the payment of ransoms, and the guidance given - which we're going to get into - is not binding. Nonetheless, the advice is: don't be overhasty to pay. Consider your options and the likely outcomes. Now, as you mentioned, making these options clearer is the new voluntary ransomware guidance that was released during the International Counter Ransomware Initiative 2024 meeting at the White House this month. The latest guidance, produced by the U.K. and Singapore governments and supported by 39 countries and global insurance bodies, aims to reduce disruption and cost to businesses, reduce the number of ransoms paid by ransomware victims, and reduce the size of the ransoms when victims do choose to pay.

Anna Delaney: But Tony, what are the initial recommendations to guide organizations in the event of an attack?

Tony Morbin: Well, among the recommendations is the call for victims to report attacks to law enforcement, cyber insurers and other outside firms that can help. And this involvement of more advisors in deciding whether to pay a ransom includes reviewing what the legal situation is in the country, and whether sanctions apply.
Now, catching and jailing those responsible and seizing and disrupting their infrastructure might not be the victim's top priority during an attack, but obviously, cooperation in achieving this outcome is going to be to everyone's benefit. Victims are also being reminded that paying the ransom doesn't guarantee access to your devices or data, and therefore they're told that the decision to pay should be made only after making sure that it's likely to change the outcome of the incident and complies with local regulatory requirements. Ransomware victims are being encouraged to record their incident response decisions, with ransomware mitigation data captured for post-incident reviews, and this kind of due diligence - collecting and analyzing information on the potential harms - is recommended to be part of every organization's incident response and recovery plans. Organizations also need to make sure that they know the regulatory penalties that can result from a data breach in their sector. Among the guidance that can be given by these cyber incident response companies: they can let you know if there's a publicly available decrypter that can unlock your systems - these can be obtained from organizations such as No More Ransom. Plus, negotiators familiar with ransomware operators will typically negotiate down, considerably, the actual ransom that you have to pay. In addition, the negotiation process itself, quite deliberately, is extended so it acts as a delay to avoid hasty decisions. It gives the organizations impacted time to identify the extent of the problem and to quantify what data or assets have been stolen or affected, the impact on business operations, customers, employees and the supply chain, and the likelihood of further data exfiltration. It also provides the time for a more empirical and less emotional review of how practical it would be to continue operations if the ransom isn't paid. How good are your backups? Have they been compromised as well? Are pen and paper or other workarounds even an option? And have you identified where the attackers are in your system and ejected them? Plus, have you identified how they got in and closed that door, so they don't just walk back through the same route?
Do you have cyber insurance? Does it cover the full cost of what's happened, and if not, how much is covered? Could you afford to pay the ransom demanded if you chose to, or is starting again going to be a cheaper option? And if you do end up paying the ransom, who makes that decision? Are you paying people who have the ability to unlock your systems, or another criminal in the chain? And have you got access to cryptocurrency - the most likely form of payment - and how secure is that? And then, of course, after the event, where data has been stolen, you need to evaluate what the risks are to life, personal data or national security if the data were published, and seek to verify that any claims about the nature and the amount of data stolen are true. And finally, once again, assess that the initial breach and the associated vulnerabilities have been remediated.

Anna Delaney: Excellent overview, and many questions to be asked. Mat, would love your thoughts - does this encourage you, as someone who has covered ransomware at great length for many years?

Mathew Schwartz: Yeah, no, definitely. I mean, one of the things Tony mentioned was there might be a public decrypter if you work with firms. And that's the advice - always reach out to experts. I know in the U.S., the FBI is often a great starting point; abroad, other law enforcement agencies are, likewise, great starting points. Incident response firms - a great starting point. If you have cyber insurance, they'll be the first point of call if you've suffered an incident like this. But besides the public stuff, which you can find, there's the private stuff, and by reaching out to law enforcement and incident response firms as well, sometimes they can clue you into decrypters that the bad guys don't know about, or workarounds, vulnerabilities they've found. So, it isn't always a black-and-white situation of either I pay to get it back or I've got to restore. Sometimes, there's in-between sorts of stuff.
Again, like Tony was saying, think about whether you need to pay - don't pay just as a "just in case." I think in the U.S., we've seen great strides on that front, again with the FBI getting on-site very quickly with a lot of these breaches and saying to the board, saying to the CEO, "Just wait a moment. Let's see what we can do without you paying." I don't think the rest of the world necessarily has been that proactive. Hopefully though, with this sort of guidance that we're seeing, it'll cause people to stop and reflect, and hopefully pay less.

Anna Delaney: Absolutely. Thanks both. And finally, and just for fun, imagine AI is fully integrated into society 50 years from now. What kind of cultural or social changes do you think would emerge from AI being part of everyday life? So, how would things like work, relationships or even creativity evolve in such a future?

Tony Morbin: I'll give you a big dystopian view, I'm afraid. You know, I mean, initially, yeah, social and health inequalities can be evened out globally, and the need to work can be reduced. But, my fear is that knowledge is no longer going to be prized, as AI stores and recalls all data. Human understanding becomes less valued as AI can demonstrate a more thorough analysis of the facts. So, there are going to be fewer people who are motivated to pursue deep understanding and intellectual growth. As a result, we revert to the medieval world, where extremely sociopathic rulers use disinformation to seize control, supported by a small educated elite controlling AI technology in the ruthless pursuit of power. And then the rest of us just get fed so much social media-generated content - you know, likes from chatbots - and, you know, basically kept down. So, I do understand Rashmi's warning about the dangers of AI itself, and I don't dismiss those, but my worry is the dangers of what humans will do with it.

Anna Delaney: Back to the Dark Ages. So, some argue it will drive up - it will encourage and inspire - creativity, but maybe our brains will ...

Tony Morbin: I'm just throwing that in there really as a warning, so as to say that I do think we should regulate.

Anna Delaney: Got ya. Rashmi?

Rashmi Ramesh: Well, I think it will be ... the world would possibly be more efficient, more secure physically and virtually, all of which could possibly result in life being a little less stressful.
But, you know, with shadow surveillance and smart devices at every corner, I also think we'd lose the ability to forget or be forgotten, or to have anonymity of any sort. Is that a good thing? A bad thing? We will cross that bridge when we get there - if we get there.

Mathew Schwartz: Yeah, wow. Great points there. There's so much detritus in everyday life that really doesn't need to be recorded and analyzed. And on that front, I would love it if it was a productivity-enhancing tool. I mean, me, personally, I tend to be late to everything, despite my best efforts. And so if AI could - I don't know, maybe with electrodes or something hooked up to my brain - but if AI could be like, "Look, it's really time for you to go now. You tend to run 10 minutes late, so I've blocked off time. There's a do-not-disturb on. Nobody thinks you're available. Leave now, walk out the door." You know, if there was something that could just maybe hack my life a little bit on the productivity and the organizational front, that might be helpful.

Anna Delaney: And your AI assistant would tell whoever you're meeting that you're running over, yeah.

Mathew Schwartz: Yeah, the strong-arm approach then. Like, "Yes, Mr. Schwartz is not available right now." You know.

Anna Delaney: The AI assistant does all the dirty work for you. So, I was thinking maybe AI time travel. You know, imagine AI creating these immersive simulations of past societies - you wouldn't just observe history, you'd go back to those historical events, even altering small details for a more personalized journey. So, that was one of them. Digital deities, I think, is another: you're going to see this maybe obsession or a cult following, where AI could evolve into virtual spiritual guides. We've seen this happen before in different guises - you're already seeing the tech titans in California become these high priests. So, these spiritual guides could offer individualized paths to enlightenment, maybe ...

Mathew Schwartz: And redemption.

Anna Delaney: Yeah, personal beliefs. Then, I also think - because this is a long-time obsession of humans - immortal living and permanent youth. So, there'll be a drive on that front.
So, with AI advancements, this concept of aging and death could be transformed, allowing people to experience immortality, maybe even maintain permanent youth.

Tony Morbin: Unfortunately, it might not be the immortality you thought. I mean, you were saying about creating a virtual world. If we were able to create a realistic virtual world, you then get the situation where simulations are more numerous than reality, and therefore, we're more likely to actually be a simulation than reality.

Anna Delaney: What is real?

Tony Morbin: Statistically, we are more likely to be a simulation than real.

Anna Delaney: Heavy stuff this week. Hopefully, it won't happen anytime soon - let's just say that. Thank you so much, everybody. You've been brilliant. Excellent insights. And as always, great to see you.

Mathew Schwartz: Thanks for having us.

Rashmi Ramesh: Thanks, Anna.

Tony Morbin: Thank you!

Anna Delaney: Thanks so much for watching. Until next time.