Thanks for signing up to view this webinar on-demand. You can watch the Cloud Security Alliance's recording below or interact with the transcript. For more great webinars from JupiterOne, click here.
Security Leaders Debate: Cybersecurity Predictions for 2023: this mp4 video file was automatically transcribed by Sonix with the best speech-to-text algorithms. This transcript may contain errors.
Sounil Yu:
Welcome, everyone, to Security Leaders Debate. We have a great panel assembled for you today, where we're going to debate some interesting predictions we have for 2023. I'm the CISO here at JupiterOne, and I'd like to go ahead and introduce my panel. Why don't we start with Kelly.
Kelly Shortridge:
Hi, I'm Kelly Shortridge. I'm a senior principal product technologist at Fastly, and also co-author of the forthcoming book on Security Chaos Engineering through O'Reilly Media.
Sounil Yu:
And when's that coming out, by the way?
Kelly Shortridge:
That is coming out spring 2023, you can preorder it already.
Sounil Yu:
Awesome. Okay. Next up, Claude.
Claude Mandy:
Thanks. I'm Claude Mandy. I'm chief evangelist for data security at Symmetry Systems, where we have the kind of big, ambitious task of securing the world's data. So my job as chief evangelist is to talk about how great we are and how great data security is.
Sounil Yu:
And you're also a former Gartner analyst, too, right? So you made a bunch of predictions during your time there, too, right?
Claude Mandy:
I have, indeed. So it's not unusual for me to be predicting the future.
Sounil Yu:
All right. And last but not least, Fernando.
Fernando Montenegro:
Hi, Fernando Montenegro here. I'm a senior principal analyst with Omdia in the cybersecurity practice. And kind of like Claude, we do work on those predictions, right? I've been working as an analyst for a few years. Prior to that, I was in a number of vendor and technology roles in enterprise security for the past, I don't know, 25, 30 years, give or take. So yes, I've been around the block for a while.
Sounil Yu:
Well, it's great to have you all here. So for the audience, the format of this webinar is that each of us is going to make a prediction. And the spicier it is, the more fun we'll have in contesting these predictions. I've asked each of the panelists to come forward with things that should really challenge our thinking and our assumptions about what the future might hold for us. We're going to go through one of those predictions each. You'll see a couple of other ones alongside, and if we have time, we may revisit some of the others listed here. But hopefully you enjoy the conversation, and let's go. So we'll start first with Fernando. Why don't you go ahead and start with your top prediction.
Fernando Montenegro:
Absolutely. Thank you for the opportunity. So what we listed as the top prediction, and I'm trying to phrase it politely, is that cloud security conversations will evolve well beyond CSPM. For those who just love the acronym soup we have in the industry, CSPM stands for Cloud Security Posture Management, which is basically the approach of, hey, I'm going to inspect your running cloud configuration, and I'm going to report on bad things. That is pretty much how we started doing cloud security for a lot of things. But the prediction for 2023 is that we will continue on an accelerated path well beyond that.
Sounil Yu:
Absolutely. But first, what do we mean by posture?
Fernando Montenegro:
Right. So imagine your typical environment, the idea that you're setting up an environment in a cloud provider. I'm going to mention AWS because that's the one I'm a little more familiar with, right? So you set up your VPC, you set up your EC2 instances, you set up your application load balancer, you set up your database, and so on and so forth. You use default policies for some of these, and you configure things, you configure storage buckets and so on. And all of those have configuration associated with them, right? And when we started cloud security, we're talking 2018, give or take, we started with some of this: okay, I'm going to have something that looks at that configuration and then tells me, oh my goodness, this is bad, because this particular configuration is going to allow bad things to happen.
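The configuration inspection Fernando describes can be sketched in a few lines of Python. The resource shapes and the two rules below are purely illustrative, not any real CSPM product's checks:

```python
# Illustrative CSPM-style scan: walk an inventory of resource configurations
# and flag settings that "allow bad things to happen". The resource fields
# and rules here are hypothetical examples, not a vendor's actual rule set.

def find_misconfigurations(resources):
    """Return (resource_name, issue) pairs for risky settings."""
    findings = []
    for res in resources:
        # Rule 1: storage buckets should not be publicly readable
        if res.get("type") == "storage_bucket" and res.get("public_read"):
            findings.append((res["name"], "bucket allows public read"))
        # Rule 2: SSH should not be open to the whole internet
        if res.get("type") == "security_group":
            for rule in res.get("ingress", []):
                if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                    findings.append((res["name"], "SSH open to the world"))
    return findings

inventory = [
    {"type": "storage_bucket", "name": "logs", "public_read": True},
    {"type": "security_group", "name": "web-sg",
     "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
    {"type": "storage_bucket", "name": "backups", "public_read": False},
]

for name, issue in find_misconfigurations(inventory):
    print(f"{name}: {issue}")
```

Real CSPM tools pull this inventory from the cloud provider's APIs and ship with hundreds of such rules, but the pattern is the same: enumerate configuration, match it against known-bad settings, report.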
Sounil Yu:
So I would argue, and I'm not sure "traditional" even means anything nowadays, that security posture was initially configuration oriented. But as it is with most marketing teams, they expand the scope of what that means to everything under the sun. Security posture originally should have been just about configuration, but now it's about everything. So what do you mean when you say it's going to evolve well beyond CSPM?
Fernando Montenegro:
That's where, and it's good that we're having this conversation, I would argue that CSPM still refers primarily to configuration. We've created all these other acronyms to discuss other things: we've created CWPP, Cloud Workload Protection Platform; we've created CIEM, Cloud Infrastructure Entitlement Management.
Sounil Yu:
And so let me ask, because I think Symmetry is DSPM, right?
Claude Mandy:
You know, I'm going to agree with this, right? Because we are the evolution beyond CSPM; we're in data security posture management. But that's just the term that's been applied to us. What we actually do beyond that is provide transparency and visibility into how you're using data. Is that posture management just configuration? No. Is it more? Possibly. So I think that's the thinking here. I actually don't think you can have security posture management of just one element and use it well in an organization. You have to look at the whole security posture. You can start somewhere and then move out from there, and for us, we start at the data and move out from there. So I kind of agree that it's going to evolve well beyond CSPM, because that's what all CISOs are talking about now. You could argue the terms and all those kinds of things, but it has to move beyond that is, I think, what I'm hearing from people.
Fernando Montenegro:
We have some data. Go ahead, Kelly.
Sounil Yu:
Kelly.
Kelly Shortridge:
I think it should absolutely evolve beyond it. The beauty of posture management as a term, though, is that everything can fit into it, which means if you are, let's say, a research analyst firm or a vendor, you can basically say this is the one solution that's going to heal your cloud, because posture is everything, right? So I'm a little skeptical that the industry actually will evolve beyond it. I think some of the savvier security leaders will evolve beyond it, but I think it's a little aspirational; there's just too much entrenched interest in this very, very fluffy category.
Sounil Yu:
Yeah, I think the term will expand to encompass everything rather than the actual conversation evolving, because it's going to get confusing for us: what exactly do we mean by security posture management? To Kelly's point, the term itself is morphing, and in fact I think it's causing a lot of confusion in the marketplace. What exactly do you mean by that?
Fernando Montenegro:
And that's the point. I would argue that the term cloud security posture management, CSPM, is relatively fixed. Where we are evolving, like Claude said, is that we now have ASPM, application security posture management, and we have DSPM, data security posture management. And the point I would make, let's go back and talk about my grey hair here: if we go back to 1975, right, that's when the microcomputer appears, the Apple II and whatnot. If you go to the Google Ngram Viewer and look at word frequencies in books, and you search for the term microcomputer from around 1975 onward up to 2019, you see a beautiful curve up, and then it dies off. Right? Why did the term die? It's not as if we stopped using microcomputers, right? We still, something...
Sounil Yu:
Better came.
Fernando Montenegro:
Along. Yeah, well, just on my desk alone I can count one, two, three, four, five, right? So we still use microcomputers, but we don't refer to them as microcomputers anymore. It's just computers. So what I'm thinking is that the cloud security term is going to evolve in that way. If I'm someone in charge of data security, my data security platform, whatever that is, is going to include supporting cloud environments. If I'm an identity management vendor, right, and I support user identities inside my organization, Active Directory or Azure AD or what have you, I'm incorporating cloud security, or cloud identities, into that. So what I mean is that the cloud security term over the next little while gets, well, it's not so much an umbrella, it gets emptied out by the other areas. Right?
Sounil Yu:
So you think CSPM actually goes away and is replaced by DSPM, ASPM...
Fernando Montenegro:
I think CSPM as.
Sounil Yu:
ASPM, USPM, there's all these. And by the way, I'm starting to see all these vendors emerge with basically fill-in-the-blank SPM. So, yeah.
Fernando Montenegro:
Yeah, it's.
Claude Mandy:
Posture management all the things.
Sounil Yu:
Yeah.
Fernando Montenegro:
The other one that sums it up, right, is DevSecOps. It's like, yeah, Kelly and I have worked in ops, right? And in security, Kelly, but...
Kelly Shortridge:
Oh yeah.
Fernando Montenegro:
Yeah. But it's super interesting, because what this means is not a diminishing of the importance of cloud security. It's just that it used to be that you needed something very specific to support security use cases in the cloud. What we've seen over the years, and what I think we'll see more of in 2023, is this approach where other areas of security, whether it be application security, data security, network security, what have you, start to better support cloud security use cases, so that the term cloud security itself moves away, if you will. But that's why I say it's evolving well beyond. I know I'm taking too much time, but I have one data point. We did some work earlier in the summer where we asked people, and anybody who works with data and statistics has George Box as their patron saint, right? He said that all models are wrong, but some are useful. We did some work looking into what people are concerned about with cloud security. In general, cost was a high area: cloud security tools are too costly. It's difficult to do incident response. It's difficult to know where my data is. It's difficult to do compliance. It's difficult to do permissions. We had a nice distribution, and cost came out as the top one, by the way. But then we segmented people into two groups based on how they answered another question in our survey.
Fernando Montenegro:
We asked them how far along they were on their CSPM journey, so we divided them into those who were early in their CSPM journey and those who were later, and the sample sizes were relatively robust. We found that the people who were early on their CSPM journey worried about cost even more than the average. They worried about compliance even more than the average. They worried about permissions even more than the average. For the people who were further along on CSPM, their responses shifted tremendously. They were less concerned about cost, which we interpret as meaning they saw value in the tools they were deploying. They were less concerned about compliance; our interpretation is that their compliance needs were being met by those tools. What they were concerned about was, where is our data? Where is our data in the cloud? That was a bigger concern. And how do we do incident response in cloud environments? So we interpret this as saying that once you get past the initial CSPM stuff, the world is your oyster, and things expand well beyond that. So that's the prediction for the year: we keep seeing these things move forward even faster, whether it be data security or AppSec or others.
Sounil Yu:
Okay.
Claude Mandy:
Well, I've got an interesting spin on that. I think the reason these are coming up as more important now is because that's where the shared responsibility of the cloud provider stops: the data, the applications, and what happens in the cloud. That's where it stops. CISOs are starting to get comfortable with the cloud now, and now they're realizing, oh, there's all this other stuff I have to do to secure the things I put in the cloud. So it's an interesting paradigm shift.
Sounil Yu:
All right. Let's go ahead, move on to the next one. So thank you very much, Fernando. And the next prediction is from Kelly.
CI/CD and IaC Will Be Used for Audit Trails and Other Security Problems
Kelly Shortridge:
Yes. So my prediction is that CI/CD and IaC, infrastructure as code, tools are going to be used for audit trails and for solving other security problems. This kind of dovetails with something you'll see on the slide as well: that platform engineering and infrastructure engineering teams are also going to get more budget. They are much more comfortable with these tools, which I like to umbrella-term configuration as code, and there are some key use cases, for IaC in particular, that make it more viable for solving a lot of these security problems. You can do faster incident response. You've got automatic redeployment of infrastructure when incidents happen; even better, you can automatically respond to leading indicators of security failure. You minimize environmental drift. If you're not familiar with environmental drift, that's when infrastructure or production environments differ from staging environments, and with IaC you can have automatic infrastructure versioning to minimize it. Deploys are easier to revert, and you avoid all the mistakes that come with manual deploys, which is pretty great. You're also encoding your entire deployment process so it can be passed from human to human and team to team. Faster patching and security fixes. Minimizing misconfigurations: the NSA talks about how misconfigurations are the number one problem. And then also autonomic policy enforcement, things like identity and access management, simplifying adherence to industry standards and compliance. It's kind of like a secret weapon in the cybersecurity arsenal. But the problem is that security people are more scared of these tools than they are eager to embrace them for their own problems.
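Two of the IaC benefits Kelly lists, drift detection and diffable, versioned configuration, reduce to comparing a declared state against a live one. A minimal sketch, with made-up resource attributes:

```python
# Illustrative drift detection: the declared (version-controlled) config is
# the source of truth; anything in the live environment that differs from it
# is drift. Attribute names here are hypothetical examples.

def detect_drift(declared, actual):
    """Return {key: (declared_value, actual_value)} for every mismatch."""
    return {k: (declared.get(k), actual.get(k))
            for k in declared.keys() | actual.keys()
            if declared.get(k) != actual.get(k)}

declared = {"instance_type": "m5.large", "min_replicas": 3, "tls": "1.2"}
actual   = {"instance_type": "m5.large", "min_replicas": 2, "tls": "1.2"}

print(detect_drift(declared, actual))  # {'min_replicas': (3, 2)}
```

Tools like Terraform perform this comparison against real provider APIs during a plan step; because the declared state lives in version control, the same files double as the audit trail Kelly describes.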
Sounil Yu:
Okay, lots to unpack there. So autonomic is one of my trigger words. Where do we really want autonomic security? Do you know what you're asking for? So let's unpack this a bit. Is it appropriate to describe this more generally as security as code? I mean, is that...
Kelly Shortridge:
Security is a subset of quality, so I think it's much better to describe it as configuration as code. Security is a type of configuration, but it is not the only one. And I'm very tired of us appropriating things and inserting ourselves, like DevSecOps, when we're just a sliver, a facet, of the problem. Quality is the problem. Things like manual deploys are bad not just for security reasons; they're bad for a lot of reasons. So when we look at these tools, no, we should not have a security CI/CD or a security IaC or anything like that. These are tools that are really useful not just for security teams but also for engineering teams, to achieve a lot of different properties, including safer systems.
Sounil Yu:
So you just used the word safe, which I think is actually an important word to use in considering what we actually do. A lot of what we do is actually more cyber safety than cyber security. So if not security as code, could we label this cyber safety as code?
Kelly Shortridge:
I mean, I'm loath to use cyber for much else beyond what we already have. But again, I think what's important is that we have already alienated the software engineering teams who predominantly control budget, and software is leading the world. We probably want to stop alienating them and start collaborating with them, and the more we put cyber and security into anything, the worse, I think. I personally like the term resilience, just because it covers being able to gracefully adapt to any evolving condition and any failure scenario. So I think configuration as code is exactly what it is. Maybe one day we could have resilience as code, but given we're talking about predictions for 2023, I think it's safer to say configuration as code.
Claude Mandy:
Given that 2023 timeline, isn't this a bit aggressive for this actually happening among a broad set of organizations? I mean, security still doesn't talk to the engineering team in a lot of cases.
Kelly Shortridge:
That's because the engineering team is already doing this. They're already using IaC for an audit trail; the security team is just not aware, because they're trying to shut down IaC. But again, if we had more of an open mind and curiosity, we might see that there are all these benefits. I can definitely tell you I've talked to many organizations that are already using it for things like software provenance, but certainly also your standard audit trail and being able to revert things. It's great if you want to understand, with blameless postmortems, what went wrong; fantastic for incident response, all the things I talked about. Even if security teams don't get on board, this is still happening, and it's still going to impact security.
Sounil Yu:
So, to Claude's point, the future is already here; it's just unevenly distributed. If it's already here, what's the actual prediction? Are we saying some percentage of companies are doing this, or what's the...
Kelly Shortridge:
Solving other security problems is the key part of the prediction, because of all the things I listed, audit trails are the most popular. Again, this is still very early, though; think your generally more modern, more digital organizations. So one, I expect more platform engineering and infrastructure engineering teams are going to approach this. I do think more modern, tech-oriented security teams might start doing this more too, but really it's solving other security problems, because again: faster incident response, minimizing misconfigurations, faster patching, all of that. There are huge security benefits. And when you consider the prediction on the other side, that platform and infra teams are getting more security budget, they align really nicely.
Sounil Yu:
So would you argue that Git is a core skill now for any security person in 2023?
Kelly Shortridge:
I think if you have anything related to engineering in your title, you should probably learn how to use Git. The beautiful thing about infrastructure as code, by the way, is that you don't have to be that in-the-weeds technical. These are declarative tools; you're not getting in the weeds with, like, Rust or something. That would be awful, and security teams would probably just quit. With declarative systems, you're very much stating your intention, and then the tool itself ratifies whatever you've intended. They're very intuitive systems to use, which should be, again, very accessible to security teams. So the optimistic part is that it won't just be other engineering teams adopting this; it will be some security teams, too.
Fernando Montenegro:
I may just add, since you asked for a prediction, right? So I agree with Kelly wholeheartedly. We ran a poll, the same one where I mentioned the CSPM findings. One of the questions we asked of the broad population, usually more security-related folks, was: where does your product security reside? In other words, who is responsible for securing your product? It's not exactly the same thing; again, George Box for the win. Kelly is making the point here, and I agree wholeheartedly, that it's engineering, product engineering. The number we got in aggregate was about 15% of respondents indicating that, yes, this is separate from the CISO. Still, 85% said the CISO. My prediction: when we run the survey again, perhaps next year or the year after, I expect to see that number higher than 15%. I think the message Kelly is positioning here is absolutely on point.
Kelly Shortridge:
The other thing I'll mention is that if you're predominantly interviewing security people, often they don't know this is happening. So they might not think that product security is happening via engineering teams, but it still could be. Even if you think about static analysis, that's something that platform engineering teams are often the ones implementing, into IDEs or other workflows. That counts as product security work; the security team might not even be aware.
Fernando Montenegro:
Yeah, I agree 100%.
Kelly Shortridge:
Yeah, and that's not a bad thing, by the way. This is not like shadow product security work or shadow IaC; that's nonsense. The point is that they need to be collaborating.
Sounil Yu:
So it certainly creates an interesting problem as far as having governance over those activities. If we're not even aware of those activities, how do we know they're happening? How do we govern them? How do we ensure consistency across them?
Kelly Shortridge:
I was going to say, I think part of that is security's fault, though. If I'm an engineering team and I have something that I know is helping me reduce human mistakes in our deployments, that's allowing us to upgrade faster; these are all great things. I've had experiences in the past where we mention, oh, we're now API-ifying our product, and the security team is like, no, you're not. No, that's scary. Shadow APIs, this is all bad. Why would you tell them about it? You know that what you're doing is right; you can see empirically that you have better version control, better incident response. Why would you invite the security team to the table? I think it's on security teams to be more curious and, like Fernando said, have more of a product mindset: okay, there's a problem that needs to be solved, we want to be building safer systems, can we be partnering and collaborating, rather than a we-have-to-be-in-control-of-everything sort of mindset.
Claude Mandy:
If we look at the opposite side of this, CISOs complain about burnout: they've got too much to do, they've got all these things they're worried about. If someone takes on some of these capabilities and actually does them at a better rate and quality, why wouldn't you welcome that if you're already stressed out?
Sounil Yu:
Well, okay. So I welcome it up to the point where it becomes autonomic. All right, let's unpack that a bit, because you made a...
Kelly Shortridge:
The automated stuff is happening whether you like it or not, though. Like automated deploys: what's the downside? You want people manually deploying? So, yeah.
Sounil Yu:
Well, let's define what you mean by autonomic, though, because that's different from automatic and automated. So when you say autonomic, is there a reason why you chose that term?
Kelly Shortridge:
Not particularly. It sounds nice. It's kind of like infosec versus cybersecurity; whatever floats anyone's boat. Specifically, with, let's say, automated policy deployments, that means you can define policies as code as part of your configuration. It means you're less likely to fat-finger something. It means it can be automatic. It means you can compare diffs and versions. You can pass them along to teams, so you have this standard template that you can make sure is reified as code and works across all these different teams. It's just a lot more standardized, which again reduces the chance of mistakes. And there are a lot more auditable documents. If you're onboarding people, people always talk about how it takes forever to onboard security people. Guess what? If all of this is defined as code, it's going to be so much easier for people to navigate. And they don't have to worry about all the pressure of, I'm the one manually deploying this. Like, no, you're not; the system is handling it for you. Now you can save your thinking for better stuff.
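The policy-as-code idea Kelly describes can be illustrated with a toy example: the policy is just data that can be versioned and diffed like any other file, and enforcement is a function rather than a manual review step. The policy fields here are hypothetical:

```python
# Illustrative policy as code: a declarative policy document plus an
# automatic evaluation function. Field names are made-up examples.

POLICY = {
    "require_encryption": True,
    "allowed_regions": {"us-east-1", "eu-west-1"},
}

def evaluate(resource, policy=POLICY):
    """Return a list of policy violations for one resource."""
    violations = []
    if policy["require_encryption"] and not resource.get("encrypted"):
        violations.append("encryption required")
    if resource.get("region") not in policy["allowed_regions"]:
        violations.append("region not allowed")
    return violations

# Evaluated automatically in a pipeline, never hand-checked at deploy time.
print(evaluate({"name": "db1", "encrypted": False, "region": "ap-south-1"}))
# ['encryption required', 'region not allowed']
```

In practice this role is filled by engines like Open Policy Agent, where policies are declarative documents evaluated automatically in the deploy pipeline; the point is the same, the policy lives in version control and its enforcement never depends on a human remembering to check.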
Fernando Montenegro:
Just to pick up on that: you're passing the responsibility on to others, right? A good friend of mine, and I know that many of you know him as well, Alex, right? He wrote about how the CISO is actually a sidekick role. We are sidekicks to product and to engineering, right? We are here to support revenue, we are here to do revenue protection. We're not the hero at the gates with the sword saying no; we are supporting a much, much broader team. You've seen people talk about the stats: there are 100 developers for every ten ops people for every single security person. There is absolutely no way on God's green earth that we can do this at scale without, like Kelly said, something else. The beautiful thing about infrastructure as code and configuration as code is that it scales really well. But there are two different problems here. One is how do you scale what security does, and the other is how does security decide what it does in the first place? And it's not your job to do that alone; it's your job to work with product engineering, like she says very well, and coach them on how you do things, but it's up to them.
Kelly Shortridge:
This is one of the other things, and I don't know if we want to go into it, but the dawn of platform resilience engineering is precisely this. It's very much viewing it as: you have customers who have problems, and you need to solve those problems. You're very much in a servant sort of role; you are serving a certain set of users, you are building products for them. If you are not solving those problems, and you're often making them worse, guess what happens? If you're a vendor who makes customers' problems worse, you're probably not going to be used, which I think is why we see a lot of bypassing of security tools. So I think, again, all of this confluence of trends is going to result in a very different-looking security than we're used to.
Sounil Yu:
I do want to revisit that; we'll come back to it, maybe. But the dawn of platform resilience engineering suggests maybe the sunset of site reliability engineering, or the sunset of security engineering. Anyway, I want to unpack that one a little bit further, because the dawn of something suggests the sunset of something else, right? But anyway. Okay. All right. Let's move on. I still debate the autonomic term. That's okay, we'll move on. All right. Next up, Claude, go ahead.
Claude Mandy:
So my prediction is really radical data breach transparency. We've had all these instances in essentially the last two months where Joe Sullivan's case has come out about, however you put it, not being transparent about a data breach. You've got the Uber whistleblower case, which again is coming out and saying, well...
Sounil Yu:
The Twitter whistleblower.
Claude Mandy:
Oh, the Twitter whistleblower case, being quite transparent: this is stuff that's been happening within our organization and we haven't been transparent about it. You've got the SEC's comments. All of these are putting pressure on CISOs to be a lot more transparent, radically transparent, about data breaches as they occur. That puts pressure on them to be quicker and faster and more precise in opening up about these breaches. If we look at some of the previous breaches that have gone well, it's where that transparency has come through and they've been quite open: this is what happened, this is how it played out within our organization. Go back to Mandiant as an example. So all of this is putting pressure on CISOs and organizations to be more transparent about data breaches as quickly as possible when they occur.
Sounil Yu:
What do you think are some of the unintended consequences there? Because going back to Kelly's slide, evolving attacker access monetization strategies: I'm curious, is there an unintended consequence that may emerge from being more radically transparent here?
Claude Mandy:
Do you want to jump in? Because I think you've been thinking about this.
Kelly Shortridge:
Yeah. So when I think about this, it's going to maybe sound a little harsh, but, you know, that's very much my thing. If you're currently laundering breaches through bug bounty programs, that is a very efficient way for attackers to get paid, right? You just say, knock, knock, hey, we've compromised this or that, and you know that you'll be ferried in through the bug bounty program and get a nice payout. Probably, if I had to imagine, it's actually through legitimate financial channels, which are maybe harder for law enforcement to intervene in or shut down. Now, though, if those reliable monetization pathways aren't available, because CISOs are saying, well, you informed us of this, now we immediately have to go report it, that means you maybe aren't getting your payout as an attacker, so you might have to resort to other forms of monetization once you do get that access. And I think that could take a variety of forms. But I think it's safe to say that the monetization strategy will evolve, assuming that the breach-laundering-through-bug-bounty-programs paradigm actually changes.
Sounil Yu:
Well, hopefully. Sullivan's case is unfortunate for a number of reasons, but that said, hopefully not many people are laundering money through bug bounties, although I can see how that hack could happen as well. Are there other consequences that you, Claude, are thinking of with respect to this radical transparency?
Claude Mandy:
Yeah, I definitely see that with organizations being more transparent, they'll start to put...
Sounil Yu:
Will that transparency cause other unintended effects? Like, for example, the monetization example that Kelly gave. Or will it cause CISOs to all of a sudden be fired more often? Maybe, I don't know. What do you think this will cause as a downstream effect?
Claude Mandy:
I think there'll probably be some implications for your cyber insurance. If you've been a lot more open and shared a lot more of that information about what happened and how many incidents happen, it's going to provide a lot more data about how frequent these incidents are. That's going to have interesting implications for cyber insurers. I mean, they're used to having a very hard kind of line, and people are only really claiming when it hits a certain threshold; now it's going to be open and upfront, and they'll be working through that. I think what it also means for the industry is, when you're telling people that these incidents are happening a lot more frequently than we currently suspect, there's pressure to be a lot more detailed about your analysis of how it happened, which can only improve security as a whole.
Fernando Montenegro:
Yeah, I'll take a little bit more of a negative stance on this. I see those events happening, absolutely, Claude. One of the challenges, though, is that we are still, as a society, trying to navigate what our standards of due care for cybersecurity are. Right? So I think that's one meaningful consequence: if you start having radical data breach transparency, which we may see more often, to what extent are you opening yourself up to liability? I'm not a lawyer, though I come from a family of lawyers. Even if there is no there there, right, to what extent are you opening yourself to frivolous lawsuits and reputational damage, and on and on and on? So are...
Kelly Shortridge:
They frivolous, though? Because I kind of wonder. I was talking to some of my engineering buddies, and they were appalled by the idea that a CISO would ever have such little integrity as to sweep a breach under the rug. That seems completely counter to what we tell the world, that we're here to protect users, right? I personally view this as radical accountability, and I've been talking about it for a very long time. We do not today really tie our security programs to security outcomes. Maybe this is going to be a good forcing function to finally, as the kids say, get good at stuff.
Fernando Montenegro:
Oh, I see that for ransomware. First of all, I'm on record saying that I hate the word ransomware. My soapbox about this is that we shouldn't call it ransomware; it's a multi-stage extortion campaign. But let's use ransomware for the vernacular. Ransomware resolves the externality of poor security practices, right? It used to be that if you did bad security inside your organization, then all of a sudden, yeah, there'd be an outage, there'd be an audit finding, somebody's going to get brought in front of the board, and that person may be fired or not. But ransomware now has dedicated, motivated, properly incentivized actors with very fast feedback loops, actively looking for exploitation techniques. And that goes to your point, Kelly; I think you're spot on about different monetization strategies. But when I bring up frivolous lawsuits, I mean it in the context that, as hard as it is, a lawsuit is frivolous if the current laws don't support the case, right? Of course we have to change and evolve the laws. But to what extent does radical data breach transparency expose the organization to, oh, we're now going to sue you for this, even though it's going to be thrown out because there is no standing? Again, I'm not a lawyer. I would love for organizations to be more secure, oh my goodness, absolutely. But the law is about what's on paper, right? What you can prove. That's the point I was trying to make.
Claude Mandy:
I think, when we explore this a little bit more: when we see organizations doing it individually, we kind of call them out, saying, oh, of course, they're going to be great at data breach response and be transparent about it, because that's their day-to-day job. But what I'm hoping to see is regulation step in and more organizations do this. So it's not just the one company that steps out and goes, oh, look at me, I had a data breach. We actually need all organizations to be open and transparent, and that's the radical part about it. It's the whole industry taking the stance of, we're going to be transparent about data breaches.
Sounil Yu:
Let me challenge that assertion, because earlier I mentioned this distinction between safety and security. I think when it comes to safety issues, we should absolutely be transparent. Just to use an example like nuclear safety: we share that type of information with North Korea and Iran and Russia. Nuclear security? Well, unless you're Trump, we generally try not to share that, right? So when it comes to the notion of security breaches and having radical transparency around that, I think there's a natural interest, actually a national-level interest in some cases, to not share that. Whereas when it comes to safety incidents, we absolutely should share those.
Kelly Shortridge:
So I think this is getting at a social, philosophical sort of problem in the industry. On the one hand, we very much frame things in moral terms, where there are good guys and there are bad guys, the attackers. And the way we're able to get away with that is because we have this noble duty to protect user data. We are the guards, the very noble guards who are protecting all of these assets; we're helping society. However, what you're talking about is very much: are we instead protecting the business and making sure that there isn't anything nasty exposed about it? In which case security is just part of protecting ongoing business operations, which I'm biased toward as well. That is not about good guys and bad guys anymore, right? You could absolutely be almost an enemy of your end users. It's the same thing as the line about HR: they're there to protect the company, they're not there to protect you. Security might be seen as, they're not there to protect you as the user, they're there to protect the company. That is a very different vibe. And I think we very much could not call ourselves the good guys anymore. So I think this hopefully will be a reckoning with some of the terms and mythos that we create about ourselves.
Fernando Montenegro:
You're spot on. Again, I'll point to my gray hair. I think one of the things we learn over time, as we evolve as people, is that the world is a lot less absolute than when we start out, right? A lot of these things are not black and white. And yes, there is a lot of grey in the middle there, 100%.
Sounil Yu:
All right. There are some other ones here I want to unpack if we have time to revisit; the "multicloud becomes even more multiverse" one is a fascinating comment there. But let me hit my last quick prediction, and then we'll revisit some of these other ones that are potentially more interesting. So my prediction is that we basically give up on the user as a line of defense. A lot of us counted on things like MFA to be phishing-resistant, only to realize that attackers find some clever way to still trick the user. And over 2023, I think we in the security industry are just going to design our security controls to not rely upon the user at all as a control. A lot of us hate phishing simulations, or at least the recipients hate them, right? And many of us in security don't feel like they're a really effective control either. So why do we keep doing it, especially when we have potentially better options to just take the user completely out of the equation?
Kelly Shortridge:
Because it's convenient for us.
Sounil Yu:
Because it's a crutch, right? We're using it as a crutch, thinking that we could potentially have the user be our line of defense. But, you know, there are days when even we fail as security professionals, right?
Fernando Montenegro:
Yeah, but isn't part of the origin of all the security awareness training the fact that when all these things started, people didn't know computers in general? Right. And so, as Max Planck says, science evolves one funeral at a time. We are seeing a generational gap. It's not perfect, and I'll tell you a story in a second, but I think a lot of the problem with the training, like Kelly said, is that it's convenient. But it's also an imperfect signal where the organization is saying, hey, listen, this is what we can report on these people, right? We're trying our best, kind of thing. But there's a lot to unpack there. I don't know if the phrase is so much "gives up on the user as a line of defense" as "security understands the role of the user in security architecture." I'll be positive here. Security understands that the user is fallible, that the user is subject to being bored, being tired, like me having a 100-degree fever right now. So it gives up on the user in that it doesn't necessarily depend on the user. I would argue that if your organization falls because of a phishing campaign that got through, and that has material impact for your organization, that's a sign of a poorly designed security architecture rather than the user's fault.
Sounil Yu:
So in the.
Kelly Shortridge:
Same way, you know, I completely agree with that notion, but I will say I think "gives up" is perfect. I've been saying for what, six, seven years or more that "human error" is problematic language, and I still get pushback all the time, nearly every time I say it. I think it's absolutely "gives up," because when I went back to look at the rise of training, it's very correlated with when CISOs started being held "accountable," and I put that very much in air quotes, by the board. If you're now being interrogated by the board, like, why is this happening, what are you doing about it, wouldn't it be easier to say, well, it's these employees, they just don't understand security, it's the employees' fault? Like you said, it's very much evidence that you've designed your security program poorly. But if you say, well, it's on the users to do this, then you are shifting the accountability, right? I think it's a very clever strategy for CISOs. So I think "gives up" is going to be accurate, because they're realizing they can't even really CYA anymore by shoving things onto the user. Though I'm a little skeptical that this will actually happen; I just hope it does, because we should. My view is the users are the victims, and we're now blaming the victims. It's messed up. Again, good guys, bad guys. Come on. Right?
Sounil Yu:
Yeah, that's really great. I really like that analogy, and understanding the evolution of security training as a way to essentially pass blame to the user. The way I think of giving up here is also this: if we got breached and I said, well, I was counting on AV to address it, you'd say, what were you thinking? That's so nineties, right? And this perspective that we're counting on the user as our defense, just like we were counting on AV, is just a really antiquated way of thinking about security.
Fernando Montenegro:
I'm a little more optimistic, in the sense that I think the user plays a role, right? To your point about MFA: is MFA perfect? No, it's not. And we've seen the evolution of the attack, right? It used to be there was no MFA; somebody gets phished, password, hey, there you go. Then it became, oh look, we're going to do MFA, so they're going to trick you: listen, we just sent you this code, can you please authorize it? Now we're in the MFA-bombing phase, where they get your password, and they know they've got your password because of a credential dump somewhere, and then they start hitting you with: please log in, please log in, please log in. At some point you do, right?
Kelly Shortridge:
Oh, isn't there rate limiting? That's not something that relies on the user, like... fair point.
Fernando Montenegro:
But the thing is, I wouldn't necessarily throw out the baby with the bathwater. There is no expectation that MFA is going to stop 100% of things, right? There is no perfect security control. And I'll quote a line from The West Wing: if we expect our leaders to be perfect and on this moral high ground, aren't we asking to be defeated? Right. We can't expect perfect security. But if MFA can bring those incidents down from 100 a month to two a month, that's a win. By all means, you have to fix those other two as well. But I wouldn't be as harsh.
Kelly Shortridge:
But I still think, and this goes back to the platform engineering prediction, which again I think is going to have a long tail: look at how platform engineering works, and I'll credit Camille Fournier for kind of pioneering it. You're looking at how to design solutions to problems; you're not looking so much at how to implement processes or policies. You're trying to figure out, how do we automate certain types of work, how do we build products so people can achieve whatever their outcomes are? And I think it's very interesting, for instance, with the rate limiting: that is a design mitigation. There are a lot of design mitigations that are overlooked by security teams, because, guess what, it involves designing things, and it's a lot easier to say, well, the user is in charge of this and this, and we have a policy for this. As I said in my most recent blog post, you have the Ten Commandments, and you can point: this is where the user messed up during the breach. That is a lot easier to do than documenting requirements, doing user research, defining a minimum viable product, building it, doing iterative design based on feedback. That is a lot more complex, and it involves very different skills than security teams have today. And again, it's just more convenient. My point is, security teams wonder why they constantly feel behind, firefighting, with this kind of futile feeling: it's because things aren't being solved by design, solutions aren't being designed. I think hopefully that's going to change. I'm not sure if that's 2023, but on the platform engineering side, maybe there's a little more hope, just because there's more of that muscle memory around actually implementing design-based solutions, not just human-behavior-based ones.
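[Editor's note: the rate-limiting design mitigation discussed above can be sketched as a small sliding-window cap on MFA push prompts. This is an illustrative sketch only; the class name, thresholds, and window size are assumptions, not any vendor's implementation.]

```python
import time
from collections import defaultdict, deque

class MfaPushLimiter:
    """Sliding-window cap on MFA push prompts per account.

    Caps how many push notifications one account can trigger per
    window, so an attacker replaying a stolen password cannot
    MFA-bomb the user into approving out of fatigue.
    """

    def __init__(self, max_pushes=3, window_seconds=300):
        self.max_pushes = max_pushes
        self.window = window_seconds
        self._attempts = defaultdict(deque)  # account -> push timestamps

    def allow_push(self, account, now=None):
        now = time.monotonic() if now is None else now
        attempts = self._attempts[account]
        # Drop attempts that fell outside the window.
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()
        if len(attempts) >= self.max_pushes:
            return False  # suppress further prompts; alert instead
        attempts.append(now)
        return True
```

The point of the sketch is that the control lives entirely in the system's design: the user never has to make the right call under prompt fatigue, because the fourth prompt in the window simply never fires.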
Sounil Yu:
All right, Claude.
Claude Mandy:
I'm going to propose a little tweak to your prediction. What I think the problem is, is that security sometimes sees the user as the only line of defense. Like, there's literally only the password, an MFA prompt, and the user going yes, between that and boom. If that's your security model to protect your data, that's probably not the right place to start. So absolutely, you need to bring in those design considerations, et cetera. So just a little tweak: "as the only line of defense." I think this has legs. And yes, I want to get rid of phishing testing as much as possible.
Sounil Yu:
All right. Well, those are our main predictions. Let's revisit a few other ones, because I think we have some extra time, and you all had some hot takes here too. I'm going to go backwards: multicloud becomes even more multiverse. What does that mean?
Claude Mandy:
So what we see at the moment is, if you go into AWS, GCP, Azure and you look at the implementations of all the security features across those different clouds, there are all these unique features, meaning they're slightly different from each other. And if you're trying to grow a team that understands all the nuances of Azure versus GCP, even at the identity attribute level and the access control pieces, you basically need two teams who know those in depth. So what we're saying here is, to get across multicloud as multicloud, you almost need a broker between them to understand what those differences are and navigate them.
Sounil Yu:
So that part was already clear to me: in terms of multicloud, you have to understand the distinctions between Azure versus GCP. I thought you were also referring to, within a single cloud itself, having a multiverse of things. We talked earlier about CSPM. If we get CSPM results in us-east-1 versus us-west, or even whatever the equivalent regions are in China, will we end up with different CSPM results for the exact same instantiation? That's what I thought you might be mentioning. Do you envision that?
Claude Mandy:
There is some of that: availability of security features being rolled out with prioritization per region and localization, so there is some differentiation per region. But I think the biggest difference is that people are now adopting these different clouds because they need the capabilities that, say, GCP provides over AWS. They might be very much an Azure, Microsoft shop, and they're looking to grow beyond that. So multicloud is becoming a very important shift.
Sounil Yu:
So then, to Kelly's point earlier around platform engineering and IaC: is that our savior to normalize that?
Kelly Shortridge:
Security vendors don't like that.
Sounil Yu:
Sorry because.
Kelly Shortridge:
Vendors don't want that; security vendors are going to tell you, no, you need all this posture stuff, right? But yes, being able to define policies and other things declaratively would help with the vast majority of the stuff you're talking about, intra-cloud and even across clouds too, because you can define it once, you have version control, there's a lot of good stuff. But I think it's just natural: no one wants to multicloud on purpose. It's normally M&A or something else. I do think it's very interesting that security is not the only team encountering the problems of multicloud, but it is the only team that is trying to think about, do we need a broker or something else, with almost inevitable over-engineering. And I think it's worth, potentially not here, thinking through why that might be.
Claude Mandy:
Especially that abstraction layer that you need to make sense of all these differences. Whether that's data security posture management, cloud security posture management, or you actually put it into your configurations, you still need that abstraction layer to make it easy to understand what the differences and nuances are, so that you can see: we're actually doing the same thing. We're doing it differently, but it's the same outcome that we're driving.
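[Editor's note: that abstraction layer can be sketched as a small normalizer that maps each provider's knobs onto one posture schema, so "different knobs, same outcome" becomes visible. The field names below are illustrative assumptions, not any provider's actual API response shape.]

```python
# Map provider-specific storage settings onto one neutral schema.
NORMALIZERS = {
    "aws": lambda cfg: {
        "encrypted_at_rest": cfg.get("ServerSideEncryption") == "aws:kms",
        "public_access": not cfg.get("BlockPublicAccess", False),
    },
    "gcp": lambda cfg: {
        "encrypted_at_rest": cfg.get("defaultKmsKeyName") is not None,
        "public_access": "allUsers" in cfg.get("members", []),
    },
    "azure": lambda cfg: {
        "encrypted_at_rest": cfg.get("encryption", {}).get("enabled", False),
        "public_access": cfg.get("allowBlobPublicAccess", False),
    },
}

def normalize(provider, raw_config):
    """Return a provider-neutral posture record for one storage bucket."""
    return NORMALIZERS[provider](raw_config)

def same_posture(records):
    """True when every cloud expresses the same effective posture."""
    first = records[0]
    return all(record == first for record in records)
```

With a layer like this, a team can compare outcomes ("is the data encrypted, is it public?") across clouds without every engineer knowing every cloud's nuances in depth.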
Fernando Montenegro:
The thing I like to say about multicloud is, first of all, in any organization, let's say 500 users and above, there's like a 90% chance they are multicloud already in some way, shape, or form, right? What ends up happening is that multicloud is an emergent property of an organization, not a project. Nobody gets out of bed in the morning thinking, oh, today I'm going to do cost arbitrage between GCP and Azure for my container workloads. No, right? What happens is that Project A chooses one thing, Project B chooses another. Kelly mentioned M&A, spot on: we buy another company that uses yet another cloud. Security teams are one of the few centralized teams, and I think she nailed it when she said it's the only team that's trying to enforce that control, right? So you end up with the sense that it's running away from you. I think Claude's prediction is spot on in that it is becoming more complex. Part of the way to solve that complexity is having that intermediary layer, which runs into the problem that Kelly described: people don't want to sit down and design the solution, to do that work; they want the product that's going to magically solve it for them. Yeah. And the other problem is that you still want to maintain control, but realistically you have to let go. I still have to make a meme at some point of the preacher in Footloose. I haven't seen the remake, but in the original Footloose, John Lithgow, when he's doing the speech, is saying, like, how do we trust our kids to do something so that they become trustworthy? Right. Spoiling the 1984 movie here. But it's that problem: how does security let go of what it's trying to do? Multicloud is a perfect example, so they can work on the design-level problem that Kelly's talking about.
Kelly Shortridge:
You actually bring up something that I talk about in quite a bit of depth in the Security Chaos Engineering book, and that was really cool, my light changed right as I started talking about the book, which is the fact that if you have a complex system, one that does not have linear interactions, which is basically all of our computer systems at a certain level of scale, centralized management does not work. Centralized management generally has to be implemented through tight coupling, which in the security case often means the security team is tightly coupled to any particular asset or process. You know what happens when you take tight coupling plus complex interactions? You get nuclear meltdowns. Those are the very-hard-to-recover-from failures. What you need for complex systems is more loose coupling and decentralized management. Security is going to have a huge reckoning with control at some point. Even look at things like Kubernetes, right? In a weird way, even though Kubernetes is kind of a central tool, it is decentralized, because each team can define their configuration and how their cluster looks. We need more of that and much less of the one tool to rule them all at the center that controls everything else. By the way, did we learn nothing from SolarWinds? Because if you now have one tool that the security team has access to that controls everything across all of your clouds, oh my God, that is a great single point of failure. Maybe attackers will switch to that to make money. I don't know.
Sounil Yu:
Well, that's certainly the most likely outcome. Once you start consolidating all that stuff into a single control plane, we certainly want to secure that control plane far more as a result. But I would argue it's better to secure one thing than to try to secure lots of different things.
Kelly Shortridge:
Right. Well, I mean, I'll quibble there, but I know that's outside of predictions.
Fernando Montenegro:
Yeah, I will I will support her quibble and say that if you do this as code at scale, it is viable.
Sounil Yu:
Right. Right.
Fernando Montenegro:
Or more viable. It's not perfect; there's no silver bullet, right? But if we adopt the mindset of deploying this as code early on, there's something there.
Sounil Yu:
So, Kelly, we only have about three or so minutes left, but you kind of started to hit upon this first bullet on the other predictions. Can you unpack that a little bit for us?
Kelly Shortridge:
Sure. So with security chaos engineering, people think a lot about the actual experiments, and the experiments are important. They allow you to test and simulate adverse scenarios and see how your system responds, and not just the machines. It's not just a Boolean test of, did this work, did this not work. It's how the humans in the system respond to an adverse event too: did your firewall or your configuration management tool behave as expected? It's very holistic. But security chaos engineering, especially as we're writing about it in the book, is much more than the experiments. It's a sociotechnical transformation, and it's really top to bottom. It's not just, oh, you're now conducting experiments, though you can start with that. Your organization looks different, your priorities are different, how you measure success is different. How you integrate across the software delivery lifecycle is no longer ramming yourself in and obstructing things, like DevSecOps where you're just doing the same stuff earlier now; it's much more of a resilience approach, which goes to that dawn of platform resilience engineering. Ultimately it's about: can our systems stay healthy, and can we ensure the business can continue to succeed despite the presence of attackers? Can we ensure the system responds gracefully? And again, that includes the humans in the system. So we think of it as kind of a revolution, hopefully discarding all that traditional security stuff, the phishing awareness and all of that, toward something a lot more empirically based, much more aligned with software engineering, much more aligned with some of the stuff we talked about with CI/CD and IaC, thinking about users, whether those are internal users or external users as customers of a product. It's kind of everything.
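[Editor's note: the experiment loop described above, state a steady-state hypothesis, inject a fault, observe whether the system degrades gracefully, can be sketched in a few lines. The toy gateway and the fail-closed hypothesis below are illustrative assumptions, not material from the book.]

```python
def run_experiment(hypothesis, inject_fault, restore):
    """Check a steady-state hypothesis before and during an injected fault."""
    before = hypothesis()
    inject_fault()
    try:
        during = hypothesis()
    finally:
        restore()  # always roll the system back after the experiment
    return before, during

# Toy system under test: a gateway that should fail *closed* when its
# auth backend is down (requests denied, never silently allowed).
state = {"auth_backend_up": True}

def handle_request(authenticated):
    if not state["auth_backend_up"]:
        return "denied"  # fail closed on backend outage
    return "allowed" if authenticated else "denied"

# Steady-state hypothesis: an unauthenticated request is always denied.
hypothesis = lambda: handle_request(authenticated=False) == "denied"
inject = lambda: state.update(auth_backend_up=False)
restore = lambda: state.update(auth_backend_up=True)
```

Running `run_experiment(hypothesis, inject, restore)` answers the Boolean part; in a real exercise, watching which tools alerted and how the humans responded during the fault window is, as the panel notes, the larger point.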
Sounil Yu:
Well, any sort of transformation is hard. Remember the whole thing about what drives digital transformation, and COVID being what caused it, right? Do you imagine a trigger, something that will set off this rapid transformation? Because I don't necessarily see it happening on its own, absent some massive worldwide event.
Kelly Shortridge:
I think it's interesting: you see chaos engineering as kind of an emergent thing, and Netflix obviously was the pioneer. It was really because of how important availability is, and that's only going to get more prevalent across organizations as they digitally deliver services and products, because downtime means users can't use it. Generally that means you lose money, or otherwise you lose reputation, whatever else. So I think that's often the forcing function: how can we get more reliable systems? I think there's also the trend that things just move fast and there's an extreme pace of change. Security is just not keeping up, and I think that's going to be a forcing function too: again, if you have platform...
Sounil Yu:
I'm sorry. I'm wondering whether the trigger event in, for example, Netflix's case was preemptive, saying, well, let's blow ourselves up and see what happens. Would a trigger event be something like a ransomware incident, where someone encounters something like this and says, okay, we need to really change how we do things?
Kelly Shortridge:
So that's what I was about to get to. I think Netflix, in their case, maybe had an outage or something like that; I can't quite remember. What I'm envisioning, though, is this: the platform engineering team now has something like IaC in place, and the security team is not on board with that. Some sort of incident happens. All the security tools don't detect it, they don't alert on it, but you know what does? Again, the configuration management tool, and you know how it's resolved: through reverting configuration. If you're the head of platform engineering, now reporting to the CTO, and the CTO asks, okay, how did this happen and what fixed it, the security team is going to have to change pretty radically if they don't want to be taken out of the room and have all their budget given over. Right? The fact of the matter is, platform engineering can tie whatever they do to outcomes much more concretely through metrics, and obviously they have more productivity through things like automation. Frankly, security is going to increasingly look outclassed and outmatched; they're not going to get tangible results. And at a certain point, saying, well, we saw this kind of malware strain over the past month, that's not going to cut it.
Kelly Shortridge:
It's going to be: okay, are you doing more, faster? Can you actually prove, for instance, that engineering teams are now using standardized, vetted authentication? Can you even tell us, around our dependencies, are they upgraded faster? Guess what helps faster upgrades: things like IaC, not some sort of posture management or policy enforcement. And if security teams can't achieve that, again, it's going to move to whatever team can, which is often the platform engineering or infrastructure engineering team. So I think it's something where, if security doesn't want to get left behind, they have to get on board with this, and I think that's going to be the forcing function on the security side. But frankly, we're writing the book for anyone technical, any sort of technical team. If platform engineering teams want to adopt security chaos engineering, they absolutely can. We have a very agnostic view of defenders: it's anyone who cares about system safety, or your system's resilience more generally. So I think there are a few different factors at play, but it does feel like now is the time.
Sounil Yu:
Okay. Well, with that, thank you to my panelists, Kelly Cloud and Fernando for sharing your predictions for 2023. Be interesting to well, we should revisit this in December 2023 and see what actually happened. But we thank you for the thank you to the audience for attending. And at some point, I think we're going to put this into a blog. So if you guys have any challenges or any sort of want to contest some of these predictions, we would love to hear them as well. So thank you again. And I hope you guys have a nice day.