Trust in Computer Systems and the Cloud

Stephen Walli hosts Mike Bursell to discuss his book “Trust in Computer Systems and the Cloud,” with a particular focus on the impact of Confidential Computing on security, trust, and risk. Deploying Confidential Computing workloads only makes sense if you can evaluate the assurances you have about trust. This requires establishing relationships with various entities, and sometimes rejecting certain entities as appropriate for trust. Possible entities include hardware vendors, Cloud Service Providers (CSPs), workload vendors, open source communities, independent software vendors (ISVs), and attestation providers. This fireside chat hosted by the Confidential Computing Consortium discusses the book, some of the material it covers, and the approach it takes to trust – not to mention why trust is important within the context of computing systems and the Cloud.

Stephen
So we'll wait for another moment just to see who and how many show up, and then we'll get going. But I'll introduce myself because I'm the least important person in the discussion, so we can burn time that way on the front end. My name is Stephen Walli. I'm a Principal Program Manager in the Azure office of the CTO, working in the open source ecosystems team. And the reason I'm even here is that I'm presently the Governing Board Chair for the Confidential Computing Consortium. So today I have the pleasure of Mike Bursell's company to discuss the book he wrote, "Trust in Computer Systems and the Cloud," which was recently published. I've had the pleasure of working with Mike in the Confidential Computing Consortium for three years now. Mike, while working in the Red Hat office of the CTO, was one of the first participants through the planning phase for the Consortium, and brought one of the first open source projects to the Consortium as well once we were up and running. That's the Enarx project, which is still very successful today. Mike has since left Red Hat and is now CEO at Profian, a venture-backed startup in the Confidential Computing space, and he's still very much engaged in the Consortium. So I'd like to welcome Mike, and others as they dial in. Mike, do you want to add anything to that introduction about yourself?

Mike
I would add something to that as well: it's not only because you're the chair of the Consortium that you're here; you're an esteemed colleague and friend as well. So thank you very much indeed for doing it.

Stephen
Well, thanks for keeping me in the loop, so to speak. So I wanted to start off by saying I love the book. I've read it, and the whole approach to it, you know, your discussion in the introduction, I thought was really useful at setting a baseline. So what was the genesis of writing this book? And how long have you been thinking about it?

Mike
Yeah, well, it depends how you ask the question, really. I kind of started thinking about these issues 20-plus years ago, maybe. Back then I had lost my job, because the company was taken over, and I was looking at possibly doing a PhD. I did some research towards a PhD around authority, actually, and how we think about authority in a variety of different ways. And I kind of ended up turning it around a bit and thinking, well, that's really about trust. Trust and authority are very interlinked themes. And then, about four years ago now, I guess, I was at a fairly major tech conference, and I went to see a talk about trust. And I was, shall I say, unimpressed. It wasn't so much the quality of what the person was saying, but it just seemed very disconnected. It didn't seem to be, you know, founded in any joint understanding in the industry about what terms meant. I didn't feel that it had any academic or intellectual backing; it was just "some things I think," and it just didn't fit together. And I came out of that session, and I said to my then boss at Red Hat, Steve Wass: I'm going to write a book. Because I thought if I told him I was going to do it, and he said that's cool, then it would be difficult to get out of. And he said, "That's great," which is a bit of a problem, but great. And I went and told another very good friend, Jen Huger, who is part of opensource.com, and they both said, you should do this. So I was kind of stuck, and I had to do it. Right. So that was the thinking. I was just annoyed that there wasn't joined-up thinking. Trust is used in a number of ways in the industry. You have Zero Trust, and I'll probably come back to that. You have Trusted Computing. You have, you know, Trusted Platform Modules. They didn't seem to come together.
And my background is kind of weird in that I come from the Humanities, or, I guess, what in North America you'd call a Liberal Arts background. My degree is in English literature and theology, which is not an obvious thing, right?

Stephen
But it's important to trust.

Mike
Yeah. Right. Because authority is actually a really important thing in both of those, you know: what are authorities? How do they fit together? What is trust? How can you trust them? And as I've gone through my career in computing, and security specifically, I started to think about how those applied, and just got more and more annoyed about it. Every time I spoke to someone about trust, they all said, yes, yes, it's really important. And we'd start talking about it, and clearly it meant something different to them than it did to me.

Stephen
And that was something that I really loved. In the whole opening of the book, basically the first chapter, you're grounding people with this really simple, clean definition of trust. You started off from the simple idea that you trust your brother and sister, and that you trust them in very different ways. Then you start layering corollaries on that: context, time, and the asymmetry of a relationship. I thought that was absolutely fantastic, as you build up the layers of this word trust that we all use every day and all think we know what it means, and you start teasing it apart to really pin it down. So give us that beautiful one-sentence description of trust, if you will, and then kind of walk us up through the way you layered it in the book, with "you trust your brother and you trust your sister." And I can't remember which one was the doctor and which one was the...?

Mike
Yeah, I'll tell you that first. I want to make sure I get the quote exactly right, because it would be foolish if I didn't, but I'll start with the thing about my brother and sister. I start the book saying: I trust my brother and my sister with my life. And this is true, right? I really do. The problem is that it's not as simple as that, because one is a doctor and one is a scuba diving instructor; my sister is the scuba diving instructor. And so I would trust her to, you know, service my scuba gear, but not my brother. So already there's a different context for trust, right? On the other hand, he is probably better placed to give me CPR if I fall down in the street, or at a family gathering, or whatever it may be. So that's the first thing: the contexts for trust are different. I trust them both, maybe to the same degree, but for two different things. The second thing is that time is important here. In fact, my sister hasn't practiced as a scuba diving instructor for maybe 5 or 10 years. So maybe her skills aren't as up to scratch as they used to be, and maybe I trust her less now than I did to service my scuba gear. That seems appropriate. You know, if you haven't spoken to someone for a long time, and you're not aware whether they've been keeping up with their training, that seems an appropriate thing. And the other thing is that these contexts are asymmetrical. Just because I trust my sister to service my scuba gear doesn't mean that she trusts me to service hers. Again, as soon as you think about it, it makes a lot of sense. You might say "I trust you and you trust me," but you can't just say that; it doesn't really give you enough information.

Stephen
I have had these exact discussions with my daughters, and there's been the accusation of "you don't trust me." And it's like, no, I always trust you, I always have trusted you, but there is this one situation where all I have is observed fact.

Mike
Yeah, absolutely. And it's really silly, but I wish I could just show a picture of the book, which is available.

Stephen
This is an excellent book.

Mike
Thank you. It's published by Wiley. It's available as an e-book, Kindle, whatever, and hardback. I don't think it's in softback yet, but this is what it looks like. And yeah, so let me find the exact quote...

Stephen
I actually looked it up, I have it right here.

Mike
I have it as well. You go for it. Let's see if we got same one.

Stephen
Exactly. Hopefully we both found the same basic definition: "Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation." That's the one.

Mike
That's the one. And then there are three corollaries: trust is always contextual; one of the contexts for trust is always time; and trust relationships are not symmetrical. Yeah, that's right.
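To make the definition and the three corollaries concrete, here's a minimal Python sketch. It is my own toy model, not code from the book, and all the names and dates are illustrative: each relationship is directed (not symmetrical), tied to a context, and qualified by when the trust was last assessed.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrustRelationship:
    truster: str       # who holds the assurance
    trustee: str       # who is expected to act
    context: str       # corollary 1: trust is always contextual
    assessed_on: date  # corollary 2: time is always one of the contexts

# Directed entries only: corollary 3, relationships are not symmetrical.
relationships = {
    TrustRelationship("mike", "sister", "service scuba gear", date(2012, 6, 1)),
    TrustRelationship("mike", "brother", "perform CPR", date(2021, 6, 1)),
}

def trusts(truster, trustee, context, max_age_years=5, today=date(2022, 1, 1)):
    """True only if a matching directed entry exists and is recent enough."""
    for r in relationships:
        if (r.truster, r.trustee, r.context) == (truster, trustee, context):
            return (today - r.assessed_on).days <= max_age_years * 365
    return False

print(trusts("mike", "brother", "perform CPR"))         # True
print(trusts("mike", "sister", "service scuba gear"))   # False: assessment too stale
print(trusts("sister", "mike", "service scuba gear"))   # False: not symmetrical
```

The staleness check is the "scuba instructor who hasn't practiced for years" point: the entry still exists, but enough time has passed that the assurance no longer holds.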

Stephen
And I thought it was perfect, because if you had asked me what it took to define trust, I would have fallen into exactly the kind of trap I just described with my one daughter. Like, how do you layer nuance on that idea? Of course I trust my daughter; it's just that there was a particular situation where, technically, I don't trust her, in that one situation, based on observable history. But how do you find a simple way to say that? How do you find a thoughtful way to say that? So as you began to layer it, I thought it was wonderful, because I've been arguing with folks for a long time now that relationships aren't symmetrical. This has always been my rant about Facebook and friends: this idea of "oh, we're friends now." Relationships are never symmetric. And you immediately brought trust into that space. So where did the corollaries come from? Like, how did you narrow it down to these three wonderful...?

Mike
I spent 20 years thinking about it, on and off, as I said. And the more I thought about it: okay, we need to be able to express this in terms of human relationships, because trust is something we take out of the human world, but I wanted to be able to transfer it to the world of computing and the Cloud. And I thought that if we don't define stuff, then there's no way we're going to be able to transfer it. That was the thinking. So let's say we've got one system trusting another system. I'm a big believer in systems and abstractions, and I talk a lot about abstractions; that's really important. But if we just say this system trusts that system, what does that mean? It really means: to do what? One entity trusts another entity to do a particular thing in a particular way, and to a particular degree. And just because one system trusts the other doesn't mean the reverse holds. So one system might be providing DNS queries to the other, right? It's a nice computer-related example. Just because Google will answer my DNS queries does not mean that Google trusts me to give it answers to DNS queries. That would be absurd. But if you're talking about machines in a Cloud trusting each other, what does that mean? I think the same problem arises in all of these spheres, and you can take it down all the way to the chip level, or the microcode level as well. And this is one of the really interesting things: I wanted to come up with a set of ideas and a set of definitions that you can apply at those levels if you need to.

Stephen
Right. And I had forgotten that you had a background in the Humanities, because looking at your definition of trust, it is a very simple sentence, but at the same time there is a philosophical aspect to it that really does capture "this is what the trust thing is." I thought that worked really wonderfully, especially as you started from the perspective of: let's talk about trust, because people trust each other, so we all know what it means, except apparently we don't. And then you apply it outwards into, okay, let's start talking about systems and how other things could be seen to fit this definition of trust. One of the things I love that you tease out as well, because it's a real pet peeve of mine, is this idea of Zero Trust. You mention Zero Trust early in the book, and you step around it long enough to say this is a very nuanced idea. Then later, in chapter five, you start drilling down into the nuance that's in there. I really appreciated this, because we hear a lot over these last few years, especially as we deal with all of the debate and ethics around Confidential Computing, and as we fall into the software supply chain discussions, this idea that we need a Zero Trust system. And I always get uncomfortable when I hear Zero Trust, because as a person, as a human, I naively believe there needs to be a basis of trust somewhere. One of the silliest examples I have in that human, economic space: I was recently reading about capitalism and discussing it with my daughter, and in the opening chapters it walked into this idea that the banks just make up money. And it's like, well, how do they do this? Well, because they can. How can they do that? Well, because they trust, because banks are backed by a nation state. They trust that the nation state has their back. There's that central banking authority.

Mike
We're getting on to a whole bunch of blockchain and cryptocurrency questions here?

Stephen
Well, exactly. We can get there in a moment. But there was still that idea: what happens at the nation state level? Where does the money come from? Well, they make it up. And you fall into realizing that it is actually our trust in the institution, and that institution's trust in the nation state institution. There's a fundamental thing here, where when you step into this idea of cryptocurrency, it's "oh, we don't need institutions anymore." And it's like, I'm pretty sure that's not true. But you know.

Mike
You've built up a whole set of questions here that I want to answer kind of one by one; I'll try. The first one is about trust and institutions. One of the things there's been a lot of discussion about, which I try to call on in the book, is ideas of trust in institutions, from people like Francis Fukuyama and Onora O'Neill, who have done some great work around why we trust institutions, how we trust them, and what we trust them for. I talk about them as authorities. At some point, you've got to have an authority, and you just trust the authority. The reason you trust that authority may vary: it may be because they have financial control, military control, whatever it may be, but when it comes down to it, there's probably a Root of Trust, and we'll probably come back to that phrase in a bit as well. So authorities are important, for whatever reason. And the other thing you talked about is Zero Trust. I don't like the phrase either, for two reasons. One, I think it's misleading. Secondly, the Zero Trust movement, which is based on some really good, really well-defined ideas, has become this nebulous thing that people point at and say it's a good thing, without really understanding the bits that work and the bits that don't. For me, I prefer to use the phrase explicit trust: we've got to be explicit about what we trust and what we don't. And it's not as sexy as Zero Trust.

Stephen
It's not as sexy. You're being a philosopher and not a marketing...

Mike
I know. So let's go down to what the good things about Zero Trust are, right? Zero Trust kind of says: if you've got two components, they should start off with the idea that they don't trust each other, and then decide how much to trust each other. So there's zero trust at the start of the relationship; that's the basis of Zero Trust. The problem is that doesn't mean it's a system with no trust, because: a) they have to decide what rules to trust to build out those relationships in the first place; and b) there's the question of, okay, once we've decided what rules to follow, I'm going to have to trust that this other component will behave in the way I expect it to for the foreseeable future, and decide what I'm going to do if it doesn't. And there's a whole bunch of other things. Let's say we're going to use TLS, cryptographic encryption, to communicate with each other. Well, quite apart from setting that up and trusting it, I need to trust things like the cryptographic implementation, the cryptographic protocols, and the mathematics underneath that. There's a whole set of things of trust which, on the whole, aren't necessarily relevant for everyday discussion, but you need to think about which bits you trust and which bits you don't need to. Which brings me nicely, in fact, to Bitcoin and to blockchain. Because one of the things in the very original white paper is this idea that it is trustless. And it's not. What it's doing is saying: we're not trusting authorities, institutional authorities, as the basis of what we're doing; we're trusting something different, which is the mathematics of hashing, the implementations of how you link all these things together, etcetera, etcetera. It is not trustless; there's just one type of trust that you would usually expect that we're not using. Now, that's fine. But you should be explicit about it.
And I think that people who have spent a lot of time thinking about this understand that, but one of the reasons I wanted to write the book is to give us language to express that more generally, to discuss it, to be explicit about what we're being explicit about, and to have a framework so we can do that.
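The workable core of "Zero Trust" as Mike describes it, components starting from no trust and only interacting where an explicit relationship has been established, can be sketched as a default-deny policy. This is a toy illustration of the idea, not any particular product's model; the component and action names are invented.

```python
# "Explicit trust" as a default-deny policy: a component pair may interact
# only if a rule explicitly grants trust for a named action (the context).
TRUST_RULES = {
    ("web-frontend", "dns-resolver", "resolve-names"),
    ("web-frontend", "api-server", "serve-queries"),
}

def may_interact(client: str, server: str, action: str) -> bool:
    # Default deny: the absence of a rule means no trust, not "probably fine".
    return (client, server, action) in TRUST_RULES

print(may_interact("web-frontend", "dns-resolver", "resolve-names"))  # True
print(may_interact("dns-resolver", "web-frontend", "resolve-names"))  # False: rules are directed
```

Note that even this tiny sketch quietly assumes trust in things outside the table: the integrity of the rule set itself, and whatever channel (TLS, say) carries the interaction, which is exactly the point about being explicit.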

Stephen
Yeah, absolutely. And that was part of the joy of the book. As you build out these frameworks and these models, and layer the nuance and the detail on them, I now have a way to go into these discussions. Because I've joked with you for three years now that this is not my area of expertise, sitting in Confidential Computing, so I'm always uncomfortable when I'm in a room full of engineers where that is their expertise. The book has given me a wonderful approach to think about some of these subjects better.

Mike
You mentioned Confidential Computing; we actually have questions coming in.

Stephen
I was just about to say: what you were just discussing, I think, comes to Muhammed's question in a really nice way. Can you read the question out properly?

Mike
So the question from Muhammed is: how realistic is this notion of trust for really complicated systems like SGX or TDX, specifically given that many parts of the system are proprietary? And the answer is: a large part of the point of the book is to say that trust is contextual, i.e. you trust things to do certain things, and it's based on certain assurances. You ask about the question of proprietariness. Well, when it comes down to it, if you're writing code to be executed, you have to trust the thing which is doing the executing of the code. Now, I would be happier trusting it if it were open source. Frankly, that's how I am. And I mean open source all the way down to the hardware: there are some open source, open hardware projects, which are great. However, they are not how most people do computing at the moment. And that means that, in the end, we have to come down to a root of trust and an authority. In the case of SGX or TDX, that authority is Intel. So I have to base my trust on Intel: trust that they have done the things they have said they have done. Now, of course, I can trust the community to test that. It's "trust, but verify," which is the old Cold War thing, right? But we can't necessarily be sure, without being able to see it ourselves, that everything we're trusting has actually been checked. I'm currently reading a book called "A Vulnerable System." I started it just yesterday; it arrived yesterday. It talks about how this came about: people realizing that you can't be sure that you've checked everything. Just because a tiger team finds a vulnerability doesn't mean it's found all of them. Just because it hasn't found any vulnerabilities doesn't mean that there aren't some to be found. So we can, and we must, trust these proprietary pieces, and test them as we can.
And there's a really important part to be played here by academia, by researchers, by anyone in the commercial sphere looking at this. But when it comes down to it, you have to decide what to trust, and to do what. As I said, the more that's open source, the more opportunities we have to test. But when it comes down to it, you have to trust something. And the other thing, of course, that we're trusting is not just that the implementation is right, but that they're using the crypto right, and that is stuff we can test. So, for those people who aren't particularly familiar with Confidential Computing: it's about using what we call Trusted Execution Environments. These take code and data and keep them encrypted "in use," in your actual application, so that bad actors, or even the host system it's running on, can't look inside. And they also support attestation: they say, I can prove to you cryptographically that this Trusted Execution Environment (in Enarx we call them Keeps) has been set up correctly. So we can test the claims they're making and the certificate chain which underpins the attestation; we can test that it's correctly done. But when it comes down to it, we have to trust that the chips are doing what they say: Intel in this case, and AMD has a version, Arm has a version coming out, IBM has a version, RISC-V has a version. We have to trust them, to some degree, that they're doing what they say they're doing. Great question, Muhammed, very relevant to our talk. Thank you.
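The shape of the attestation check Mike describes can be sketched in a few lines. This is a deliberate simplification, NOT a real SGX, SEV, or Enarx protocol: real attestation uses asymmetric signatures and certificate chains rooted in the hardware vendor, where this toy stands in a shared HMAC key for the vendor's root of trust. The point is the two-part decision: the relying party trusts the TEE only if the vendor vouches for the report AND the measurement matches the workload it expected.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for the vendor's signing key (the root of trust in this model).
VENDOR_KEY = secrets.token_bytes(32)

def tee_launch_report(workload: bytes) -> dict:
    """Simulate a TEE producing a vendor-signed report of what it loaded."""
    measurement = hashlib.sha256(workload).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_attestation(report: dict, expected_workload: bytes) -> bool:
    """Relying-party check: vendor vouches for it AND it runs the code we expect."""
    good_sig = hmac.compare_digest(
        report["signature"],
        hmac.new(VENDOR_KEY, report["measurement"].encode(), hashlib.sha256).hexdigest(),
    )
    expected = hashlib.sha256(expected_workload).hexdigest()
    return good_sig and report["measurement"] == expected

report = tee_launch_report(b"my confidential workload")
print(verify_attestation(report, b"my confidential workload"))  # True
print(verify_attestation(report, b"tampered workload"))         # False
```

Notice what the verifier still has to take on trust: that the vendor's key really is held only by the vendor, and that the hardware honestly measures what it loads. Attestation moves trust around; it doesn't eliminate it.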

Stephen
Muhammed is continuing with his line of questioning, so why don't you follow down that thread.

Mike
So he says: "I appreciate your idea of explicit trust. This will give you more confidence in the designed system. Unfortunately, this is rarely the case in the examples you've given." And I wouldn't pick out any particular chip vendor for this; I think it's true across the industry. "How, in your opinion, could change be encouraged?" Well, trust but verify, I think. When we're talking about doing Confidential Computing, it's not just a chip; there's a whole software stack we need to be looking at. The more of that is open source, the better. And the smaller it is, the smaller the Trusted Compute Base, the better as well. The other thing is, I would love to see forums like the Confidential Computing Consortium, who are very kindly hosting this talk, encouraging their members to open source their hardware, open source their firmware, open source their microcode. I would love to see more of that. It may take a while, but we can keep our fingers crossed.

Stephen
Absolutely. Well, and that was something else: you've got an entire chapter on open source in the book, and I really love that, because, again, that was a shift back into this idea of philosophy and the humanities. We are social animals. At the end of the day, the entire concept of a trust relationship rests on that; we are social animals, and that's presumably an evolutionary behavioral trait that keeps us going forward. But the thing I loved in the chapter was that you build up these ideas of trust from people acting as a small, possibly singular individual maintainer, to a community of practice and the community itself, and you based it on Lave and Wenger's work, which I have not read. I keep coming up against it, though, and I know I have to read it now. And then how you judge trust in a project, based on things like the Linux Foundation's CHAOSS work and such. I have the good fortune right now to be teaching an open source software engineering course at Johns Hopkins University, and a substantial part of the course is teaching the students that just because it has the open source label on it doesn't mean that you can just reach into that bucket and play without considering the source. Now, I never framed any of it as an explicit trust relationship, but there's this idea of: how do you judge a project? And I thought you did a fabulous job in that chapter of bringing in these ideas of a framework to use as a trust judgment on a project. Can you walk us into that a little bit? Because you also have the pleasure and privilege of running the Enarx project with a group of fellow maintainers; that was something you started with them. So you're into this idea of trust in the community of practice, and then into a structure of how you judge something.

Mike
Yeah, there's an entire book in there, and there have been some very good books written around this. I wanted to apply some of these trust ideas and frameworks to it. I think the first thing is: when we say "I trust the project," what does that mean? Maybe let's talk first about the people I trust, and to do what. I'm stepping back from our more explicit and defined use of trust to a fuzzier, more human type of trust here, but we're trusting architects and designers to design the software. We're trusting developers to implement it well. We're trusting developers to review other people's code, and technical writers, testers, people to deploy it in the ways it's meant to be deployed. Because we see that not happening a lot, don't we? If I write some code that's brilliant, but you're running it in some other context, that's not going to help, because that's contextually inappropriate, right? We're hoping that people will report bugs, or will fix bugs if they come up. So there's a whole set of initial things about trusting particular people to do these sorts of things. And, you know, people enjoy getting involved in projects. Now, CHAOSS is about measuring the vitality and health of a project, really, not its trustworthiness. But there is an intersection between these things, because my view, and I think it's borne out by a lot of work done by people far better qualified than I am, is that if you have a healthy and vital project, a project with vitality, which has a lot of conversation, a lot of trusting of each other, and the space to raise problems, report bugs, and fix things, then the chances of all the other things falling into place are much higher. So I think there is an intersection between these things.
We've all seen projects that fall apart because the community doesn't work or stops working. There are lots of examples; I don't want to call out any in particular. On the other hand, we've also seen ones which you wouldn't necessarily expect to take off, which have taken off because the community really got behind them. So that's the first thing. The next thing is when we talk about trusting the project itself: what does that mean? I think it's more than just trusting that the implementation is correct. There's a lot more to it than that. Let's say we have a project which is implementing a cryptographic protocol. It may be that I trust the maintainer to be very good at reviewing code. But just because they're good at reviewing code doesn't mean that they're necessarily experts in the cryptographic protocol. The code may be written beautifully, but it may completely incorrectly implement this particular protocol. And that's a real problem, right? Or even if it implements it correctly at the moment, the tests for it may not probe all the different options. So when we say we trust a project, we need to trust that there are sufficient people on the project with sufficient expertise to be able to do all of these things. Otherwise, what does it mean to trust it? And then there are questions like: how do you build it? How can you be sure that what you receive from the website is what was actually reviewed and built? If problems are found, how quickly are they fixed? All those sorts of things. One final thing I would say that's really important: I truly believe that the best chance you have for secure software, or software which is as secure as possible, because I don't believe there is any perfectly secure anything, is for that software to be open source. And that is because there's the opportunity for lots of people to look at it.
There is a fallacy in the open source community, though, down to a misunderstanding of a very famous quote: that in open source, all bugs are shallow. What it means is that if something is open source, there's a decent chance that bugs will get found and fixed. And I think the problem is that it's the phrase "decent chance" that we miss, right? It's particularly difficult for open source security projects, because security is really hard. I've been doing security stuff for 20, 22 years, and I can scratch a tiny bit of one part of that huge thing that is security software, right? There aren't that many people who are experts, and if you have an open source security project that doesn't have those people looking at it, it doesn't matter that there are lots of eyes, because the right eyes aren't looking at it.

Stephen
Right. And that's Linus' law, you know: all bugs are shallow, given sufficient... I've always thought it was understood backwards from the way it was said. Going back to your earlier ideas: if you have a real community of people that are engaged in that body of software, then you have the pre-commit inspections going on. That's the ability to review on the front end, not "how fast can we find the bugs on the back end." Linus' law is a really good law; it's just really about front-end inspection and having that expertise in the community of people.

Mike
Absolutely, yeah. And I'm going to make a plug I've made here and elsewhere on my blog, "Alice, Eve and Bob" (aliceevebob.com): it is really important for commercial companies to commit experts to open source, because the people who are experts are likely to be employed. That's how the world works, right? And if they're employed just to do the stuff for their company, and not to contribute back, then the stuff that company is consuming will not be as good as it should be. So it's a beautiful positive feedback loop, and there are some great companies out there doing some great work, but if you consume open source software, you should be finding ways of giving back to that community.

Stephen
Absolutely. I mean, I walk people through this all the time: product teams, executives, the whole zoo. It's a really simple example of value capture. If you are consuming open source components into your customer-facing products and services, you get literally orders of magnitude of value capture on the software pieces alone. But if you don't contribute back, you're living on a fork. And this isn't giving back out of altruism; you really want to be giving back out of simple engineering economics. Forks get brittle very quickly, and then you end up in fork-management hell, rather than just contributing back any change you make, however small.

Mike
And it's also normalizing the practice, because, you know, it doesn't work if just one big company does it. But if it becomes the norm for companies, then they'll all contribute bits, and it all works together as a whole, and everyone benefits. This is when it all works. And the trust is built up because you have a community of practice. There we go. A labor of love, and... I've always said it that way, but I have no idea whether it's correct or not.

Stephen
You know, as an English-speaking Canadian, you see certain words that look French, and you usually assume they are. So I suspect it's... but I didn't see the accent on the end. Anyway, it doesn't really matter. We've got another question, though. Please go ahead.

Mike
Yeah, so Silvana says: "Hello, Mike: in chapter four, defining trust in computing, you talk about cryptography being one of the important tools. How do you think this can happen, considering some possibilities like quantum computing? Would that significantly affect the dynamics used in the trust process?" Oh, yes, it would. So let's just talk about why quantum computing is important here. Classical cryptography, that's "classical" as opposed to quantum cryptography, is generally based on some particularly difficult mathematical problems, which are easy in one direction and much harder in the other. Factoring the product of two large primes is the well-known example for one type, and there's elliptic curve cryptography as well. This is mathematics which gets really scary really quickly, and I'm not going to go through it because I'm not a cryptographer. But the point is that there is a significant likelihood, shall we say, that advances in technology and mathematics, and we'll come back to that, will mean that some of the forms of cryptography that we use at the moment may not be secure for that much longer. How long "that much longer" means, nobody seems entirely sure. I think probably 10 or 15 years is the minimum before we'd really expect to have problems; I could be wrong, but there we go. The other thing is that there are mathematical techniques which are also whittling away at some of these cryptographic protocols, which mean that they may fall for other reasons. So yes, one of the key tools that we use to establish and maintain trust in the world of computing, and the cloud, is cryptography, or more specifically, various cryptographic primitives: ciphers, and protocols built from them. And what you've brought up here is really the question of time: how long can we trust this for? We need to consider what the risks are.
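To make the asymmetry Mike describes concrete, here is a toy sketch in Python. This is illustrative only, not real cryptography: multiplying two primes is the easy direction, and recovering them by trial division is the hard one, with the gap growing rapidly as the numbers get bigger. The primes chosen here are tiny stand-ins; real RSA keys use primes of a thousand bits or more.

```python
# Toy illustration (NOT real cryptography) of the "easy one way,
# hard the other" asymmetry behind classical public-key schemes.

def multiply(p: int, q: int) -> int:
    # The easy direction: a single multiplication.
    return p * q

def factor(n: int) -> tuple[int, int]:
    # The hard direction: naive trial division over odd candidates.
    # For cryptographically sized n this is utterly infeasible.
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

p, q = 1000003, 1000033       # small primes; real keys are vastly larger
n = multiply(p, q)            # instant
assert factor(n) == (p, q)    # already takes ~500,000 loop iterations
```

The quantum threat, in these terms, is that Shor's algorithm would make the `factor` direction cheap as well, collapsing the asymmetry the scheme depends on.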
So if I'm encrypting something which is a bank transfer, which I know needs to be kept secure for the next 12 seconds whilst the transfer goes across, then I'm probably fine using standard, classical cryptography. If I'm trying to create something which I want to be safe for 25 or 30 years, let's say the deeds for a house, or a big company takeover, maybe, then maybe I need to start worrying and thinking about other ways to protect it: maybe combining different forms of cryptography, those sorts of techniques. So cryptography is one of the building blocks. And like all building blocks, we need to consider the time context of our trust in it. So that's a very good question, and the answer is a yes. I hope I've given enough of a reason for why it's a yes.

Stephen
Well, actually, Mike, you made reference to something there that I hadn't thought of before. I keep hearing, you know, the promise of quantum: that all of our cryptographic techniques to date will just burn in the forest fire of quantum when it shows up. But I hadn't thought of the inverse, looking through the telescope the other way: what math will we enable with quantum that will give us security in a quantum computing world, that we haven't had access to till now? I've joked for a long time that the reason we keep inventing more powerful compute is video for movies; I'm a big fan of certain films that do a lot of computer-generated work, and that's why we keep building more and more powerful computers. But that idea that we would have access to, and be able to perform, more powerful calculations in a cryptographic world... presumably there's math coming as well that we would have access to, that we just haven't had the compute power to do yet.

Mike
Yeah. What's really difficult is we don't really know what's coming yet, and what's going to work and what won't. Over the last few weeks, there's been news, rather bad news for the community, actually, from NIST, which is the National Institute of Standards and Technology. They were running a competition for people to come up with protocols which should be, I don't like the word post-quantum, but quantum-compute-resistant, or quantum-resistant. And there were three options, one of which has been shown to be insecure even against classical computing. It's really hard; this mathematics is well beyond my ken. So we just don't know, and I think this is going to be a really interesting area. If you have the sort of expertise to do it, please, please do, and I'd love to hear about it. I won't understand it, but I'd love to hear about it.

Stephen
Well, and not that they shouldn't be asking that question, but maybe NIST was asking the wrong question in that competition. It shouldn't just be which algorithms will survive quantum computing; maybe it should be: what are the next algorithms that are quantum-enabled?

Mike
I think it's a different competition, and we need to have that. Yeah, absolutely. It's a great set of questions. There's a question from Muhammad, I see: "Can you summarize the key ideas and your views of cloud versus edge from a trust perspective?" Not without going, I think, a little bit more into where I go with this in the book. So let me give a bit of how I've got to this stage. Part of the trajectory of the book, shall we say, is to say that the way computing has worked means that we've always been able to trust most those systems that we own and control to some degree. What ownership and control mean is an interesting question there, but let's just go with it for now. The problem is that we are, as a society, doing a different kind of computing at the moment, which is great, but it comes with risks that aren't talked about much. And that type of computing is cloud computing. The cloud is just somebody else's computer; even my mother-in-law understands this now, on a good day, and she'll tell me. And that's probably fine if I'm putting my holiday snaps up. But if you're a bank, or a healthcare provider, or a telecommunications company, or an energy supplier, putting your data and your algorithms in the cloud is not great. And it's not great for the simple reason that you don't own that computer. The way computing has worked up until about now is that if you own the computer, you can look inside anything that's running on it, and you can change anything that's running on it. Which means that if my bank runs from the cloud, anyone who controls that machine, whether that's the CSP, the cloud service provider, or someone who has compromised it, can see my data and can mess with it. And that makes me very unhappy. And it makes banks very unhappy, which is why they don't put all of their stuff in the cloud.
Now, Confidential Computing, which brings us nicely on, is a way of using some of the nice capabilities in certain modern chips, from AMD and Intel at the moment, although others are coming from Arm and IBM and the RISC-V world, to run your applications and keep your data in such a way that no one can look at them, even if they own the machine, even if they have hypervisor access, administrator access, kernel access. So that is huge. That's a really great thing. Because if we can make it easy to put applications with these sensitive data or algorithm concerns into the cloud, we can change how the industry works. And I'm a big fan of that. That's what Enarx tries to do, it's what the company I'm involved with, Profian, tries to do, and it's what the Confidential Computing Consortium is all about. Which brings us to the question of what the difference is between the cloud and the edge. I think there are a number of differences, but the key trust difference between the cloud and the edge is access. In the cloud, you generally expect that the host machine is under the general control of a particular known provider, like a cloud service provider. So it's pretty well secured from tampering, at least physical tampering, except by employees of that company. On the edge, things look very, very different. Let's say you're putting a machine in a stadium to do video processing, or up a telegraph pole to do 5G, or on a gas pipeline to provide a gateway for monitoring IoT devices, or on an oil rig, or in a drone in a war zone. All of these are edge situations where you've got to assume that the machine is more vulnerable to physical tampering, possibly in fact to logical, software tampering, than it would be if it were a cloud machine. Which means that you need to think harder about who you trust to do what, and which parts of the machine you trust.
So let's say you're in the cloud, and you're not using Confidential Computing: you're going to need to trust the cloud service provider, the operating system, the hypervisor, and the hardware before you start anything. On the edge, you also have to assume that anyone who can access that machine could tamper with any of those things. So the trust picture has just blown up. And there are a bunch of other differences between the cloud and the edge, which I do touch on a bit in the book, in terms of network access, attestation, and things like that, but I think those are probably the key aspects.
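The attestation Mike mentions is the mechanism that lets you shrink that trust picture: before sending secrets to a machine you don't own, you check a signed "measurement" of the code it will run. The following Python sketch is purely conceptual; the `Report` type and the HMAC standing in for a hardware signing key are hypothetical, not any real TEE API. In real systems the signature chains back to the silicon vendor's root of trust rather than a shared key.

```python
import hashlib
import hmac
from dataclasses import dataclass

# Hypothetical stand-in for the chip's attestation key; in reality the
# verifier checks a certificate chain rooted in the hardware vendor.
HARDWARE_KEY = b"vendor-root-of-trust-key"

@dataclass
class Report:
    measurement: bytes   # hash of the code loaded into the enclave/TEE
    signature: bytes     # produced by the (simulated) hardware key

def attest(workload: bytes) -> Report:
    # What the hardware does: measure the loaded code and sign it.
    m = hashlib.sha256(workload).digest()
    return Report(m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest())

def verify(report: Report, expected_workload: bytes) -> bool:
    # What the client does before releasing secrets:
    # 1. Is the signature genuine, i.e. from hardware we trust?
    good_sig = hmac.compare_digest(
        report.signature,
        hmac.new(HARDWARE_KEY, report.measurement, hashlib.sha256).digest())
    # 2. Is the measured code exactly the code we expected to run?
    good_code = report.measurement == hashlib.sha256(expected_workload).digest()
    return good_sig and good_code

workload = b"my-confidential-app-v1"
assert verify(attest(workload), workload)          # genuine, expected code
assert not verify(attest(b"tampered-app"), workload)  # wrong code: refuse
```

The point of the two checks is exactly the trust reduction Mike describes: instead of trusting the CSP, the OS, and the hypervisor, you trust the chip vendor's key and your own knowledge of the workload's hash.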

Stephen
Absolutely. I mean, I work for Microsoft, and I've had the pleasure of one of our data center tours in the Azure world, so I know what the physical security looks like there, and it's very impressive. Going in physically as a guest, the level of security and scanning is higher than anything you'd go through at an airport checkpoint, regardless of what you think about the theater of airport checkpoints. But the flip side is that when Microsoft released the Azure Sphere device as an IoT device, I was talking to a colleague who works on that project, and they were talking about hardware ACLs, "ackles". The idea that an access control list, a concept I've always thought of as a software concept, could be done in hardware was a surprise. But as you might imagine, Microsoft learned a lot from our Xbox experience, because an enormous number of people wanted to hack on the Xbox physically, and we put the device right in their living room for them to hack away at. So there's that idea of physical access in the physical, real world. You use a phrase really early in the book: programmatically encoded bias. And I laughed when I saw it, because the phrase is so perfect. As developers, we build models in our heads; we are modeling in software a process in the real world, and it doesn't have to be a physical process. I started my career in real-time systems, so I was modeling the physical world, and it's amazing how fast reality smacks you up the side of the head with your bad assumptions. But I love the phrase programmatically encoded bias, because that's a really fancy way of saying bad assumptions.
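The "soft" version of the access control list Stephen contrasts with hardware ACLs fits in a few lines. The subject and resource names below are illustrative, not from any real product; the design point is that a hardware ACL enforces the same subject-to-resource check in silicon, where a compromised operating system cannot simply patch the check out.

```python
# Minimal sketch of a software access-control list: each resource
# maps to the set of subjects allowed to touch it. (Names invented
# for illustration.)
acl: dict[str, set[str]] = {
    "uart0": {"app_core"},              # only the app core may use the UART
    "flash": {"app_core", "net_core"},  # both cores may access flash
}

def allowed(subject: str, resource: str) -> bool:
    # Default-deny: unknown resources or subjects are refused.
    return subject in acl.get(resource, set())

assert allowed("app_core", "uart0")
assert not allowed("net_core", "uart0")   # denied by the list
```

In software this check lives at the mercy of whoever controls the kernel; baking it into hardware moves it outside the attacker's reach, which is the surprise Stephen describes.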

Mike
Yes, but I think... Thanks for bringing it up, because it gives me the opportunity to make another plea, and that is for diversity. Because bad assumptions aren't always things that you can see; you may not be able to see them because of where you come from. Right? And diversity, I think, is important in open source, it's important in software, and it's even more important in security. Diversity of thought, diversity of background, diversity of education, diversity of ability, of language: all of these are vitally important. Because we can only code, and we can only test, what we can think of. There's a classic line, and I probably use it in the book as well, that any fool can create a cryptographic protocol that they themselves can't break. The point is that just because you can't break it does not mean that it's not breakable. You need attackers who have different views of the world, and they will undercut your assumptions; they will look at things differently. You've made the assumption that no one would do that, or that it wasn't possible, and they will just do it. And if you don't have people on your team, or in your open source community, looking at it and saying "what?", or "that's not the way I think of it", you're lost.

Stephen
Exactly that. So, we are actually coming up on time right now, and there are no other questions, so this is the last call for questions. But while we wait and see if any last questions come in, how do you want to summarize, Mike? How do you want to wrap up, aside from: please buy the book, it's wonderful... Show us the book again, that's great. Well, there you go.

Mike
So maybe I'll just read the last paragraph of the book: "I believe that more explicit consideration of trust is an obvious next step within the field of computer systems, and that we have to do it. I believe passionately that risk, and security and trust all fit together, and we need to talk about it." This is not a perfect book; I already have stuff I want to put into a second edition, if it ever happens. But it aims to provide a framework for us to discuss these issues. And if nothing else, I hope people will look at it and say, "This is wrong; let's talk about it in a different way." That would be great as far as I'm concerned. Thank you so much, Steve, I really appreciate it. And thanks to everyone who asked questions; some great questions, really good. Thank you to the CCC, the Confidential Computing Consortium, and the Linux Foundation, for hosting us.

Stephen
Well, and thanks for the time today, Mike. This has been great. Helen, can I ask you to take control and start wrapping us up, please? Kick us all out.

Mike
You can go to Enarx (enarx.dev) and get a chance to win the book. If you star the GitHub project, you have a chance to win a copy of the book for free.

Stephen
You're being a GitHub star *** right now. Wow.

Mike
Or go to aliceevebob.com, and I'll give away one of those there as well. So yeah, come along. Thank you very much indeed!