The release of Ubuntu ‘Bionic Beaver’ 18.04 is important. Not only is it the LTS – with five years’ worth of support – that will see millions of users installing Ubuntu for the first time with GNOME firmly nestled in the desktop environment slot, but it could be the release that sees Canonical, the company behind Ubuntu, through an IPO. We spoke to Will Cooke, Canonical’s desktop director, and David Bitton, engineering manager of Ubuntu Server, about the overall goals for Ubuntu 18.04 LTS and future plans.
WILL COOKE: So we’re at another LTS release, which comes with five years’ worth of support. And that’s important to our typical user base, because they don’t want to be having to… well, they want to be safe in the knowledge that the platform that they’re working on, and that they rely on, is going to be secure and up to date, and is going to be kept running for a long time.
Typically, we find that most of our users like to install it once, then leave it alone, knowing that it’ll look after itself. That’s more important in the cloud environment than it is on the desktop, perhaps. But the joy of Ubuntu is that the packages you run on your desktop follow you to the cloud. Let’s say that you’re a web developer, and you want to run an Apache instance and a MySQL instance, and you want to have your developer tools on there. You can do all of that development on your machine, and then deploy it to the cloud, running the same version of Ubuntu, and be safe in the knowledge that the packages installed on your desktop are exactly the same as the ones in your enterprise installation.
And having those supported for five years means that you don’t have to keep upgrading your machines. And when you’ve got thousands of machines deployed in the cloud in some way, the last thing you want to be doing is maintaining and upgrading those every single year, and dealing with all the fallout that happens there.
So the overarching theme for Ubuntu in 18.04 is this ability to develop locally and deploy to your servers – whether that’s the public cloud, your private cloud, whatever you want to do. But also edge devices, as well.
So we’ve made lots of advances in our Ubuntu Core products, which is a really small, cut-down version of Ubuntu, which ships with just the bare minimum that you need to bring a device up and get it on the network.
And so, the packages that you can deploy to your servers, to your desktop, can also be deployed to IoT devices, to edge devices, to your network switches – you know, across the board. And that gives you a really unparalleled reliability: you know that the stuff you’re working on can be packaged up, pushed out to these other devices, and it will work there in the same way that it works on your desktop.
And a key player in that story is the snap packages that we’ve been working on. These are self-contained binaries that work not only on Ubuntu, but also on Fedora or CentOS or Arch.
So as an application developer, for example, […] you can bundle up all of those dependencies into a self-contained package, and then push that out to your various devices. And you know that it will work, whether they run Ubuntu or not.
That’s a really powerful message to developers: do your work on Ubuntu; package it up; and push it out to whatever device is running Linux, and you can rely on it continuing to work for the next five years.
What is the common problem that developers have with DEBs and RPMs that’s led to the development of the snaps format?
WC: There are a few. Packaging DEBs – or RPMs, for that matter – is a bit of a black art. There’s a certain amount of magic involved in that. And the learning process to go through it, to understand how to correctly package something as a DEB or RPM – the barrier to entry is pretty high, there. So snaps simplify a lot of that.
Again, part of it, really, is this ability to bundle all the dependencies with it. If you package your application and you say, “OK, I depend on this version of this library for this architecture,” then the dependency resolution might take care of that for you. It probably would do.
But as soon as your underlying OS changes that library, for example, then your package breaks. And you can never be quite sure where that package is going to be deployed, and what version of what operating system it’s going to end up on.
So by bundling all of that into a snap, then you are absolutely certain that all of your dependencies are shipped along with your application. So when it gets to the other end, it will open and run correctly.
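As a rough illustration of what Cooke describes, here’s a minimal sketch of a snapcraft.yaml – the recipe file snapcraft builds snaps from. The snap name, part name and staged library are all made up for the example; the point is that the application and its dependencies are declared together, so the snap carries them along rather than trusting the host OS:

```yaml
# Hypothetical snapcraft.yaml – names and versions are illustrative only.
name: webtool             # made-up snap name for this sketch
version: '1.0'
summary: Example of bundling dependencies into a snap
description: |
  The application and the libraries it needs are built and
  staged together, so the snap is self-contained.
base: core18              # runtime base built from Ubuntu 18.04 packages
confinement: strict       # run inside the security sandbox
parts:
  webtool:
    plugin: python
    source: .
    stage-packages:
      - libssl1.1         # bundled into the snap, not taken from the host
```

Because the libraries listed under `stage-packages` are shipped inside the snap itself, an update to the host’s copy of that library can’t break the application – which is exactly the failure mode described above.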
The other key feature of snaps, in my mind, is the security confinement aspect. X.Org, for example, is a bit long in the tooth now. It was never really designed with secure computing in mind. And it’s not just X.Org, actually – it’s the whole OS: if something is running as root, or it’s running as your user, then it has the permissions of the user that’s running it.
So you could install an application that, for example, goes into your home directory, goes into your SSH keys directory, makes a copy of those, and emails them off somewhere. It will do that with the same permissions as the user that’s running it. And yeah, that’s a real concern.
With snaps and confinements, you can say, “This application, this snap, is not allowed access to those things.” It physically won’t be able to read those files off the disk. They don’t exist as far as it’s concerned.
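In snap terms, that permission model is expressed through interfaces: a confined snap only gets the access its declared “plugs” grant it. A hedged sketch of the relevant fragment of a snapcraft.yaml (the app name and command are hypothetical):

```yaml
# Hypothetical apps section showing interface-based confinement.
apps:
  webtool:
    command: bin/webtool
    plugs:
      - network           # allowed: talk to the network
      - home              # allowed: read non-hidden files in $HOME
# Anything not plugged is invisible to the confined process –
# and even the home interface excludes top-level dot-files,
# so a directory like ~/.ssh stays out of reach.
```

On a running system you can inspect which interfaces a snap has connected with `snap interfaces`, which is how a user would verify the kind of guarantee Cooke describes.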
So from a user’s perspective, you can download this new application because you heard about it on the internet. You don’t know what it is, you don’t know where it comes from, but you can install it and you can run it, safe in the knowledge that it’s not going to be able to just walk over your disk and have a look through all these files that you don’t necessarily want it to have access to.
So those, in my mind, are the two key stories: the write-once-run-anywhere side of things, and then the confinement security aspect as well.