Episode 17

To Docker or not to Docker – how container technologies are changing our lives

"Just to clarify: the relationship between Docker and containers is a bit like 'Tempo' and 'tissue'."

 

Note: This podcast is exclusively available in German.

Hello and welcome back to "Computer Scientists Explain the World"! In today's episode we talk about Docker and containers. At first, this sounds more like a shipping harbour than software development, but we'll stay true to our area of expertise, we promise. Today's topic will be discussed by consistec's in-house software developers Dennis Groß and Konstantin Bauer: What is Docker? What are container technologies? What are they needed for? And what are their advantages and disadvantages?

Stay tuned and have fun listening!

 

Transcript.

 

Konstantin Bauer: Hello, welcome to a new episode of "Querverlinkt"! My voice is new, so I'd like to introduce myself briefly. My name is Konstantin, I'm a software developer at consistec, working on the backend. And luckily I get to share today's podcast moderation with Dennis, whom you may already know from episode 14 on the topic of the cloud. Hi, Dennis!

 

Dennis Groß: Hi! I'd like to introduce myself again for everyone. I'm Dennis, I'm a software developer as well, working frontend and backend at consistec.

 

Konstantin Bauer: It's nice to see you again, live and in colour. The last meetings we had were always via a screen, via camera. I have prepared a little "Matrix" moment for the start: I didn't bring any pills, but cards, and you can decide.

 

Dennis Groß: Okay! So now there is a red card and another green card?

 

Konstantin Bauer: There is a red and a blue card. Nothing bad will happen either way, I can promise you that.

 

Dennis Groß: I'm brave, I think I'll take the red card.

 

Konstantin Bauer: I think you'll like it. You are a music fan. Here's the question, just shout it out: Do you know the following song lyrics? "I woke up drenched in sweat after a night of horror. I've never dreamt anything so stupid. I was in a real panic. I was standing on the beach and there was a whale in front of me. It was still alive, I was alone, it was such torture. I'm not the strongest, the animal weighed a ton, and the waves were shouting: Push the whale back into the sea". Do you know it?

 

Dennis Groß: I heard that once, but I actually don't know who wrote it.

 

Konstantin Bauer: It doesn't matter. It's an old hit by "Die Toten Hosen". But you can probably guess why I chose these lyrics.

 

Dennis Groß: Because today we're talking about container technologies. Specifically, it's about Docker and how container technologies have strongly shaped our lives as software developers and still do. And since the whale is a bit of a Docker mascot, that fits the context quite well, of course.

 

Konstantin Bauer: Yes, exactly! Whenever I hear "whale", this song comes to mind, so I thought maybe you know it too. But exactly, that's our topic today, Docker and containers. Docker started back in 2013, so it's been out in the world for a few years now. What was it like for you back then? When did you first come across Docker, and what was your first encounter with it?

 

Dennis Groß: The first time I heard about Docker was from friends of mine. They founded a start-up at university and told me they were now using Docker and that it was great because it made deployment so much easier. I actually didn't know it before, so I went straight to the Internet and typed in Docker. I found documentation about it relatively quickly. But at the beginning it was relatively difficult for me to really understand what exactly a container is, what it does, and how to work with it. It took me quite a long time to really understand this concept of containers, because it is quite extensive and you can do a lot with it. And that's why we want to take a look at the whole thing today and think about it a bit: What do I actually need it for? What do I actually do with it?

 

Konstantin Bauer: Yes. We definitely want to approach it from that point of view. It was similar for me, for a long time I had no idea how I would actually use it. Now today we are at the point where we use it very regularly and really love to use it. And for all those who have only ever encountered Docker, but have not yet used it themselves, we would like to offer a little introduction. And maybe those who already know Docker can still take away one or two new things.

 

Dennis Groß: Maybe you'll start by explaining to people what a container is and what it's for.

 

Konstantin Bauer: Yes. Maybe we'll start like this. A container is a unit into which I can pack an application I have developed together with all its dependencies, and I can run this container as a shippable unit locally or on any other machine.

 

Dennis Groß: And we want to go into the technical background a little bit today. Konstantin, you have now explained what the basic idea of a container is. Now explain to the people what containers should actually be used for.

 

Konstantin Bauer: At the moment, I use containers a lot in integration tests. I use them to start dependencies before a test, for example a database: I have a clean environment in the container, I can fill it with data, and at the end of the test the container is removed again. I also think back to my time as a working student. I didn't use such a setup; I tested against a shared test environment instead, played around with my tests a bit, and suddenly all the data in the database was gone. Whether it was sensible to do it that way is another question, but by then it was too late, it had happened. And it wouldn't have happened with containers.

 

Dennis Groß: So this typical drop table ...

 

Konstantin Bauer: Yes, exactly!

 

Dennis Groß: ... test written.

 

Konstantin Bauer: But unfortunately I affected everyone and not just myself. What was it like for you then? Do you also have a situation like that in mind?

 

Dennis Groß: The last time I would have been better off using Docker was not that long ago. I made a release of a piece of Python software, a framework that we offer to our customers. It built, the tests passed on my machine, they also passed for my colleague, and they passed on our Jenkins, which is a CI/CD tool where you can run tests automatically. It was running in a VM and we thought, "Good! It runs." When we released it, it didn't take long before the customer contacted us: we had simply forgotten about half of the dependencies in the requirements file. And that's simply because everything was already installed on our laptops, and on the Jenkins machine as well. We just didn't notice that something was missing from the dependencies. If we had used a container, it would have been a completely fresh environment where you start from scratch, and the problem would have been noticed immediately: the software would not have started at all, because dependencies were missing from the requirements file.

I know that you did something with Docker not too long ago. Maybe you can explain where you last really used Docker as a software developer, for what.

 

Konstantin Bauer: My current go-to scenario for Docker is in the area of testing, i.e. integration testing. If I have a component that uses an external dependency such as a database, and I want to test whether my code works, in the sense that it queries the database correctly and processes the data returned by the database correctly, then I start a Docker container for the test instead of relying on some existing shared test environment. That's simply a great scenario, because I can really work with the database, but everything runs locally on my computer, I'm on my own, and I don't run the risk of interfering with someone else's data.

 

Dennis Groß: In this big use case, it's about integration tests that test the different components together, and that needs a certain test setup, in your case the database. And I would like to automate that somehow, so that this database is available in the system in which I am testing. And ultimately you use Docker for that.

 

Konstantin Bauer: And in this respect, containers are ideal because they are very short-lived. I can start the container, do something with it, the container is terminated, and that's it. That is definitely a field of application for containers.
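The start-test-terminate cycle Konstantin describes can be sketched with plain Docker commands (a sketch that assumes a local Docker daemon and the official Postgres image; the container name, port and password are made up for illustration):

```shell
# Start a disposable Postgres just for this test run
# (tag, name and credentials are illustrative).
docker run -d --name it-test-db \
  -e POSTGRES_PASSWORD=test \
  -p 5433:5432 \
  postgres:16

# ... run the integration tests against localhost:5433 here ...

# Tear down: the container and all its data disappear completely.
docker rm -f it-test-db
```

Because the container starts from a fresh image every time, each test run sees an empty database, and removing the container discards all state.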

 

Dennis Groß: That's what we're doing a lot of now in the area where I'm currently working at consistec. We have a Jenkins, a CI/CD server where you can automate tasks, run tests, things like that. We have also switched to containers there and use them a lot. We have some jobs where first a piece of software is built, then it is tested whether everything works, and then it is perhaps deployed. We simply found that this is much easier with containers than when you run such Jenkins jobs, as they are called, directly on a VM, because with containers you always have a completely fresh environment. Then you suddenly realise: ah okay, this Python project is still missing a few dependencies. If you run it directly on the VM, you might have already installed the missing things manually and then simply don't notice when something is absent from the declared dependencies.

 

Konstantin Bauer: You also don't clutter your own system with dependencies that you might only need for a short time.

 

Dennis Groß: Those are two big use cases: CI/CD, i.e. Continuous Integration / Continuous Delivery, and testing, of course. Another big use case is development itself and, later, productive systems. It's quite good if you make your product available in a container on the target system and can also run the same container relatively easily on your own developer laptop. In the end, you have a really strong connection between the development environment and the productive environment. This means that if I have an error in the productive environment, it is relatively likely that I can reproduce it in my development environment with the container.

 

Konstantin Bauer: It's the classic problem that everyone has probably run into at some point: "runs on my machine", and then I pass the software on and it suddenly doesn't work on another machine.

 

Dennis Groß: The typical situation: something runs on some productive VM, someone dialled in with SSH at eight o'clock on a Monday evening, moved a few config files around, maybe installed something else, and then the dependencies and the configuration on the productive system are slightly different, and you wonder why it doesn't run the same way on the development system on my laptop. A container simply helps because it always provides the same environment.

 

Konstantin Bauer: I think these were classic application scenarios. We have already talked a lot about Docker and containers. Should we take a step back and roll it up again: what is the whole thing, exactly?

 

Dennis Groß: Exactly!

 

Konstantin Bauer: Should we perhaps also briefly explain how Docker and containers are connected?

 

Dennis Groß: Perhaps we should first explain the difference between a container and a virtual machine. Most people already know virtual machines, but many may not yet know how containers differ. The idea of a virtual machine is basically that I have concrete hardware lying around somewhere in my data centre, i.e. compute, storage and network, and I try to divide it up a bit. I don't want just one supercomputer on one piece of hardware; I want, in principle, several small PCs. Then you have a module called a hypervisor, which divides this hardware into virtualised hardware. What comes out when I bundle this virtualised network, memory and compute together is a virtual machine. There is an operating system on it, and this operating system has a kernel. This kernel knows how to handle the virtual resources, for example how something is ultimately stored. That is the VM: out of one concrete piece of hardware, I make many small VMs, many small PCs, so to speak. That has advantages, but it also has disadvantages. Such a VM is relatively large, usually in the range of several gigabytes. Now, of course, you want to deploy different pieces of software in separate systems so that they are isolated from each other. You can classically go and create a separate VM for each piece of software, for each service, but that is relatively time-consuming and resource-intensive, because this virtualisation is very expensive. So the container was developed: a container is basically a process that simply runs on the virtual machine and virtualises the operating system, so to speak. What that means is: I can have a Docker container that contains an Ubuntu operating system.
However, this Ubuntu operating system does not have its own kernel; it simply uses the kernel of the virtual machine's operating system. This means that the container runs on the virtual machine as a process, while for the user inside the container it looks as if he has a fully-fledged Ubuntu operating system. That is basically the idea: the virtualisation of an operating system. Of course, this also makes it much more resource-efficient.

 

Konstantin Bauer: I usually look at this through a developer's glasses, not an operations person's. I know from my own experience that when I set up a virtual machine, it wasn't automated: I manually start VirtualBox, install an image, and then, once the image is installed, I have to do some configuration before I can even install software. A lot of time has already passed by then. With Docker, that has become easier. When I want to run something, the product's documentation usually already contains the "docker run" command somewhere, and I've started something super easily. It usually happens faster, too.

 

Dennis Groß: That is one of the main reasons. If you compare the two, a VM is several gigabytes in size, while a container is several megabytes if the image is well written. Starting up a VM, i.e. doing all the virtualisation with the hypervisor, usually takes several minutes, whereas starting a container really takes seconds. That simply gives you completely different possibilities, and that's why containers are so popular: they are very resource- and time-efficient.

 

Konstantin Bauer: Maybe I should briefly mention that the relationship between Docker and containers is a bit like "Tempo" and "tissue".

 

Dennis Groß: Exactly!

 

Konstantin Bauer: So containers are a concept, and Docker is a concrete tool in the field of container technology.

 

Dennis Groß: There are various abstraction layers and standards in this container world. We don't want to go into detail about that now, because it would take us too far. Docker is relatively well known because they made containers popular, I would say. Docker basically offers a tool set for building and running containers, but there are also many other tools. Among these tools there is also a so-called container runtime, which actually executes the container images. So there are relatively many abstraction layers. Ultimately, it's a fairly modular system, and Docker is simply a company that provides a tool set for this container technology.

 

Konstantin Bauer: You're right, they have definitely managed to push the whole topic and somehow create an ecosystem that has now established this whole container technology.

 

Dennis Groß: I don't even think they were the first to use containers or...

 

Konstantin Bauer: Yes, you're right!

 

Dennis Groß: ... invented them; the idea is a bit older. But they were, I think, definitely instrumental in its success and in it becoming such a big topic. Nowadays, it's basically the new standard to deploy your software in a container and no longer directly on the VM. It's becoming more and more common; the trend is simply going in that direction. That's why it's really worth it for anyone listening who is perhaps not yet familiar with this to invest the time and look into it, because as a software developer, container technologies give you many more options for deploying your software. And nowadays, with this DevOps culture, we are not only there to write code, but also to deploy it. So it definitely makes sense to deal with container technologies as a developer.

 

Konstantin Bauer: Yes. I just remembered that in our preparation we also came across Katacoda, an e-learning platform from O'Reilly. They also offer lessons on Docker; we can put that in the show notes. There's also an environment where Docker is already pre-installed. So if you don't want to install it on your computer but would like to try it, it's highly recommended to click through and simply run a few commands. Maybe that also makes the whole concept a bit more tangible, because you type in commands by hand and see what happens. And you can't break anything, which might also be reassuring.

 

Dennis Groß: That is always important, that is always reassuring. I think it makes sense to learn that way. When I first heard about Docker, I tried to understand at a high level what it actually was. There are quite a few abstract concepts that come along with it; we'll go into some of them and the whole Docker ecosystem in a moment. But in practice, I think it's best to just try something hands-on. And with this Katacoda, you have a terminal on the website and can just try things out, a whole workflow from writing the image to running the container. Then I think you understand quite well how the individual concepts are interconnected.

 

Konstantin Bauer: In the best case, that should take away the last bit of hesitation about actually running a Docker command yourself.

 

Dennis Groß: Then I wanted to ask you another thing. I think many people have heard that Docker is like an onion. That is always used as a figure of speech. It's always interesting to actually explain it: What exactly is meant by that? What does a Docker container have in common with an onion? What do these things have to do with each other?

 

Konstantin Bauer: I think that's one of the advantages: I don't have to start from scratch. There are so-called images that are based on an operating system, let's say Ubuntu, and on this basis packages can be pre-installed, such as Node, including NPM as its package manager. If I now develop a project with Node, I can save myself this step and look for a Node image instead. Most environments are also available as official images, and that's what you should look for: Node, .NET, something with Java, Python. That means that on top of the layer with a basic operating system, I am offered a layer with a further environment, which I can then build on again. That's how these layers are created. Which brings us to the terms image and container: an image is this stack of individual layers, which are write-protected, and a container differs from an image in that it puts a writable layer on top, which the container can work with.

 

Dennis Groß: Ultimately, there is a special Docker or container syntax with which I can describe what a certain layer of this container onion looks like. The idea is: I don't always have to start with a bare Ubuntu operating system; I can simply add another layer. For example, I can use a Docker image as a basis which already has Node.js installed, and this Node.js image is perhaps itself based on an Ubuntu image. That makes things relatively easy, of course, because you can start at whichever level you want, or have to. If you compare this with a VM, the workflow there is more like: I have made a VM available via the hypervisor, and now I have to get in with SSH and install everything by hand. If I want Node.js, I have to install it, then I have to install NPM, and so on and so forth. With Docker, the whole thing runs via images. That means it's automated: I use the images to describe to Docker how the container starts up and what is installed in it. And the nice thing is this onion idea: I can reuse things and start where I feel comfortable. In practice, that saves me a lot of time in development.
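The onion idea, read-only layers underneath with one writable layer on top, can be illustrated with a small Python analogy (purely illustrative, not how Docker is actually implemented): reads fall through the stack of layers, while writes only ever touch the top one.

```python
from collections import ChainMap

# Read-only "image layers", from bottom to top of the onion
base_os = {"libc": "2.31", "shell": "bash"}       # base operating system layer
runtime = {"node": "18.x", "npm": "9.x"}          # runtime layer built on top
app     = {"app": "my-service"}                   # application layer

# The running "container" adds a writable layer on top.
# Reads fall through to lower layers; writes land in the top layer only.
writable = {}
container_fs = ChainMap(writable, app, runtime, base_os)

container_fs["logfile"] = "output.log"   # write goes to the writable layer
assert container_fs["node"] == "18.x"    # read falls through to the runtime layer
assert "logfile" not in app              # the image layers stay unchanged
```

This is also why images are so reusable: the lower layers are never modified, so many containers and derived images can share them.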

 

Konstantin Bauer: Totally! Exactly! You never actually start from scratch; you look for prefabricated images and start from there.

 

Dennis Groß: And the nice thing is that many open-source communities offer such official Docker images. There are a lot of things you can set up this way, and you save yourself the work; you get something first-hand and don't have to laboriously configure it yourself. We've already talked a lot about Docker, but now it might be interesting to talk a little about the Docker ecosystem, because there are so many different parts. We have talked about the container image, which describes what the container should ultimately look like, and we have talked about the container, which in the end is the process that runs. But there is something in between. Konstantin, can you tell us what it is?

 

Konstantin Bauer: When I want to build my own image, I write a Dockerfile, which describes, in a certain syntax, what my image looks like. It usually starts with a FROM instruction, i.e. what you base the whole thing on. As we just said, that can be Ubuntu or Node, for example. I can then add instructions to copy certain files over from my project, I can install additional packages, and I can execute commands, for example to install NPM packages, if we stay with that example. And usually the last instruction says how my application is executed.
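As a sketch, a Dockerfile for a small Node application along the lines Konstantin describes might look like this (the image tag, file names and start command are illustrative assumptions, not from the episode):

```dockerfile
# Base layer: the official Node image, itself built on a Linux base image
FROM node:18

# Copy the project files from the developer's machine into the image
WORKDIR /app
COPY package.json package-lock.json ./

# Install the NPM dependencies; this becomes an additional image layer
RUN npm install

COPY . .

# The last instruction: how the application is started
CMD ["node", "index.js"]
```

Such an image would typically be built with something like `docker build -t myapp .` and started with `docker run myapp` (hypothetical names).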

 

Dennis Groß: If I am a user and I want to deploy a container from start to finish, what do I have to do? So the first step: I have to write a Dockerfile.

 

Konstantin Bauer: Exactly! You can also, if you develop your own application, write a Dockerfile for your application.

 

Dennis Groß: I write in this Dockerfile syntax basically what my image looks like. Then I build a Docker image and give the image a name, a so-called tag. Then I have this image and I can run it: I call docker run with the image name, and Docker knows how to start a running container from this image. That is the use case where I want to run the container on my ...

 

Konstantin Bauer: So far we have only been working locally.

 

Dennis Groß: But what if you don't want to start your container on your PC, where you wrote and built the Dockerfile, but you want to start it on your productive system? Then the image must somehow ...

 

Konstantin Bauer: ... be distributed.

 

Dennis Groß: ... be published.

 

Konstantin Bauer: Exactly! That's where the container registries come into play. With Docker, you issue a push command, and the image is pushed to the registry and is then accessible to everyone else via the registry. In the Docker world, the best-known registry is Docker Hub, but GitLab and GitHub now also offer options for hosting images directly alongside their repositories. Someone else who wants to use this image can then access it via the registry.

 

Dennis Groß: In principle, the registry is the database for container images that have already been built. And when someone does a docker pull, for example of a Node.js image, this is usually a container image that comes from Docker Hub. If you take a closer look, you will also see in the console output that the image is pulled, i.e. downloaded from Docker Hub, and then started. Good! We have already explained quite a lot about Docker; should we now evaluate the topic a bit? What is your final impression of Docker and container technologies?
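The publish/consume cycle just described boils down to a few commands (a sketch assuming a local Docker daemon and that you are logged in to the registry; the registry path and tag are made-up placeholders):

```shell
# On the developer machine: build the image from the Dockerfile
# and give it a name (tag) that includes the registry path.
docker build -t registry.example.com/myapp:1.0 .

# Publish the image to the registry
docker push registry.example.com/myapp:1.0

# On the production host: fetch the image and start a container from it
docker pull registry.example.com/myapp:1.0
docker run --rm registry.example.com/myapp:1.0
```

The pull step is optional in practice, since docker run downloads a missing image automatically, which is exactly the console output Dennis mentions.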

 

Konstantin Bauer: Maybe you can already guess it: we are definitely little Docker fanboys, we use it a lot in our everyday work. And if someone out there now says, 'but I can't use Docker because …', then that will certainly have its reasons. What, for you, could be a particular reason not to use Docker?

 

Dennis Groß: Just as you said, Docker is not a golden hammer; there are a few use cases where you shouldn't use it. What comes to mind are legacy applications, i.e. relatively old software that has a large number of dependencies, that communicates with several databases, for example, that addresses the databases very directly, i.e. does not use protocols such as HTTP, and that requires very specific conditions in a virtual machine. You often have problems containerising these things, i.e. writing Docker images for them. And often it doesn't make much practical sense, because these are often systems that will be redeveloped anyway, on a "green field", where you would probably start with a container approach from day one, so porting the old system is no longer worthwhile. Maybe one last aspect that comes to mind is that container technologies have made it much easier to run software. For example, say I want a Postgres database. In the past, I had to install it, depending on what operating system I have, Windows, Mac or Linux, then configure and start it. And I can only have one database; I can't simply install two Postgres instances on my system. There are certainly ways to do that, but it's complicated. Nowadays it's enough to do a docker run, which loads the Postgres image from Docker Hub, and it runs. This works not only with things like Postgres databases, but also with proprietary software. For example, if we develop a backend service in Scala, we "containerise" it, which means we write a Docker image that describes how this software runs in a container. Then anyone can easily deploy the software: in the end, it's just a matter of building the image from the Dockerfile and then docker run. The rest is automated. That means I don't have to be so involved in the details of the software.

 

Konstantin Bauer: I don't have to get into that tooling at all, i.e. what do I need for Scala in particular, which tool do I need to build an image?

 

Dennis Groß: That also makes the separation a bit easier: the separation between the person who develops the software and the person who deploys it. Sure, nowadays the boundaries are a bit blurred, but in practice there is always a slight separation. There are always people who are more responsible for operations, for deployment and maintenance, and people who are responsible for development. And for this separation, it's easier if you have a container, because the person who deploys it knows how to deploy it; there are these standard tools, from Docker for example. Now, of course, we've talked a lot about Docker, and we've slowly come to the end. I would like to summarise what we have talked about. We started off by explaining what exactly a container is and what the difference is between a container and a virtual machine. We explained the connection: that the container basically runs as a process on the virtual machine. We then went into where this onion metaphor comes from, what an onion has to do with containers: that there are different container image layers on which you build, and that there is great reusability in this container ecosystem. And maybe you can recap the keyword ecosystem, what we said about it.

 

Konstantin Bauer: We also heard earlier that there is something like a registry through which images can be distributed. We just heard about the difference between an image and a container. And above all, you heard a bit about our use cases, i.e. that containers are particularly useful in the area of CI/CD, in testing and that containers have made many things possible in the area of cloud computing in the first place.

 

Dennis Groß: That's it for today's episode. Thank you for tuning in again, and we hope you will join us again for the next topic.

 

Konstantin Bauer: I had a lot of fun too, Dennis.

 

Dennis Groß: See you then!

 

Konstantin Bauer: Bye! Ciao!

 

There! That's it for today. We hope you enjoyed today's episode and that we were able to bring you a little closer to the topic of Docker and container technologies. You can find links to the current episode in the show notes. And if you are interested in the wonderful world of software development, we would be happy if you subscribed. For our episode on March 3rd, we have Thomas and a very special guest in store for you. Until then, have a good time! Stay healthy!

 

 

 

 

 

 

 

 

                                                                                                 
