I was almost convinced that app deployment on battery-powered robots running Ubuntu is better done with Docker than with a non-Docker approach, for several reasons:
- atomic updates (if pulling a new container version fails, the current version is not messed up)
- transactional updates (Docker downloads only the invalidated layers)
- easier version control of the whole runtime stack through the Dockerfile
- out-of-the-box support for cloud-provider features built around container technologies, such as
  - dynamically spawning containers for parallel simulation
  - CI/CD
  - end-to-end lifecycle management through solutions like Azure IoT Edge, Azure IoT
But I came across a new requirement: limited Internet connectivity.
Our robots may not be able to connect to high-speed Internet directly. If a robot can connect at all, it will always be through another machine connected to it, and over a considerably slower link (say 500 KBps). I am trying to evaluate whether limited Internet connectivity undoes all of the above benefits.
I am thinking of packaging my app with two Dockerfiles (rough sketches below):
- EnvDockerfile: contains the Ubuntu base image and the dependency libraries (at least 1 GB in size)
- AppDockerfile: based on the image built from EnvDockerfile. Multi-stage, with the first stage building the app and the second stage copying only the build artefacts into the final image (at most 100 MB)
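Roughly what I have in mind for the two Dockerfiles (image names, package names, and paths are placeholders, not our real ones):

```dockerfile
# EnvDockerfile (sketch): Ubuntu base plus dependency libraries; rebuilt rarely.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libboost-all-dev libopencv-dev \
    && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# AppDockerfile (sketch): multi-stage build on top of the env image
# (assumes the env image was tagged robot-env:1.0 and contains the toolchain).
FROM robot-env:1.0 AS build
COPY src/ /src
RUN make -C /src

FROM robot-env:1.0
COPY --from=build /src/build/robot-app /usr/local/bin/robot-app
CMD ["/usr/local/bin/robot-app"]
```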
Here are my specific questions:
Q1. Will the update process be OK if our device can connect to 500 KBps Internet?
Our compiled app is about 50 MB. Even at 500 KBps, I expect an update to take at most a couple of minutes (50 MB / 500 KBps ≈ 100 seconds). This can also be automated using a script running on the robot, something like the sketch below.
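A minimal sketch of the on-robot update script I am imagining (registry, image name, and container name are placeholders):

```bash
#!/bin/sh
set -e

IMAGE="registry.example.com/robot-app:latest"   # placeholder image reference

# Pull downloads only the changed layers; if it fails, the running image is untouched.
docker pull "$IMAGE"

# Swap the container only after the pull succeeded.
docker rm -f robot-app 2>/dev/null || true
docker run -d --restart unless-stopped --name robot-app "$IMAGE"
```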
Q2. What if the robot cannot access the Internet at all?
The solution we thought of is to update the Docker image on another device connected to the robot (again, it would only download the ~50 MB of invalidated layers), export it as a tar, copy it to the robot over USB, and then update the container on the robot from that tar. This can also be automated by a script running on the connected device. The concern is the size of the tar and the update duration: I guess the tar will contain the whole image, including the Ubuntu base layers as well as the app layers, possibly several GBs. Copying it to the robot will take considerable time, and then it has to be loaded on the robot and the container recreated from the loaded image. Am I thinking about this process correctly? It does not seem like a feasible approach. Or is there some other approach, say creating a tar of only the updated layers?
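The offline workflow I am picturing, roughly (names and paths are placeholders); as far as I understand, `docker save` writes out all layers of the image, which is exactly my size concern:

```bash
# On the connected device (has Internet): pull the update and export the image.
docker pull registry.example.com/robot-app:latest                              # downloads only changed layers
docker save registry.example.com/robot-app:latest -o /media/usb/robot-app.tar  # full image, all layers

# On the robot (no Internet): load the tar and recreate the container.
docker load -i /media/usb/robot-app.tar
docker rm -f robot-app 2>/dev/null || true
docker run -d --restart unless-stopped --name robot-app registry.example.com/robot-app:latest
```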
Q3. Will changing the base image invalidate all subsequent layers?
Assume I have created the images explained above (EnvDockerfile and AppDockerfile). Will changing the Ubuntu base image version, or the version of any dependency library in EnvDockerfile, invalidate all layers in both EnvDockerfile and AppDockerfile, requiring the whole ~1050 MB of Docker image to be updated? Is there a better way to architect the Dockerfiles to avoid this?
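To make the question concrete, this is the kind of layer ordering I mean inside EnvDockerfile (package names are placeholders); my understanding is that a change to one instruction invalidates only that layer and everything after it, so I am wondering whether splitting rarely-changing and frequently-changing dependencies like this actually helps:

```dockerfile
FROM ubuntu:22.04

# Heavy, rarely changing dependencies in their own layer.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libboost-all-dev libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# Smaller, more frequently bumped dependencies in a later layer, so updating
# them should not invalidate the heavy layer above.
RUN apt-get update && apt-get install -y --no-install-recommends \
        libfoo1 \
    && rm -rf /var/lib/apt/lists/*
```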
Q4. Would a non-Docker approach be more suitable in this scenario?
I am not sure what a non-Docker approach would look like. It could be a script that updates just the target dependency library with `apt-get`, without having to download GBs of data (though we would not be able to update the host Ubuntu itself that way). But I guess this forfeits all the benefits of Docker-based deployment listed above: version-controlling updates becomes harder, updates are no longer atomic, updating the host OS is out of the question, and we lose the CI/CD and simulation support provided by cloud providers. Something like the sketch below is what I have in mind.
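A minimal sketch of such a non-Docker update script (package name, binary path, and service name are placeholders):

```bash
#!/bin/sh
set -e

# Upgrade only the one dependency library that changed.
apt-get update
apt-get install -y --only-upgrade libfoo1        # placeholder package name

# Replace the app binary that was copied onto the robot (e.g. over USB).
systemctl stop robot-app                         # placeholder systemd service
cp /media/usb/robot-app /usr/local/bin/robot-app
systemctl start robot-app
```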