I attended my first DockerCon last week; here are my final thoughts.
Docker is here to stay! Which is a good thing. I could see the passion in all of their employees, and they are going to continue to evolve their product line to provide the software development and operations world with great tools.
My goals for the conference were to see not only how I can apply this technology to my everyday life at work, but how I can apply it in my open source projects as well. Hopefully I could meet some great people along the way! Oh, and eat some brisket. Mmmmmmmm, brisket!
Likes: This conference was super focused; nothing even slightly unrelated to containers was mentioned. It was nice to not have any distractions. The conference staff and Docker employees were extremely nice and helpful! I liked that the vendor expo had extended hours. Many times at a conference I am forced to choose between attending sessions or talking with vendors at the expo; I didn't have to do that at DockerCon, as there was plenty of time for both.
Dislikes: The workshop I signed up for was horrible. We spent the first 30 minutes doing hands-on work, and the other 2.5 hours listening to two guys read their slides; it was not very helpful. Overall, I wish there would have been more Windows content. I know it's new, but it was lacking. In general I feel like there could have been more sessions. It appeared that all of the "here's how we did it" sessions were from huge companies, with nothing in the SMB range to compare myself to. My last criticism is that the community theaters were horrible to attend; people were pushing and shoving to get closer because it was very difficult to hear. I liked the concept, but the delivery just didn't work out.
I'm very interested to learn more about and test drive VMware's vSphere Integrated Containers (VIC). Since I work with VMware products a lot, it seems like a logical choice. I also like that they are providing a complete solution here, with Harbor, Admiral, and the Photon Platform (minus Windows).
Windows has a long way to go. It works, but it's not ready for prime time. The images are too large, and the pool of available applications is too small for it to be a versatile, enterprise-ready solution. It may be useful if someone is already running Hyper-V as their hypervisor, but I assume this is not the case for most.
Less is more! I learned quite a bit, but the message that never changed was to keep reducing the size of your container images! Abby Fuller had a great session on this that kept playing over in my head.
YOU ARE NOT ALONE!
I already updated my Windows images to conserve space. I saved about 1 GB on my PowerTools image, and several MB on my Nano image, by following some simple tricks learned in Abby Fuller's session. Check it out on Docker Hub.
Soon Windows will allow you to run Linux and Windows containers at the same time from the same host OS, which is exciting. It will allow me to create a small application on a single host without having to use a Windows application stack, while still using my PowerShell containers for processing the work.
I also learned how image layers work, and that I should avoid leaving files on the layers in order to conserve space. This pushed me to upload some of my modules to the PowerShell Gallery to provide easy installation and a smaller footprint. So that's kinda cool!
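As a sketch of what that means in practice (the base image, module name, and paths here are made up for illustration): every RUN instruction creates a layer, so files deleted by a later instruction still occupy space in the earlier layer. Cleanup has to happen in the same instruction, and installing modules straight from the PowerShell Gallery avoids copying install files into the image at all.

```dockerfile
FROM microsoft/nanoserver

# Wasteful: the installer lives on in the first layer even though a
# later layer deletes it, so the image does not shrink.
#   RUN powershell Invoke-WebRequest http://example.com/tool.msi -OutFile c:\temp\tool.msi
#   RUN powershell Remove-Item c:\temp\tool.msi

# Better: install from the PowerShell Gallery and clean up any scratch
# files within the same instruction, so nothing extra is committed.
RUN powershell -Command "Install-Module -Name SomeModule -Force; Remove-Item -Recurse -Force c:\temp\* -ErrorAction SilentlyContinue"
```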
I need to get to it and start developing a plan to migrate from VMs to containers!
Last, but not least:
I was worried that I was very behind the curve on Docker, but in talking with quite a few attendees, it appears we are all in the same boat. There is plenty of help out there and many folks in the same situation. Join the community, join the Slack channels, and start asking questions. It's a very helpful and welcoming community, which is very refreshing! Give Docker a test drive and see how it can help your organization run faster and leaner. I encourage everyone to throw a bathing suit on and go ride a whale!
I've updated my Windows Docker image to include PowerNSX, PowervRA, and Vester. Why? Just because. I'm sure there are folks out there who may want to explore Docker for Windows and Windows containers. So why not use a container that you can apply to your daily tasks as a VMware admin?
You can check out my previous blogs on Docker and PowerCLI to understand how to get your environment setup.
The really cool thing about the Docker container is that you are guaranteed a clean, consistent environment each time. And it's quick! (Well, after the initial download of the image.)
Here is the list of available modules in my new Docker Image:
Hi everyone! This idea really stemmed from the VMworld 2016 Hackathon in Vegas. Yes, it really took me this much time to get this thing put together. The project was initially intended to be used with Pester and NSX. This new project is a user interface that wraps Vester, with the hope that it will expand to other PowerShell unit/integration tests or configuration management tools. I'd love to start playing with PowerNSX and Vester! The great part about this tool is that you can share a single interface with a team or group of system administrators; not everyone will need to install and run command-line tests. I work on a team of 11 folks with varying skill levels, and this tool allows us to share a single interface to inspect our entire vSphere infrastructure, remediate configuration drift, report on problems, and compare historical data easily.
Right now XesterUI is a completely separate project from Vester. Somewhere down the road we will need to determine what the right model is and whether or not to start integrating the two code bases. At this time, no code changes are needed in order to run XesterUI on top of Vester.
Side note: I do not claim to be a web developer. I chose the WAMP stack because it is very easy for a system administrator to manage, update, and customize. This tool is intended to be functional first, pretty second. I will take any help I can get with making the pages prettier, but I will not sacrifice the ability of a junior or mid-level SA to make changes that could help their organization just for the sake of looks.
Lastly, this is a beta release. I'm interested in feedback on whether this will be useful, as well as improvements and feature requests. Nothing is off the table at this point!
TestRun – A set of specified tests that run on specific Targets.
System – A group of Targets; must contain at least one vCenter.
Target – A single vSphere entity: vCenter, host, cluster, VM, etc.
Queue Manager – Manages the queue of tests, assigns them to an appropriate TestRun Manager, and aborts cancelled TestRuns.
TestRun Manager – A process that can execute a TestRun workflow.
Workflow – A wrapper script for Vester.
Database – Collects all of the metadata about the vSphere systems and TestRun results.
User Interface – Set of web pages where you are able to slice and dice the TestRun data, submit TestRuns, Remediate, etc.
How does it work?
XesterUI simply wraps Vester right now. There is a workflow script that is executed by the TestRun Manager. The wrapper script gets test information from the database, then executes Vester, specifying the config.json file and whether or not to remediate problems. After the test completes, the script parses the XML result file and imports the TestRun data as records in the database. Once the data is imported, the user is free to view, sort, and filter the TestRun data.
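To illustrate the parsing step (in Python rather than the project's PowerShell, and with a made-up XML sample rather than real Vester output), flattening the NUnit-style result file into records ready for database inserts looks roughly like this:

```python
import xml.etree.ElementTree as ET

# Illustrative sample of NUnit-style XML like Pester/Vester writes; the real
# output carries many more attributes, this is just enough to show the idea.
SAMPLE = """\
<test-results total="3" failures="1">
  <test-suite name="Cluster-DRS">
    <results>
      <test-case name="DRS-Enabled" result="Success"/>
      <test-case name="DRS-Mode" result="Failure"/>
      <test-case name="HA-Enabled" result="Success"/>
    </results>
  </test-suite>
</test-results>
"""

def parse_testcases(xml_text):
    """Flatten every test-case into (name, result) records for DB inserts."""
    root = ET.fromstring(xml_text)
    return [(tc.get("name"), tc.get("result")) for tc in root.iter("test-case")]

print(parse_testcases(SAMPLE))
# -> [('DRS-Enabled', 'Success'), ('DRS-Mode', 'Failure'), ('HA-Enabled', 'Success')]
```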
Setup is not difficult. I'm still working on a deployment script (and better documentation) that will make things fairly simple; until then, follow these quick directions (and let me know if you have any problems):
Download and install:
WampServer 2.5 or newer
PowerWamp project (place in c:\openprojects\PowerWamp)
PowerLumber project (place in c:\openprojects\PowerLumber)
XesterUI project (place in c:\openprojects\XesterUI)
Note: The components do not all have to be deployed on the same machine. You can deploy the Queue Manager and TestRun Managers to different machines (though I'm not sure splitting out the Queue Manager makes sense at this point). Additionally, you can install the 'AMP' portion on a Linux host, as long as it has access to a CIFS share where the log files are accessible to all parties. Setting up multiple TestRun Managers could prove very useful if you have multiple sites and low WAN speeds.
Status and Results:
Here is a list of the statuses and results included in the sample data. I highly recommend you use the default IDs and status/result text. Feel free to change the HTML color, though!
The result of an overall TestRun will bubble up from the lowest test case.
A Critical result means that the total number of test cases reported in the XML did not get inserted into the database.
Pass means that ALL test cases passed and are accounted for.
Fail indicates that at least one test case has failed.
Agent_Error indicates that the system failed somewhere.
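The bubble-up rules above can be sketched as a small function (illustrative pseudologic in Python, not XesterUI's actual code; the status strings mirror the list above):

```python
def overall_result(expected_total, results):
    """Roll a TestRun's result up from its test cases.

    expected_total: the test-case count reported in the XML
    results: the per-test-case results that made it into the database
    """
    if len(results) != expected_total:
        return "Critical"      # cases reported in the XML never reached the DB
    if any(r == "Agent_Error" for r in results):
        return "Agent_Error"   # the system itself failed somewhere
    if any(r == "Fail" for r in results):
        return "Fail"          # at least one test case failed
    return "Pass"              # everything passed and is accounted for

print(overall_result(3, ["Pass", "Pass", "Pass"]))  # -> Pass
print(overall_result(3, ["Pass", "Fail"]))          # -> Critical (one case missing)
```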
From this page you have several options: you can view the TestRun log file or the result XML.
All of the TestRun workflow messages will be logged here.
This is the XML produced by Vester/Pester.
There are two options for viewing test case results for a specific TestRun: drilling down by TestSuite, or viewing all test cases.
Viewing all TestSuites of a TestRun. Failures bubble up.
Viewing all TestCases from a TestRun, sortable, filterable data.
From the TestSuite page, you are able to drill down and view the Testcases:
On most of the UI pages you are able to sort any column and filter on strings. If a test case has failed, you can view the stack trace and resubmit the test with the remediate flag.
Submitting a new TestRun is easy: click the Systems link, and in the row of the system you'd like to test, enter a TestRun name, specify whether you want to remediate, and click 'Submit TestRun'.
You are also able to view the targets from a specified system or all systems.
Additionally, if you click on a target you have the option to view all previous TestRuns and the results.
Creating a new System or Target is easy; simply fill out the supplied forms.
Only basic information can be entered for a new Target. I’ll work on making this more robust.
Systems are straightforward to add.
Lastly, you can view the Queue Manager and TestRun Manager log files from their respective pages. Additionally, you can shut down the managers (there is a ticket open to be able to start those components as well). The log files for TestRuns and the managers are stored in 'c:\XesterUI\component' (Queue_Manager, TestRun_Manager, TestRuns\TestName_ID).
View status and logs, and shut down the Queue Manager
View status and logs, and shut down the TestRun Manager
Hopefully this gives you an overview of all the features of this tool, and how useful it can be to have all of that data in a searchable/sortable format. Note that Targets do not need to be populated into the DB before you run your first test (with the exception of a vCenter). If a Target does not exist, it will be created when the XML is converted and the results are inserted into the DB.
Build config files on the fly for an individual ‘target’ or groups of ‘targets’.
Multiple vCenters per System.
Ability to create a config.json file by clicking through the web.
Creating a useful exportable report (potentially email reporting).
Ability to abort a test.
vCenter config.json files are more appropriate than ‘System’ config files.
There is currently a problem with the SMTP check not returning a vCenter 'Target' in the XML, which causes a problem when inserting the data into the DB. I know the Target is a vCenter, but I am not given the Target name. I'll work on tracking down why that's missing.
I'm not a security expert, so taking a look at WAMP best practices would be prudent in your environment. I can certainly give you advice, but do your due diligence and make sure you are meeting your organization's security goals. There are several areas of this product that could present a security risk. Keep in mind this project is currently in beta and will continue to mature over time. Let me know if you have any questions or concerns.
That's it: seven lines, and I have imported a ton of functions that will help me speed up the rate at which I can write custom code and scripts. With the newer versions of VS Code it is no longer necessary to update your preferences to load a PowerShell profile; it just loads by default now. Oh, of course, this does require the PowerShell extension!
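For reference, a profile along these lines is all it takes. The module list below is just an example pulled from the tools mentioned on this blog, so swap in your own:

```powershell
# Microsoft.VSCode_profile.ps1 (or $PROFILE) - loaded by the PowerShell extension
Import-Module PowerWamp
Import-Module PowerLumber
Import-Module PowerNSX
Import-Module PowervRA
Import-Module Vester
Import-Module VMware.PowerCLI
Write-Host "Custom modules loaded"
```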
Once you create your profile script to import your specified modules, simply reload VS Code, open a PowerShell file, and witness the magic. See below: 'Invoke-MySQLInsert' is a function from one of my imported modules, PowerWamp.
To show you the awesomeness of PowerCLI + Windows + Docker, I'm going to walk through installing Docker on Windows, pulling an image, running the container, performing a task against a VMware infrastructure, and interacting with a MySQL database.
Double-click the .msi, select 'I accept', and click Install.
Wait for about 30 seconds, and it finishes! Super quick. Go ahead and check the 'Launch Docker' box, and select Finish.
You’ll see in the notification area that Docker will try to start.
But it will fail! Oh no! (It's okay, we expected that, since we did not enable Hyper-V.)
Select 'Ok' to allow Docker to enable Hyper-V. It will also restart your Windows operating system in order to properly enable the Windows feature. Thank goodness for SSDs!
One thing I like to do on my Windows operating systems is show my notification icons on the taskbar. Right-click on the taskbar and select Settings, then scroll down and select 'Select which icons appear on the taskbar' (Windows must be activated; not sure why).
Then just swipe the ones you want or don’t want.
After the reboot, go back to the notification area and right-click on the Docker whale (not that you have any other whales down there). Then select 'Switch to Windows containers…'.
It will ask you to select Ok and reboot to enable Windows containers.
BOOM, It’s running, we’re ready!!
I also like to pin PowerShell to the taskbar. Click the Windows button, type 'powershell', right-click, and select 'Pin to taskbar'.
Here are some reference links to the items we will go over:
WOW! Needless to say, I was not super optimistic when I started the Docker discovery project. Man, was I wrong; I am super impressed with the latest release and will continue to explore all the potential uses for Windows containers. It was probably about 1.5 to 2 years ago that I first heard of Docker. Generally speaking, I do most of my work with Windows, so it was essential that I could run the engine on Windows. When I first gave it a shot, I started reading the documentation, and it said I had to load VirtualBox, install Ubuntu, etc. NOT A WINDOWS solution at all. It was laughable! I mean, just say it's Docker in a box that you can run on a Windows machine.
Anyway, no big deal; I was fine with waiting, and I'm glad I never gave up. The past few hours have been well worth it, and with some of my upcoming projects, the discovery project I started is going to pay off big time. Another huge piece of the equation was that Microsoft finally supported Hyper-V in a nested environment, and finally added native support for the Docker container engine.
So, a quick rundown of my current test environment: I'm running a MacBook Pro with VMware Fusion installed (thank you, @VMworld 2016, for the key!). Inside of Fusion I have a Windows 10 x64 VM, where all the Docker magic happens.
I first started with what I would consider the normal scenario: someone wanting to run a Linux container. So I enabled Hyper-V. Don't forget to edit the base VM to enable VT-x, or you will get errors. Oh, and make sure your VM has enough resources; that bit me in the ass a few times as I continued to redeploy. I followed the steps found here to get Linux containers to run on Windows. It was simple and flawless. I can think of so many applications where these containers could be useful in my daily job.
Ok, on to what I've been excited for: Windows containers on Windows. Sadly, this is all about Windows 10. I don't have licensing for Server 2016, and it's not worth wasting my time on a trial when Windows 10 works perfectly fine. My first mistake was mixing up the page linked above (which I used to install Docker) and this page for running Windows containers. I kept looking for a toggle switch for the Docker engine to switch from Linux to Windows, and I thought I was crazy. RTFM! And when you are done… RTFM again! When I was trying to run Windows containers, that feature was only available in the beta, and I hadn't installed the beta; I had installed the latest stable release. That was a simple problem to fix.
The link above is probably a bit more complicated than it needs to be for installing; now you can just download the latest software from Docker. Enable the Hyper-V feature, ensure the Windows machine has VT-x (virtualization technology) enabled, then install Docker; the installer does all the work for you.
Then the real fun begins. At first, I admit, I didn't get it. What's the benefit? Why am I doing this? Why doesn't it work? What did I do wrong? Am I wasting my time? What else could I be doing? Golf?
If you have Docker installed and have toggled over to the Windows engine, I recommend you start on this page at the 'Running Windows Containers' section, and stop at 'Using docker-compose on Windows'. I never ran through the music store or IIS type demos; my primary use is PowerShell applications.
After doing the 'Hello World' piece I needed a break, so I took a couple weeks off. I'm not sure if that was a good or a bad thing, because it took me a few minutes to get my mind back into the idea of what Docker could do for me and how to get it to work. I found a page on Microsoft's website that outlined how to create a Dockerfile. This was the single greatest link I found; it really laid out how this file can build an image, and how I can tailor the container to my needs. Once I read this page, I was ready to go and quickly started playing with building images. Did it go smoothly? NO. It was a lot of trial and error to get the commands I wanted to run to work properly. Nothing too crazy, just tedious at first. I also started with windowsservercore instead of Nano, so that I was working with a 'full' OS.
That's it. Of course I have more plans, but if I can get these pieces working, I am in business! Problem #1: VMware does not let you download PowerCLI from the Internet without logging into their website, so I had to load those files onto the host. I chose 6.5 because they are just modules; no true install is required. Hooray! Ok, I know the VMware guys are wondering why I am not using the PowerCLI Docker image. I currently have one reason: the PowerShell/.NET MySQL connector. I need it, and it won't work on PowerShell for Linux (to my knowledge) or Nano (see below).
# Invoke-Automation Windows Dockerfile
# Indicates that the windowsservercore image will be used as the base image.
FROM microsoft/windowsservercore
# Metadata indicating an image maintainer.
MAINTAINER jpsider
# Copy install files to the container
COPY mysql-connector.msi c:/temp/
COPY powercli.exe c:/temp/
# Install PowerCLI
RUN powershell Start-Process c:\temp\powercli.exe -ArgumentList '/s /qn /w /V"/qr"' -Wait
# Move PowerCLI Modules to correct Directory
RUN powershell Move-Item -Path 'C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Modules\*' -Destination 'C:\Program Files\WindowsPowerShell\Modules'
# Install MySql connector
RUN powershell Start-Process c:\temp\mysql-connector.msi -ArgumentList '/quiet /passive' -Wait
# Copy PowerWamp Module to container
ADD https://raw.github.com/jpsider/PowerWamp/master/powerWamp.psm1 c:/temp/
# Copy PowerLumber Module to container
ADD https://raw.github.com/jpsider/PowerLumber/master/PowerLumber.psm1 c:/temp/
# Add powershell script
COPY verifyInstall.ps1 c:/temp/
# Validate Imported Modules
RUN powershell -executionpolicy bypass c:\temp\verifyInstall.ps1
# Sets a command or process that will run each time a container is run from the new image.
CMD [ "powershell" ]
Once you have Docker installed, you can pull my image:
docker pull jpsider/invoke-automation
# then run the Container
docker run -it jpsider/invoke-automation
# Run this file to import the modules into the console
This will import the modules mentioned above into the current console and write a list of active modules to a log file in the c:\temp directory.
I've started a PowerShell module to standardize my logging across all of my projects. I kept using the same code over and over, so this will be a great time saver when I start a new project or script. Right now I've kept it pretty basic, but I'd like to add more advanced functions around rolling logs by day, hour, or a user-specified interval. If you have any other requests, drop me a message or add an issue.
The Write-Log function gives you the ability to write to just the console, just a log file, or both, which can be extremely useful, even when developing or testing a script in the console. Being able to quickly control whether something gets written to the console by adding a parameter has been much easier than switching my code between Write-Host and Write-Output all the time.
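The pattern itself is simple but handy. Here it is sketched in Python as an analogue only; this is not the actual PowerLumber Write-Log signature, just the console/file/both idea:

```python
def write_log(message, logfile=None, console=True):
    """Write a message to the console, a log file, or both (illustrative analogue)."""
    if console:
        print(message)
    if logfile:
        with open(logfile, "a") as f:
            f.write(message + "\n")

write_log("starting run")                                      # console only
write_log("saved to disk", logfile="demo.log", console=False)  # file only
write_log("both places", logfile="demo.log")                   # console + file
```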