Scripting with VS Code is Awesome! Import Custom Modules into IntelliSense.

I absolutely fell in love with using Microsoft's VS Code for writing PowerShell scripts. One of the best features is the ability to import custom modules into the IntelliSense feature.

It can be a simple or complicated script. Luc does a great job with his script to capture all kinds of options for PowerCLI on his blog. But what about other modules you might want to use?

Simply add them to the script located here (you might need to create it first):


C:\Users\<USERNAME>\Documents\WindowsPowershell\Microsoft.VSCode_profile.ps1

Here is my example profile script:

Function Enable-Modules {
    Get-Module -ListAvailable VMware* | Import-Module
    Import-Module C:\OPEN_PROJECTS\PowerLumber\PowerLumber.psm1
    Import-Module C:\OPEN_PROJECTS\PowerWamp\powerWamp.psm1
}
Enable-Modules

That’s it, seven lines and I have imported a ton of functions that will help me speed up the rate at which I write custom code and scripts. With the newer versions of VS Code it is no longer necessary to update your preferences to load a PowerShell profile; it just loads by default now. Oh, and of course this does require the PowerShell extension!

Once you create your profile script to import your specified modules, simply reload VS Code. Open a PowerShell file and witness the magic. See below: ‘Invoke-MySQLInsert’ is a function from one of my imported modules, PowerWamp.

intellisense

And this shows my other custom module PowerLumber:

lumber

Think about how much time tab completion saves you in the console, and it’s available in VS Code!
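If the profile doesn’t seem to load, you can check which profile path the integrated console is using. This is a minimal sketch relying on the built-in $PROFILE variable (the path it reports is host-specific, so verify it matches the one above):

```
# Run inside the VS Code integrated PowerShell console.
# $PROFILE holds the profile path for the current host.
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
Write-Output "Profile in use: $PROFILE"
```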

Using a PowerCLI+Windows Container, From the Top!

To show you the awesomeness of PowerCLI + Windows + Docker, I’m going to walk through installing Docker on Windows, pulling an image, running the container, performing a task against a VMware infrastructure, and interacting with a MySQL database.

I mentioned in my last post that the reason I need a Windows container for PowerCLI vs. the Linux one is that some .NET assemblies are missing for a module I use to work with MySQL.

To get started I created a brand new base Windows 10 Pro x64 VM in VMware Workstation. Be sure to enable nested virtualization on the VM (VT-x). I also gave it 40 GB of storage and 8 GB of memory.

Quick Reference for Software versions I used for this demo:

  • VMware Workstation: 12.1.0
  • VMware PowerCLI: 6.5 R1
  • Docker for Windows: 17.03.0-ce-win1 (10296)
  • PowerShell: 5.1.14393.206

Step 1: Install Docker

First, download the Docker for Windows stable version.

img1

Double-click the .msi, select ‘I accept’, and click Install.

img2

Wait about 30 seconds and it finishes! Super quick. Go ahead and check the ‘Launch Docker’ box and select Finish.

img3

You’ll see in the notification area that Docker will try to start.

img4

But it will fail! Oh No! (It’s okay, we expected that since we did not enable Hyper-V)

img8

Select ‘Ok’ to allow Docker to enable Hyper-V. It will also restart your Windows operating system in order to properly enable the Windows feature. Thank goodness for SSDs!

One thing I like to do on my Windows systems is show my notification icons on the taskbar. Right-click the taskbar and select Settings, scroll down, and select ‘Select which icons appear on the taskbar’ (Windows must be activated) <—– not sure why?

img5

Then just swipe the ones you want or don’t want.

img7

After the reboot, go back to the notification area and right-click the Docker whale (not that you have any other whales down there). Then select ‘Switch to Windows containers…’

img6

It will ask you to select Ok and reboot to enable Windows containers.

img9

BOOM, it’s running, we’re ready!!

img11

I also like to pin PowerShell to the taskbar. Click the Windows button, type ‘powershell’, right-click, and select ‘Pin to taskbar’.

img10

Here are some reference links to the items we will go over

Step 2: Let’s get started!

Open up PowerShell (as admin) and run:

docker images

You should get some headers, with no containers listed.
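With no images pulled yet, that output is just the header row, along these lines:

```
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
```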

Now run:

docker pull jpsider/invoke-automation
docker pull microsoft/windowsservercore

img13

It may take some time to download; Windows containers are much larger than the Linux ones. This pulls my custom image, which includes PowerCLI 6.5, PowerWamp, and PowerLumber on Windows Server Core.

After it completes, run:

docker images

Now you should see the new jpsider/invoke-automation image!

img12

Now let’s run the Container, type:

docker run -it jpsider/invoke-automation

It will open a console in your active console window. Type hostname and hit Enter (it should return the name of the container).

img14

Now run these 2 commands:

cat c:\temp\modules.log
c:\temp\verifyInstall.ps1 

The first command shows the modules installed during creation of the image; the second imports them into the active console. Now type:

get-module

img15
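For reference, a script like verifyInstall.ps1 can be sketched along these lines. This is a hypothetical simplification: the module paths and log location are assumptions, and the actual script baked into the image may differ.

```
# Hypothetical sketch; the real verifyInstall.ps1 in the image may differ.
$modules = @(
    'C:\temp\powerWamp.psm1',
    'C:\temp\PowerLumber.psm1'
)
foreach ($module in $modules) {
    Import-Module $module
}
# Record the active modules so the install can be verified later.
Get-Module | Select-Object Name, Version | Out-File -FilePath 'C:\temp\modules.log'
```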

SWEET! Now we have a running container with our desired modules; let’s get a new script to run! Use these commands to copy my file down, or any file you want.

$webclient = New-Object System.Net.WebClient
$filepath = "C:\temp\demoScript.ps1"
$url = "https://raw.github.com/jpsider/Invoke-Automation/master/Docker/demoScript.ps1"
$webclient.DownloadFile($url,$filepath)
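On PowerShell 3.0 and later, the built-in Invoke-WebRequest cmdlet does the same download in one line, if you prefer it over System.Net.WebClient:

```
# Equivalent one-line download using a built-in cmdlet.
Invoke-WebRequest -Uri "https://raw.github.com/jpsider/Invoke-Automation/master/Docker/demoScript.ps1" -OutFile "C:\temp\demoScript.ps1"
```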

This script will do a few things to prove modules are imported and everything is working as expected…

  • Create and write a log file
  • Connect to an ESX server
  • Print the modules to the console and log file
  • Print the VMs to the console and log file
  • Query a remote DB for some info and print it out
  • Insert the VM data into the DB

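The steps above can be sketched roughly like this. This is a hedged outline, not the real demoScript.ps1: the server address and credentials are placeholders, and the DB query/insert step is left as a comment because PowerWamp’s exact call shape isn’t shown here.

```
# Hypothetical outline of the demo flow; IP and credentials are placeholders.
$logfile = "C:\temp\dockerDemo.log"
Write-Log "Connecting to ESX" -Logfile $logfile
Connect-VIServer -Server "192.168.1.50" -User "root" -Password "VMware1!"

# Print modules and VMs to both console and log.
Get-Module | Out-String | ForEach-Object { Write-Log $_ -Logfile $logfile }
$vms = Get-VM
$vms | Out-String | ForEach-Object { Write-Log $_ -Logfile $logfile }

# Query the remote DB and insert the VM data using PowerWamp's MySQL
# functions (e.g. Invoke-MySQLInsert, as shown earlier in this post).

Disconnect-VIServer -Confirm:$false
```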
Simply run the script:

C:\temp\demoScript.ps1

You’ll be able to see the VM information in the DB, and you can cat the log file.

img18

cat c:\temp\dockerDemo.log

img17

On your Docker host, you can open a second PowerShell window and run:

docker ps

img16

That’s it, head back to the container console window and type exit to kill the container.

In that second window run:

docker ps

which will show it’s all gone now!

img19

Hopefully you found this useful. I will have some follow-up posts that go into more detail on how I plan to use all of this together to manage infrastructure systems.

Docker on Windows 10, are containers for me?

WOW! Needless to say I was not super optimistic when I started the Docker discovery project. Man, was I wrong; I am super impressed with the latest release and will continue to explore all the potential uses for Windows containers. It was probably 1.5 to 2 years ago that I first heard of Docker. Generally speaking I do most of my work on Windows, so it was essential that I could run the engine there. When I first gave it a shot, I started reading the documentation and it said I had to load VirtualBox, install Ubuntu, etc. NOT A WINDOWS solution at all. It was laughable! I mean, just say it’s Docker in a box that you can run on a Windows machine.

Anyway, no big deal. I was fine with waiting, and I’m glad I never gave up. The past few hours have been well worth it, and with some of my upcoming projects the discovery project I started is going to pay off big time. Another huge piece of the equation was that Microsoft finally supported Hyper-V in a nested environment, and finally added native support for the Docker container engine.

So a quick rundown of my current test environment… I’m running a MacBook Pro with VMware Fusion installed (thank you @VMworld 2016 for the key!). Inside of Fusion I have a Windows 10 x64 VM, where all the Docker magic happens.

I first started with what I would consider the normal scenario: someone wanting to run a Linux container. So I enabled Hyper-V (don’t forget to edit the base VM to enable VT-x or you will get errors). Oh, and make sure your VM has enough physical resources; that bit me in the ass a few times as I continued to redeploy. I followed the steps found here to get Linux containers to run on Windows. It was simple and flawless. I can think of so many applications where these containers could be useful in my daily job.

Ok, on to what I’ve been excited for: Windows containers on Windows. Sadly this is all about Windows 10; I don’t have licensing for Server 2016, and it’s not worth wasting my time on a trial when Windows 10 works perfectly fine. My first mistake was mixing up the page linked above (which I used to install Docker) and this page for running Windows containers. I kept looking for a toggle switch for the Docker engine to switch from Linux to Windows; I thought I was crazy. RTFM! And when you are done… RTFM again! When I was trying to run Windows containers, that feature was only available in the beta, and I hadn’t installed the beta; I had installed the latest stable release. That was a simple problem to fix…

The link above is probably a bit more complicated than it needs to be for installing. Now you can just download the latest software from Docker: enable the Hyper-V feature, ensure the Windows machine has VT-x (virtualization technology) enabled, then install Docker; the installer does all the work for you.

Then the real fun begins, at first, I admit, I didn’t get it. What’s the benefit? Why am I doing this? Why doesn’t it work? What did I do wrong? Am I wasting my time? What else could I be doing? Golf?

So if you have Docker installed and have toggled over to the Windows-on-Windows engine, I recommend you start on this page at the ‘Running Windows Containers’ section, and stop at ‘Using docker-compose on Windows’. I never ran through the music store or IIS type demos; my primary use is PowerShell applications.

So after doing the ‘Hello World’ piece I needed a break, so I took a couple weeks off. I’m not sure if that was a good or a bad thing, because it took me a few minutes to get my mind back into the idea of what Docker could do for me and how to get it to work. I found a page on Microsoft’s website that outlined how to create a Dockerfile. This was the single greatest link I found: it really laid out the concept of how this file can build an image, and how I can tailor the container for my needs. Once I read this page I was ready to go, and quickly started playing with building images. Did it go smoothly? NO. It was a lot of trial and error to get the commands I wanted to run to work properly. Nothing too crazy, just tedious at first. I also started with WindowsServerCore vs. Nano, so that I was working with a ‘full’ OS.

My Initial requirements:

 

That’s it. Of course I have more plans, but if I could get these pieces working, I’m in business! Problem #1: VMware does not let you download PowerCLI from the Internet without logging into their website, so I had to load those files on the host. I chose 6.5 because they are just modules; no true install required. Hooray! Ok, I know the VMware guys are wondering why I am not using the PowerCLI Docker image. I currently only have one reason: the PowerShell/.NET MySQL plugin. I need it, and it won’t work on PowerShell for Linux (to my knowledge, or Nano, see below).

Here is the Dockerfile from my first pass. Pretty basic, but it gets the job done. Here is the link to pull the ‘Invoke-Automation’ image as well!

# Invoke-Automation Windows Dockerfile

# Indicates that the windowsservercore image will be used as the base image.
FROM microsoft/windowsservercore

# Metadata indicating an image maintainer.
MAINTAINER @jpsider

# Copy install files to the container
COPY mysql-connector.msi c:/temp/
COPY powercli.exe c:/temp/

# Install PowerCLI
RUN powershell Start-Process c:\temp\powercli.exe -ArgumentList '/s /qn /w /V"/qr"' -Wait

# Move PowerCLI Modules to correct Directory
RUN powershell Move-Item -Path 'C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Modules\*' -Destination 'C:\Program Files\WindowsPowerShell\Modules'

# Install MySql connector
RUN powershell Start-Process c:\temp\mysql-connector.msi -ArgumentList '/quiet /passive' -Wait

# Copy PowerWamp Module to container
ADD https://raw.github.com/jpsider/PowerWamp/master/powerWamp.psm1 c:/temp/

# Copy PowerLumber Module to container
ADD https://raw.github.com/jpsider/PowerLumber/master/PowerLumber.psm1 c:/temp/

# Add powershell script
COPY verifyInstall.ps1 c:/temp/

# Validate Imported Modules
RUN powershell -executionpolicy bypass c:\temp\verifyInstall.ps1

# Sets a command or process that will run each time a container is run from the new image.
CMD [ "powershell" ]
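With the Dockerfile and the copied install files sitting in one directory, building and tagging the image is the standard docker build command (the tag here simply mirrors the image name used elsewhere in this post):

```
docker build -t jpsider/invoke-automation .
```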

Once you have Docker installed, you can pull my image:

docker pull jpsider/invoke-automation
# then run the Container
docker run -it jpsider/invoke-automation
# run this file to import the modules into the console
c:\temp\verifyInstall.ps1

This will import the modules mentioned above into the current console and print a list of active modules into a log file in the c:\temp directory.

I also created a Nano Dockerfile; however, like PowerShell for Linux, it’s missing some assemblies, etc., for things to fully work. So mileage will vary! Here is a useful link for tagging, pushing, and pulling Docker images, and the command line reference.

As I continue to pull all of these technologies together I will have more posts. I really enjoyed the discovery project with Docker; lots of fun and a really cool technology.

I’m interested in hearing whether you could use this ServerCore Image, or any other feedback you might have.


PowerLumber

I’ve started a PowerShell module to standardize my logging across all of my projects. I kept using the same code over and over, so this will be a great time saver when I start a new project or script. Right now I’ve kept it pretty basic, but I’d like to add more advanced functions around rolling logs by day/hour/user-specified time. If you have any other requests, drop me a message or add an issue.

The Write-Log function gives you the ability to write to just the console, just a log file, or both, which can be extremely useful, even when developing/testing a script in the console. Being able to quickly specify whether something gets spit out to the console by adding a parameter has been much easier than switching my code from Write-Host to Write-Output all the time.

Here’s the project on GitHub : PowerLumber

Here is a basic example:


$logfile = "c:\temp\newlog.log"

Write-Log "Hello World" -Logfile $logfile

And more of a full example:

Here is a quick script to capture some VM information and log it (ideally in the future I will have a post using PowerWamp, where I insert the information into a database).


$MYINV = $MyInvocation

$SCRIPTDIR = split-path $MYINV.MyCommand.Path

#Import PowerLumber
$webclient = New-Object System.Net.WebClient
$filepath = "C:\temp\PowerLumber.psm1"
$url = "https://raw.github.com/jpsider/PowerLumber/master/PowerLumber.psm1"
$webclient.DownloadFile($url,$filepath)
Import-module $filepath

#Set script log file
$logfile = "c:\temp\vmlist.log"
$vCenter = "XXX.XXX.XXX.XXX"

write-Log "Connecting to vCenter" -Logfile $logfile
Connect-VIServer -Server $vCenter

write-Log "Grabbing list of vm's" -Logfile $logfile
$vm = get-vm | Where-Object {$_.Name -eq "Nagios"}

$powerstate = $vm.PowerState
$datastoreURL = $vm.ExtensionData.Config.DatastoreUrl.name
$VMversion = $vm.ExtensionData.Config.Version
$moref = $vm.ExtensionData.MoRef

write-Log "------VM Info------" -Logfile $logfile
write-Log "Information about vm: $vm" -Logfile $logfile
write-Log "Powerstate: $powerstate" -Logfile $logfile
write-Log "Datastore :$datastoreURL" -Logfile $logfile
write-Log "HwdVersion: $VMversion" -Logfile $logfile
write-Log "MoRef: $moref" -Logfile $logfile
write-Log "------VM Info------" -Logfile $logfile

write-Log "Disconnecting from vCenter" -Logfile $logfile
Disconnect-VIServer -Server $vCenter -Confirm:$false

Here is the output located in the specified log file:


2017-03-03 10:49:21 Connecting to vCenter
2017-03-03 10:49:34 Grabbing list of vm's
2017-03-03 10:49:34 ------VM Info------
2017-03-03 10:49:34 Information about vm: Nagios
2017-03-03 10:49:34 Powerstate: PoweredOff
2017-03-03 10:49:34 Datastore :synology
2017-03-03 10:49:34 HwdVersion: vmx-10
2017-03-03 10:49:34 MoRef: VirtualMachine-vm-39
2017-03-03 10:49:34 ------VM Info------
2017-03-03 10:49:34 Disconnecting from vCenter

And a quick screenshot to verify it’s also logging to the console:

consolepl


HomeLab Lessons Learned

My first post about the homelab was all glitter and good news; read about it here. This one will touch on a couple of things that happened during the development of customized scripts for my homelab.

First, I made the ultimate mistake: multiple changes at once. When I first bought all my homelab components I quickly put them together and ran the script from #TeamAlam. It worked great, and I was able to do my release testing for SummitRTS. However, I knew that I wanted to do more with the deployment scripts. Enough already, what’s the problem?

Problem #1

Inconsistent Services

Historically in my house I have always run my own network and WiFi independent of my service provider’s, so I don’t have to keep changing network settings, passwords, etc. on all of the connected devices if I switch companies. Over the past year I decided to reduce the number of WiFi devices and run Cat5/Cat6 through the house. Which is great; however, it can be a pain when you have two networks. So after some time listening to my wife complain that we could not print from our WiFi devices (because the printer was on a separate network), I decided to flatten all the networks and remove my personal gear.

I know what you’re thinking: why not just move the printer to the same network, or create a route between the two networks? That would have been too easy! Actually, based on the location of the printer it would have been a pain in the butt. So I did it; I flattened my network. All my devices, including my home lab, were on the same network, the dreaded service provider’s network. I logged into my Comcast Xfinity router and edited DHCP to give me a range where I could set up my own DHCP or have some static address space. So far so good. The wife can print, and all of my physical and WiFi devices are reconnected. Total time: 30 minutes. Hooray!

Notice I said physical and WiFi devices; I have not mentioned my virtual devices. Obviously my vCenter would no longer connect. I could have done the easy thing and just changed the IP address on the VCSA. However, I knew that I wanted to work on the deployment scripts, and off I went… I changed a few things, opened up PowerCLI, started the deployment script, and walked away… But to my surprise I came back to an error. I forget what it was, and I didn’t spend much time thinking about it. I ran the destroy script again, rebooted the ESX host, and started the deployment script. Same error… So I googled around a bit and quickly determined that the VCSA couldn’t start because of a DNS-related issue. WHAT?!?! The script uses an IP, not a DNS name; I triple-checked that! Honestly, the script hadn’t changed much from when it worked a month ago.

Resolution #1

Bottom line, I learned two things. First, Comcast Xfinity does not natively support forward and reverse DNS; you cannot even add DNS records to their home routers as of this writing (unless someone can enlighten me). Second, the VCSA requires forward and reverse DNS to be functioning on the network; otherwise the VM will deploy, but the services won’t start. I could probably fake out the VM with a hosts file entry, but I really don’t feel like doing that every deployment; my goal is to deploy as quickly and easily as possible with no human interaction. I’ve redeployed it 4 times tonight… So my solution for now is to isolate my equipment off the Comcast network using an older NetGear R600 router I have, which fully supports forward and reverse DNS.

Problem #2

All sessions are not the same

I’m naive. I usually have two console windows open at all times: one for manually testing one-liners for small scripts, and a second for running fully automated scripts. With all of the things I was focused on, my sessions were not one of them. This error popped up in my automated-script console:

PowercliError.png

So, being that naive person I am, I thought I blew up my ESXi NUC. No problem; I grabbed a second USB stick, loaded ESXi on it, and was quickly back at it. Ran the deployment script… and got the same error. HUH? Googling the error led me to believe that the SDK service had not started on the ESX host or was broken. The IPs were good, and the web services had no problems. At the time I was also testing my one-liners for adding NFS storage in my manual console, and those worked fine. I should note that upon manual inspection, the automated script did work, even though it was still throwing the SDK errors. Very odd. I didn’t test disconnecting the VI sessions and reconnecting in the same console window, but I would imagine the PowerCLI session was toast.
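For context, an NFS one-liner of the sort I was testing looks roughly like this. New-Datastore is the real PowerCLI cmdlet, but the host address, datastore name, NFS server, and export path here are all placeholders:

```
# Placeholder values; adjust the host, NFS server, and export path for your lab.
New-Datastore -Nfs -VMHost "192.168.1.50" -Name "synology-nfs" -NfsHost "192.168.1.60" -Path "/volume1/nfs"
```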

Resolution #2

Refresh your Console windows if you start to get strange errors! Oh, and run as Administrator!!!

Problem #3

Dead NFS connection

Whenever I get started on a project, I turn my power strips on, then power on all my devices. It would take ESXi 10-15 minutes to boot up. It would hang at the nfs4client; it would never fail, just take its sweet old time. It was never really a problem; I always took the time to open up my programs, check Twitter, Facebook, etc. I can always find a way to avoid work!

Resolution #3

Starting my Synology and waiting for it to fully power on resolves this problem (which isn’t a long wait). Once the Synology “beeps” I can power on ESXi and it quickly boots to the login screen. The time difference is amazing!

Summary

Keep at it! Sometimes it’s a simple fix. Oh, and my deployment times keep getting faster! Also, keep your changes to a minimum when testing new things. Save yourself the frustration.

powercliFast.png

Start-Homelab

The beloved home lab project; it was a pipe dream for years. How could I justify buying new or used equipment just to play or tinker with? Finally, and I’m still not sure how, I convinced my wife to invest in my desire to have some extra toys at the house. I think the biggest question I had to answer was: why? Why do I need a home lab? To me the answer was simple: I can be more effective in my online communities and at work if I have a reasonable lab to tinker with at home. Spending 30 minutes or an hour each day proves to be extremely helpful in developing my skills.

At VMworld this year I participated in the #hackathon; I think that is where my interest really spiked in learning more about the Intel NUCs. I remember them being a solid platform for a small, robust lab.

So thanks to a shopping list from William Lam and some licensing from the vExpert program, I marched forward!

In addition to the shopping list linked above, I went ahead and purchased a Synology DS216+II. I wanted a storage device that could sit on my network and serve CIFS and NFS. You can opt for a more feature-rich option, but this one suits my needs. I added two 3TB Western Digital hard drives.

IMG_3674.JPG

William Lam is correct when he says that everything goes together smoothly; I ran into ZERO problems with the hardware configuration of my new Intel NUC components, which is part of the reason I love this setup so much. Not to mention it’s more powerful and easier to use than my company-provided lab. When all is said and done, here is what my “rig” looks like now:

IMG_3679.JPG

And, no the wife does not approve of my TV on my desk. 😛

IMG_3779.JPG

Strangely, I have never deployed ESX via a USB stick, so I relied on the community for that. These directions worked just fine for me: Install ESXi 6 to USB via VMware Workstation.

Now it was time for the rest of the deployment. My requirements were fairly simple; I wanted a lab with:

  • ESX
  • vCenter
  • VSAN
  • Several template VMs (Windows and Linux)
  • NFS storage
  • Licensed components
  • One-liner deployment
  • One-liner teardown/destruction

Team “Alam” strikes again! PowerCLI guru Alan Renouf and William Lam seemingly have a solution for anything automation-related when it comes to VMware. I used the links above to download scripts that can be used to deploy ESX, vCenter, and VSAN to the Intel NUC platform.

The only pieces I really needed to add were my licenses, NFS storage, and template VMs. Additionally, I updated the destroy script to remove my VMs and NFS storage. The scripts I use to deploy my new HomeLab test environment are located here on GitHub: HomeLab Scripts. I focused on making the script more modular to meet my needs, and on using common components between the deployment and destruction stages.

I highly recommend this setup for home labs. It is quick and easy to deploy, tear down, and re-deploy. My current setup takes roughly 21 minutes to deploy.

homelabTime.png

What’s next? Well, now I have a solid platform to continue working on my company’s open source projects and some of my own.

Thank you to Alan and William for your continuous contributions to the VMware community!

Get-ConsoleURL

VMware’s PowerCLI is probably the most-used tool on any of my computers outside of a web browser. They continue to improve upon the product that basically saved PowerShell’s life (my personal opinion). With that said, I am constantly building little functions, modules, and scripts to perform simple tasks.

I’ll be leaving as much of my work on my personal GitHub workspace. Feel free to check it out.

So, Get-ConsoleURL, what is it? As you would imagine, it’s a simple function that returns the URL of a VM’s web console. Why is this important? Well, wouldn’t it be nice to quickly share this with a co-worker? Or save this URL in a custom app/webpage you have? The Open-VMConsoleWindow function is very useful on a local machine, but it requires PowerCLI and opens a local copy of the VMRC. The function I have created gives you a shareable link to the vCenter that references the VM via its MoRef.

Check out my function here (Get-ConsoleURL).

Example usage:

Get-ConsoleURL -vmName $vmName -vCenterUN $username -vCenterPW $password

Differences in return:

Open-VMConsoleWindow -UrlOnly -vm $vm

file:///C:\Program%20Files%20(x86)\VMware\Infrastructure\vSphere%20PowerCLI\VMConsoleWindow\VMConsoleWindow.html?host=<vcenter>&vmid=vm-32&vmName=<vm>&ticket=cst-VCT-52c82763-13b8-cc54-2cbf-ee726cfbd5b8--tp-B9-A6-19-C3-54-5F-34-4F-FF-06-99-29-74-0A-CD-70-AA-82-74-96&tunnelConnection=0

Get-ConsoleURL -vmname $vm

https://<vcenter>:9443/vsphere-client/webconsole.html?vmId=vm-32&vmName=<vm>&serverGuid=&host=<host>&sessionTicket=cst-VCT-529f6e91-8f95-6f1d-11c6-7765ee2202f4--tp-B9-A6-19-C3-54-5F-34-4F-FF-06-99-29-74-0A-CD-70-AA-82-74-96&thumbprint=5A:AB:D4:75:29:E8:D5:94:09:8F:D2:91:CF:DC:AB:C0:69:03:37:42
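For a rough idea of how such a link can be assembled, here is a hypothetical simplification; the real Get-ConsoleURL also obtains the session ticket and host thumbprint seen in the URL above, which this sketch omits:

```
# Hypothetical sketch; omits the sessionTicket and thumbprint the real function adds.
$vm = Get-VM -Name $vmName
$moref = $vm.ExtensionData.MoRef.Value          # e.g. "vm-32"
$url = "https://$vCenterName`:9443/vsphere-client/webconsole.html" +
       "?vmId=$moref&vmName=$($vm.Name)&host=$vCenterName"
```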


I have a few issues open to add some functionality as well as improve the error handling.