Implementing Staging and Production “Slot Swaps” with Azure Container Apps

A complete example for implementing this slot swap behavior is available at https://github.com/nshenoy/azure-containerapp-slots-example. Please check it out and feel free to open an issue/PR for feedback.

As a long-time user of Azure App Services for app deployments, I’ve gotten accustomed to using staging and production slots as a good practice. Slots provide an opportunity to test new functionality in the staging slot and then perform a “zero downtime” swap into live production. As I started playing around with the relatively new Azure Container Apps offering, I wanted to see if we could implement a similar zero-downtime deployment mechanism that gives the same opportunity to validate before going live. I did come across Dennis Zielke‘s excellent alternative blue/green implementation for Container Apps. However, I wanted to see if there was a different, more “supported” way to achieve this.

Though deployment slots are not explicitly implemented in Container Apps, there is the notion of a “revision”, defined as “an immutable snapshot of a container app version.” Assuming ingress is enabled, revisions allow for ingress traffic rules that split traffic between separate revisions. The particularly interesting bit is that revisions can be given labels. Each individual revision is created with an Azure-generated unique string, and thus has its own URL to hit. Revision labels, however, give a deterministic URL based on the label name rather than the revision name. In other words, something labeled as “staging” can always be hit with a URL similar to containerappname---staging.blahblah.azurecontainerapps.io. What’s more, the Azure CLI “az containerapp revision label” command allows revision labels to be swapped. Armed with the ability to create revisions, assign revision labels, and swap revision labels, we can implement something very close to what Azure App Service provides. We just need a little Powershell and Bicep magic to do the work.
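
To make that concrete, here is the single CLI call (used in Step 4 below) that performs the swap; $resourceGroupName and $containerAppName stand in for your own values:

# Swap whichever revisions currently hold the "staging" and "production" labels
&az containerapp revision label swap -g $resourceGroupName -n $containerAppName --source staging --target production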

Step 1: Determine the Current “production” Revision Label (if any)

The first main step is to run the Get-ContainerAppProductionRevision.ps1 script to determine if a revision with a production label exists.

Write-Host "Finding production revision..."
$productionRevision = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label == `production`].revisionName' -o tsv)

# Default to "none" so downstream steps always have a value to branch on.
if([System.String]::IsNullOrEmpty($productionRevision)) {
    $productionRevision = "none"
}

return $productionRevision

The script uses az containerapp ingress show to determine whether a revision with a “production” label is in place. It returns either that revision’s name or the value ‘none’ if the label doesn’t exist, and the output becomes a new environment variable called containerAppProductionRevision for the steps that follow.
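
How that output turns into a variable depends on your CI system. As a minimal sketch, assuming a GitHub Actions PowerShell step (check the example repo’s workflow for the real wiring):

# Run the script and capture its result (script path taken from the example repo)
$revision = ./deployment/scripts/Get-ContainerAppProductionRevision.ps1 -resourceGroupName $resourceGroupName -containerAppName $containerAppName

# GitHub Actions convention: appending to $env:GITHUB_ENV publishes the variable to later steps
"containerAppProductionRevision=$revision" | Out-File -FilePath $env:GITHUB_ENV -Append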

Step 2: Bicep Template Trickery

The Bicep template is then deployed. And here we have to do some trickery. The first trick is the containerapp_revision_uniqueid parameter:

...
param containerapp_revision_uniqueid string = newGuid()
...
          env: [
            ...
            {
              name: 'containerapp_revision_uniqueid'
              value: containerapp_revision_uniqueid
            }

In order to force a revision-scope change, we set this containerapp_revision_uniqueid param’s default value to a new GUID with each Bicep deployment. Since a container’s environment variables are revision-scope properties, changing this value guarantees that every deployment produces a new revision.
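
For reference, here’s a hedged sketch of the deployment itself (the template path is an assumption; containerapp_revision_uniqueid is left to its newGuid() default):

# Deploy the template, feeding in the revision name discovered in Step 1
&az deployment group create --resource-group $resourceGroupName --template-file ./deployment/main.bicep --parameters containerAppProductionRevision=$containerAppProductionRevision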

The next bit of trickery is setting the ingress properties of the Container App:

      ingress: containerAppProductionRevision != 'none' ? {
        external: useExternalIngress
        targetPort: containerPort
        transport: 'auto'
        traffic: [
          {
            latestRevision: true
            label: 'staging'
            weight: 0
          }
          {
            revisionName: containerAppProductionRevision
            label: 'production'
            weight: 100
          }
        ]
      } : {
        external: useExternalIngress
        targetPort: containerPort
        transport: 'auto'
      }

Here we use a ternary operator to switch behavior based on the containerAppProductionRevision parameter. If the previous Get-ContainerAppProductionRevision.ps1 step returned a revision name with a production label, then we have to set up the ingress traffic rules such that production keeps 100% of the traffic while the latest revision we’re deploying gets 0%. In other words, don’t mess with the current production slot. Otherwise, if no previous production slot was defined, there are no traffic rules to define (yet). This is the crux of getting this slot-like behavior to work.

Step 3: Apply the “staging” Label to the Latest Revision

Next we run the Set-ContainerAppStagingLabel.ps1 script to apply the staging label to the latest revision.

# https://github.com/nshenoy/azure-containerapp-slots-example/blob/main/deployment/scripts/Set-ContainerAppStagingLabel.ps1

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [string] $resourceGroupName,

    [Parameter(Mandatory=$true)]
    [string] $containerAppName
)

&az config set extension.use_dynamic_install=yes_without_prompt

# fetch latest revision
Write-Host "Finding latest revision..."
$latestRevision = (&az containerapp revision list -g $resourceGroupName -n $containerAppName --query "reverse(sort_by([].{name:name, date:properties.createdTime},&date))[0].name" -o tsv)

Write-Host "Latest revision: $latestRevision"

# Find revision with label of "staging" and remove revision.
Write-Host "Finding staging revision..."
$stagingRevision = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label == `staging`].revisionName' -o tsv)

Write-Host "Finding production revision..."
$productionRevision = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label == `production`].revisionName' -o tsv)


if([System.String]::IsNullOrEmpty($stagingRevision)) {
    Write-Host "No staging revision found."
} else {
    Write-Host "Staging revision: $stagingRevision"
    # Write-Host "Removing staging revision: $stagingRevision"
    # &az containerapp revision deactivate -g $resourceGroupName -n $containerAppName --revision $stagingRevision
    Write-Host "Removing staging label from revision: $stagingRevision"
    &az containerapp revision label remove -g $resourceGroupName -n $containerAppName --label staging
}

# Apply "staging" label to latest revision.
Write-Host "Applying staging label to latest revision..."
&az containerapp revision label add -g $resourceGroupName -n $containerAppName --label staging --revision "$latestRevision" --no-prompt --yes

# Write-Host "Setting traffic weights..."
if([System.String]::IsNullOrEmpty($productionRevision)) {
    &az containerapp ingress traffic set -g $resourceGroupName -n $containerAppName --revision-weight latest=100 --label-weight staging=0
} else {
    &az containerapp ingress traffic set -g $resourceGroupName -n $containerAppName --label-weight production=100 staging=0
}

At this point, the latest container image revision is staged, and we can test to make sure it behaves as needed. The revision FQDN can be retrieved from the Azure portal by going to your Container App -> Revision management and then clicking on your staging-labeled revision.

The “Label URL” will always be the Container App name with ---staging appended to the end.
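
You can also skip the portal and derive the staging URL from the app’s default FQDN, since the label URL just splices ---staging into the hostname. A quick sketch (the smoke test is illustrative):

# Build the staging label URL from the app's default FQDN
$fqdn = (&az containerapp show -g $resourceGroupName -n $containerAppName --query "properties.configuration.ingress.fqdn" -o tsv)
$stagingUrl = "https://" + ($fqdn -replace "^$containerAppName", "${containerAppName}---staging")

# Simple smoke test against the staged revision
Invoke-WebRequest $stagingUrl | Select-Object StatusCode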

Step 4: Swap “staging” and “production”

Finally, the production job runs Swap-ContainerAppRevisions.ps1 to swap the revision labels and verify that the production label has 100% of the traffic.

# https://github.com/nshenoy/azure-containerapp-slots-example/blob/main/deployment/scripts/Swap-ContainerAppRevisions.ps1

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [string] $resourceGroupName,

    [Parameter(Mandatory=$true)]
    [string] $containerAppName
)

&az config set extension.use_dynamic_install=yes_without_prompt

Write-Host "Finding staging revision..."
$stagingRevision = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label == `staging`].revisionName' -o tsv)

Write-Host "Staging revision: $stagingRevision"

Write-Host "Finding production revision..."
$productionRevision = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label == `production`].revisionName' -o tsv)

if([System.String]::IsNullOrEmpty($productionRevision)) {
    Write-Host "No production revision found."
    Write-Host "Applying production label to staging revision..."
    &az containerapp revision label add -g $resourceGroupName -n $containerAppName --label production --revision $stagingRevision
} else {
    Write-Host "Production revision: $productionRevision"
    Write-Host "Swapping staging and production revisions..."
    &az containerapp revision label swap -g $resourceGroupName -n $containerAppName --source staging --target production
}

# set traffic for production=100 and staging=0
Write-Host "Setting traffic for production=100 and staging=0..."
if([System.String]::IsNullOrEmpty($productionRevision)) {
    &az containerapp ingress traffic set -g $resourceGroupName -n $containerAppName --label-weight production=100
} else {
    &az containerapp ingress traffic set -g $resourceGroupName -n $containerAppName --label-weight production=100 staging=0
}

Write-Host "Swap complete!"

What’s Next

The big thing still missing is cleanup of old revisions. At some point in the scripts above (perhaps the final step?) we need to deactivate any revisions that aren’t labeled. Also, it kind of sucks to have these scripts live in the repo. It seems like they should be implemented as a set of build tasks that can be easily included in the workflow.
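
As a rough sketch of what that cleanup could look like, using only the CLI commands already seen above (the approach is an assumption, not something from the example repo):

# Deactivate any active revision that holds neither the staging nor the production label
$labeled = (&az containerapp ingress show -g $resourceGroupName -n $containerAppName --query 'traffic[?label].revisionName' -o tsv)
$active = (&az containerapp revision list -g $resourceGroupName -n $containerAppName --query '[?properties.active].name' -o tsv)

foreach ($revision in $active) {
    if ($labeled -notcontains $revision) {
        Write-Host "Deactivating unlabeled revision: $revision"
        &az containerapp revision deactivate -g $resourceGroupName -n $containerAppName --revision $revision
    }
}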


Powershell 101: Using Custom Powershell Objects for JSON Requests

For our blue/green deployments in Octopus Deploy, we created some custom Powershell cmdlets to talk to our A10 Thunder load balancer. The cool thing about our cmdlets is that they use the A10’s nicely documented REST APIs to manipulate all sorts of things in the device. And this is a good thing, because, frankly, the A10 web-based dashboard UI really sucks. Using the API has proven to be a lot faster, with the added benefit of enabling us to create a custom Hubot script (but that’s a blog post for a different day).

I had a situation today where I needed to modify one of these Powershell cmdlets that sends a JSON request to update a virtual service. The request was previously just built as a string since it was so simple (it just needed to update a single “service-group” property). But today I had to optionally update an aFlex rule associated with a virtual service, and in this case string manipulation seemed really ugly.

Here’s what the code previously looked like:

$body = "{ ""ports"" : { ""service-group"" : ""$ServiceGroup"" } }"

This is simple enough, and just shoving it into the request as a string was sufficient. But, now, to make this a bit more “future proof,” let’s turn this into a custom Powershell object instead:


$body = @{ "ports" = @{ "service-group" = "$ServiceGroup" }}

Cool, now we have “service-group” as a property. And converting this into a JSON string to send in our request is easy:


PS D:\> ConvertTo-Json $body

{
    "ports": {
        "service-group": "staging-service-group"
    }
}

Now, for my new requirement, I need to modify the object’s ports.aflex-scripts property. According to the A10 AXAPIv3 documentation, the property is a JSON array of “aflex” objects. So let’s do this:

$body.ports += @{ "aflex-scripts" = @( @{ "aflex" = "$AflexRule" } ) }

This creates a new property called “aflex-scripts” as an array. The array has a single element: a hashtable with a key of “aflex” and a value of the $AflexRule variable that was passed into the function. So now when we convert to JSON, we have the desired shape:

PS D:\> ConvertTo-Json $body

{
    "ports": {
        "aflex-scripts": [
            "System.Collections.Hashtable"
        ],
        "service-group": "staging-service-group"
    }
}

Wait a sec, the JSON is showing “System.Collections.Hashtable.” That’s not right. We need to use the “-Depth” parameter to tell the conversion how many levels of our object to serialize. So let’s fix that:

PS D:\> ConvertTo-Json $body -Depth 3

{
    "ports": {
        "aflex-scripts": [
            {
                "aflex": "custom-redirect-rule"
            }
        ],
        "service-group": "staging-service-group"
    }
}

Ok, that looks much better, and it’s exactly what we want.
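
For completeness, the same object could also be built in a single expression rather than mutated after the fact (purely illustrative; it produces the same output as above):

# Build the full request body in one go, then serialize with enough depth
$body = @{
    "ports" = @{
        "service-group" = "$ServiceGroup"
        "aflex-scripts" = @( @{ "aflex" = "$AflexRule" } )
    }
}
ConvertTo-Json $body -Depth 3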


Previewing Octopus web.config Transforms Via Offline Package Drops

We ❤ Octopus Deploy at Mimeo. Lately, we’ve been doing a LOT of deployments with Octopus as teams migrate their apps off of our old, fragile, homebrew deployment system. The migration has allowed teams to clean up years of questionable deployment steps, stupid app pool names, and weird configuration transforms. Part of what makes Octopus awesome is its system of scoped variable substitution by composition: variables can contain other variables, and the engine takes care of the maths, resolving the proper values for the deployment scope (e.g. values for QA versus Production). And this is exactly where some of our teams have struggled.

The initial deployment of an application with Octopus has been painful because roughly 90% of the time the final transformed web.config has errors. Sometimes it’s just carelessness from the team, where the transform itself doesn’t work (they didn’t verify with something like SlowCheetah). Sometimes people did a blind copy/pasta of values and didn’t take the time to visually verify that the values are right or properly scoped. But most of the time, errors are due to folks getting buried under the indirect composition of variables and not knowing the right way to get the final values.

Wouldn’t it be nice if Octopus provided a tool to answer “Can I preview what my final transformed Production web.config will look like?” without actually deploying anything to production? It seems like it should be possible, given that their engine already does this work for us. Sadly, they do not. The best Octopus provides is the notion of Offline Package Drop targets, which will dump a JSON file with the final calculated variable values. This gets us part of the way there. So I wrote my own tool to get us a little further.

Here are the steps that we followed:

1. Create an Octopus Offline Package Target in your target environment(s)

– Follow the steps in http://docs.octopusdeploy.com/display/OD/Offline+Package+Drop. This will basically just treat a UNC share as the target. Make sure that your devs can access this UNC share.

2. Add the target to your environment

– Add this target to whatever environments that your team would like to use for previewing transforms.

3. Deploy the project to this target

– When on the Deploy release screen, make sure you first hit the Advanced link, then hit the “Deploy to a specific subset of deployment targets” link. Then select your offline drop target.

Once you deploy, you can navigate to your share and drill down into the Variables directory. It will contain JSON files with key/value pairs of all variables and their values for that environment. Identify which JSON file maps to the deployment process step that is responsible for your web.config transformation and variable substitution.
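
A quick way to eyeball those captured values from Powershell (the file name here matches the example in step 7 below):

# Pretty-print the captured Octopus variable values for a sanity check
(Get-Content "d:\tmp\OctopusDeployment.variables.json" | ConvertFrom-Json) | Format-List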

4. Download this Powershell script locally to a folder (e.g. d:\scripts\):

# Applies an XDT transform (e.g. web.prod.config) to a source XML file.
function transform($xml, $xdt, $output)
{
    Add-Type -LiteralPath "Microsoft.Web.XmlTransform.dll"
    $xmldoc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument;
    $xmldoc.PreserveWhitespace = $true
    $xmldoc.Load($xml);
    $transform = New-Object Microsoft.Web.XmlTransform.XmlTransformation($xdt);
    if ($transform.Apply($xmldoc) -eq $false)
    {
        throw "Transformation failed."
    }
    $xmldoc.Save($output)
}

# Replaces any #{Variable} tokens in a line with values from the Octopus JSON,
# recursing so that variables which reference other variables also resolve.
function substitute($line, $octopusValues)
{
    $regex = [regex] "(#\{\b[a-zA-Z0-9-_.]+\})"
    $groups = $regex.Matches($line)
    if ($groups.Count -eq 0)
    {
        return $line
    }
    foreach($group in $groups)
    {
        $octVariable = $group.Value.Trim("#{").Trim("}")
        write-host "[DEBUG] group.Value:" $octVariable
        Try
        {
            $token = $octopusValues | select -ExpandProperty "$octVariable" -ErrorAction Stop
            $token = substitute $token $octopusValues
            $line = $line.Replace($group.Value, $token)
        }
        Catch
        {
            Write-Host "[WARNING] Could not find value of $octVariable"
            Break
        }
    }
    return $line
}

# Transforms the config with the XDT, then substitutes Octopus variable values.
function Transform-OctopusConfig($json, $xml, $xdt, $output)
{
    if (!$json -or !(Test-Path -path $json -PathType Leaf)) {
        throw "File not found. $json";
    }
    if (!$xml -or !(Test-Path -path $xml -PathType Leaf)) {
        throw "File not found. $xml";
    }
    if (!$xdt -or !(Test-Path -path $xdt -PathType Leaf)) {
        throw "File not found. $xdt";
    }
    $outputTemp = $output + ".tmp"
    transform $xml $xdt $outputTemp
    $octopusValues = (Get-Content $json | ConvertFrom-Json)
    $lines = Get-Content $outputTemp
    foreach($line in $lines)
    {
        $line = substitute $line $octopusValues
        $line | Out-File -FilePath $output -Append
    }
}

This script (a work in progress) takes your JSON variable file, your web.config, and your web.foo.config transform file; it performs the web.config transform, substitutes the variables from the JSON, and spits the result out to whatever output path you specify.

5. Copy Microsoft.Web.XmlTransform.dll to the same folder as your script from step 4 above.
This assembly can be found in your Visual Studio folder. On my machine I found it under C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions

6. “Dot source” the script.

In a Powershell console, simply type . .\TransformOctopusConfig.ps1 to make the functions available in your console.

7. Run the script

> Transform-OctopusConfig "d:\tmp\OctopusDeployment.variables.json" d:\tmp\web.config d:\tmp\web.prod.config d:\tmp\web.config.transformed

The result of this script execution will be a web.config.transformed file: your final production web.config with the variable values substituted into it.

Known Issues:

  • If you’re using any passwords or values that are marked as “sensitive”, those values are not transformed (i.e. the values from the .secret file are not read).
  • I’ve only written this with web.config files in mind. If you want to do some other arbitrary file transform and substitution, this won’t help you.

Fixing “Stuck” Downloads in the Comixology Windows App

The Comixology Windows app is an unfortunate disaster. What started off as a nice offline reader for the Comixology library has suffered from a lack of any new features (like simple organization of owned content) and, worse, serious regressions in functionality. The worst, and most popular, “issue” that damn near everyone is facing is the sudden inability to read downloaded content. For whatever reason, this issue has plagued me all week while trying to read Daredevil: Born Again. Sometimes, trying to re-download the content causes the app to get stuck and show no progress (and sometimes crash). Most people solve the issue by uninstalling the app, reinstalling, and then re-downloading their entire library.

That sucks. But there’s a slightly less sucky way.

    1. Determine which of your comics are in this ‘stuck’ state.

    The most common symptom is that clicking Read causes a black screen and a progress notification that never stops. Another symptom is that your list of actively downloading content shows no progress. Today, the symptom is that my comic no longer appears as on-device (despite the fact that I was reading it just yesterday).

    2. Close the Comixology App

    Either click on the X in the upper right corner, or if on a touch device you can drag the window to the bottom of the screen. Wait for it to spin before dropping (this will actually shut it down as opposed to suspending the app).

    3. Identify the location for where the comics files are

    For some reason, the root of the issue is that some metadata for the comic(s) is corrupted. I haven’t figured out exactly what, but clearing the files for the comic appears to work. So open File Explorer and navigate to the %LOCALAPPDATA%\Packages folder. This is the folder that contains the data for all of your Windows Store apps. Navigate to the folder that starts with “comiXology.Comics”.

    Then go into “LocalState”, and then the folder with your account name. This is the folder that contains the files for your comics.

    You’ll notice that the folders are numbered with an ID, which makes it tricky to identify your comic. But you’ll also notice that for each ID folder, there is an associated ID.comx file. Navigating into each ID folder reveals a bunch of JPGs, which are the pages of the comic. One of the JPGs will be the cover art. Make sure you enable thumbnails on that folder so that you can identify the cover image and locate your bad comic. Note the ID of the folder that you are in.

    4. Delete the files for the bad comic

    At this point, simply delete the folder and .comx file associated with the ID you determined in step 3 above.
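
    If you’d rather do the deed from Powershell, here’s a minimal sketch (the $comicsPath and $id values are hypothetical; fill them in from step 3):

# Hypothetical values -- fill these in from step 3
$comicsPath = "$env:LOCALAPPDATA\Packages\<comiXology folder>\LocalState\<account>"
$id = "<folder ID noted in step 3>"

# Remove the stuck comic's page folder and its .comx metadata file
Remove-Item (Join-Path $comicsPath $id) -Recurse
Remove-Item (Join-Path $comicsPath "$id.comx")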

    5. Start Comixology again

    When you restart Comixology, you should be able to successfully re-download the comic and enjoy.

BUT a few notes

  1. Once you start a download, leave the Comixology app running and active until the download is complete. I’ve noticed that I’ve ended up in this ‘stuck’ state if I switched apps or if the device went to sleep.
  2. Download one comic at a time. I also noticed that trying to download more than 1 or 2 comics at a time causes bad mojo. It’s very lame, but play it safe and just download one at a time.

It’s terrible that those of us on Windows with a large Comixology library are stuck with this awful experience if we want to read offline. And since not all of the comics in my collection have backup PDFs or CBRs available for download, I’m completely stuck. I really hope Comixology gets their shit together and invests some time in the Windows 10 timeframe to build a more stable universal app. Or even makes the app open source so that others with time could build something that actually works.


I Didn’t Mean to Break It, Really. :(

Some people feel that the tester’s job is to break software, whereas the developer’s job for performing testing activities is to make sure software works. This dichotomy, or disconnect, can sometimes be fun. Like yesterday…

At Mimeo.com, our ability to commit to fast printing and delivery times is driven by a huge backend infrastructure of services and LOB apps to help the production people get material printed and shipped with efficiency. One of our LOB apps is a simple web app that allows an order to be reprinted (sometimes, a few copies of an order can get messed up and we need to reprint some quantity to make it right). The UI for the tool is fairly simple. You enter an order ID in the textbox, give a reason for why it needs to be reprinted, and then the quantity for how many reprints you want. The quantity must be greater than 0 and less than or equal to the quantity in the original order.

[screenshot: the reprint tool’s UI]

Although the implementation details don’t really matter, I will say that the UI is an ASP.NET MVC app that’s just a façade, where the underlying controller makes calls out to other services. For this sprint, we had some work to do in one of the services responsible for actually doing the reprint logic. The UI did not change at all in this sprint, only the dependent WCF service logic changed.

The UI looks very familiar to any tester. It’s basically an interview question. Testing 101. You have a textbox and a button, how do you test this? How would you, dear reader, test this?

We’re at the end of the sprint, so our feature team decided to do a big group-hug pair-coding/pair-testing session where we could all share test ideas and find/fix bugs quickly. And we started off with this reprint app. I happened to be driving the session at this point. The project lead asked, “Ok, what quantity should we try?” As the words exited his mouth, I just happened to start off with “0” in the quantity box. As soon as I hit the Reprint button, the project lead said, “Wut did you just do?! Did you put in 0?? Don’t do that!” So, of course, the tool accepted 0, made the service call, and ended up reprinting the entire order. Bug.

We then set up the order to reprint again. The project lead was driving the session this time and again asked, “What quantity should we try this time?” I answered, “The quantity for this order is 200, right? Let’s try 201.” He typed it in, and again the quantity was accepted and passed down to the service, which printed 201 copies. Bug.

At this point, the team was dying laughing. Simple UI validation checks weren’t performed, and this was a tool that has been in use for several months. One of the devs said something to the effect of how this is why we have testers who can break the software.

But is that what I was trying to do? Was I purposely trying to give inputs to break the app? My first instinct when seeing a textbox is to explore boundaries. So something that takes in a quantity (presumably an int) should be explored with at least 0, maxQuantity+1, -1, and characters. All of these should first pop some sort of validation in the UI so that we catch bad input prior to any underlying service calls. Once I verify that basic validation is happening at the UI level, then we can test the meat with real values. That’s just how I think. I wasn’t trying to break anything. I was trying to explore the behavior of the app by entering bad input in order to learn what sort of instructional message we return to the user, and to learn how the application itself reacts to bad input.

I don’t feel that testing activities are meant to “break” the software. I also don’t feel that the activities should solely be meant to verify that things work. Exploration is key in understanding how the software works (or should work), and to identify the potential gaps in expectations.


Herding Unikitties

Coaching kids is hard. I mean, really hard. Last week, I had the awesome opportunity to be a coach for the first time. I’m coaching my son’s Junior FIRST LEGO League team. I’ve never coached kids for anything before. Sure, I’ve done a few show and tells in my kid’s daycare, and read a book to his Kindergarten class. But I’ve never sat with a bunch of kids for an extended amount of time with the expectations that they had to listen to me.

Before the first practice, I envisioned this:

[image: what I envisioned]

But in reality, this happened:

[image: MythBusters herding cats]

And sometimes:

[image: a room full of cats going crazy]

Holy hell, my repeated remarks of, “please take a seat,” and, “stop running,” were interpreted as “ZOMG-SHOWER-ME-WITH-MORE-LEGOS-ARRARRRHAHAHAHAHA-I-DON’T-KNOW-WHATS-HAPPENING-SUGAR-AND-PRETZELSTICK-LIGHTSABERS!”

The practice was pretty awesome. It was absolutely amazing to see a bunch of kids who didn’t know each other, almost instantly bond like some sort of human katamari (it literally looked like rolling a human katamari sometimes). The LEGO really interlocked these kids. And yet, every kid was completely different. Right away, we observed the clever thinkers, the shy ones, the rebels, and the storytellers.

I learned a lot in this first practice.
1. Adults are easier to coach than kids. Adults will mostly stop what they’re doing, pay attention to the speaker, and do as they are asked. Kids don’t. Full stop.

2. Use the session plan in the LEGO coach’s materials as a very general, idealistic guideline. There is absolutely no way in hell that 6-year-old kids will spend an hour and a half going thru all of that material. We did an initial brainstorm for team names, but couldn’t get their attention spans in check to actually decide on a name yet. Forget making a logo in that first session. And once the BuildToExpress kits were introduced, we only made it thru 3 of the challenges.

3. Keep things moving. This is where I failed. I came in with a bulleted agenda based off of the lesson plan in the coach’s guide. I tried to go through the program and discuss the six “core values” for the program (e.g. “We are a team,” “We share,” etc). But I didn’t effectively cater to the short attention spans of the 6 year olds. And once the BuildToExpress kits were introduced, it was game over – it was buildin’ time.

4. The kids want to build with LEGO, so let them. Before the kits were given out, they spent the whole time just looking at the big tote of materials in the class asking, “When can we use the LEGO??!?” instead of listening to what I was trying to say. Once the kits were introduced, it seemed like they weren’t listening to what I was saying, but they sort of were. Their heads were down and focused on building, but they would actually pick up on some of the words coming out of my mouth. The kids are there for the LEGO. So let them build. The stuff for the season challenge will come in time.

5. The most important thing is to make sure the kids have fun. I found it important to get all of the kids involved, creating, and sharing.

We spent quite a bit of time (probably too much time) brainstorming team names. The kids were all antsy (see the third picture above) and it was clear that they wanted to get their energy out. So we tabled the brainstorm and just had them open the BuildToExpress kits and start having at it. The decibel level in the room quickly dropped for the first time. We just let them play with the kits for a good 10 minutes or so without any guidelines or challenges. I finally gave them a 2-minute warning and told them that they would all have to present their build. I had each kid do the following:

1. What’s your name?
2. What grade are you in?
3. Who’s your teacher?
4. Tell us 1 awesome thing that happened today.
5. Tell us about your LEGO creation.

This not only got each kid to present to the team, but also served as a pretty good introduction. I’m pleasantly surprised that it even worked. Having each kid go through the intro actually took a lot longer than expected. A few of the kids got pretty… enthusiastic… about the story behind their model. Some of the stories were pretty elaborate. It’s hard to cut off a kid’s story, especially when they’re so excited and proud.

Next we went into the challenge cards. Sticking to the suggested “2-3 min” is nigh impossible. Every time I said that time was up, it was pitchforks and yells of “MOAR TIME PLZ!” It’s more like 5-6 minutes for the build. And again, the sharing part took way too long. I found it super important to make sure everyone was involved. If someone was too shy to share or was stuck, I tried to lead them along with questions to get them thinking and building. We got through 3 builds and decided it was time to try to nail down a team name from the initial list we had at the beginning of the practice.

Yeah, the team name didn’t happen. We were about an hour into the practice with about 20min still left, but the kids were done. My voice was background noise to their running and yelling. They were done with sitting, done with LEGO, done with humanity. I’m talking Lord of the Flies.

Ok, I may have exaggerated a tiny bit. In all seriousness, I’m really excited about this opportunity. It’s really cool to see the kids thinking and creating and working together. And this year’s challenge, Redefining Learning, looks incredibly fun. I’m super excited to see how this season goes.


Oh snap – I denied “Everyone” permissions from my MSMQ

Yeah, so in a brilliant move today, an MSMQ queue on my QA box was accidentally destroyed when I set Deny permissions for “Everyone”. Oops. Luckily, this StackOverflow response from user Houman saved my ass. Simply navigate to C:\Windows\System32\msmq\storage\lqs and delete the file for the queue. The names are cryptic, but lucky for me, that was the only queue I had touched, and I was able to locate it by sorting by timestamp.

Oops.
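
If you’re hunting for the right file, here’s a simple one-liner to surface the most recently modified queue definitions:

# Show the most recently touched queue definition files first
Get-ChildItem C:\Windows\System32\msmq\storage\lqs | Sort-Object LastWriteTime -Descending | Select-Object -First 5 Name, LastWriteTime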

[Blogging this so I can easily find this solution the next time I do this same boneheaded move.]


Inspiration in Everyday Things

I was lucky enough to attend CAST 2014 last week. CAST is a wonderful conference, bringing some of the great minds of the testing community together under one roof. Michael Larson has great summaries of the two-day conference on his blog. Out of all of the great keynotes, Ben Simo’s talk was probably the most entertaining from a purely geek perspective.

Ben’s keynote, “There Was Not a Breach; There Was a Blog,” shares his experiences visiting HealthCare.gov and the wonderful ways in which the site totally failed. Ben goes in depth into the functionality issues, usability problems, and potential security implications of how the site was implemented. The talk was truly inspirational. Ben captivated everyone in the room, going step by step through what he tried, what he found, and why it’s a problem. Exploratory testing at its finest. My key takeaway from this talk, and what really inspires me, is this: any one of the testers in that room could do this, too.

If you watch Ben’s talk, he starts off talking about the tools he used. Everything he used is pretty much on your machine already:

  • Chrome and IE? Check.
  • Developer tools? Check. They’re part of the browsers!
  • Fiddler? This was the only ‘special’ tool that had to be installed. And this is free. And if you’re a tester who’s testing a website or app with network connectivity, you should have this on your machine already. Check.

He didn’t have a spec or user stories. He didn’t have source code available. He had no special access to internal builds, log files, etc. He wrote no “automation” or code of any kind to find bugs. Everything he did was public facing using the tools that we all [should already] have. And that to me is inspirational. And it should be inspirational for you, too.


AppFabric v1.1 Installer Hates You

[Blogging this because we’ve hit this a few times at work in the past 3 months]

If all attempts to install AppFabric v1.1 fail with the ever-helpful 1603, look in your %TEMP%\AppServerSetup1_1_CustomActions(datestring).log file [where datestring is the last time you invoked the installer]. You may see a message like this:

6/23/2014 11:32:45 AM EXEPATH=C:\Windows\system32\\net.exe PARAMS=localgroup AS_Observers /comment:"Members of this group can observe AppFabric." /add LOGFILE=C:\Users\me\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-06-23 11-32-31).log
Error: System error 1379 has occurred.
Error: The specified local group already exists.
ExitCode=2

Answer: Visit this StackOverflow answer: http://stackoverflow.com/questions/16655899/appfabric-installation-error-code-1603. Basically, blow away the AS_Observers and AS_Administrators local groups, then try running the installer again.
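
In command form, the fix described in that answer looks like this (run from an elevated prompt):

# Remove the leftover local groups so the installer can recreate them
net localgroup AS_Observers /delete
net localgroup AS_Administrators /delete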

Then go grab some ice and Tylenol. The past hour of slamming your head on your desk probably left a bruise.


Better Workaround for AppFabric Cache’s Tracing Hatred

As a follow-up to my last post, here’s a simple extension method to forcibly restore the default trace listener every time you invoke an AppFabric Caching administration cmdlet. It’s not the prettiest code, but it’s functional:


using System.Collections.ObjectModel;
using System.Diagnostics;
using System.Management.Automation;
using System.Management.Automation.Runspaces;

public static class PowershellExtensions
{
    // Invoke the pipeline, then re-add the DefaultTraceListener if the
    // invoked cmdlets removed it from the global listener collection.
    public static Collection<PSObject> InvokeCommandAndRestoreDefaultTraceListener(this Pipeline pipeline)
    {
        bool hasDefaultTraceListener = PowershellExtensions.IsDefaultTraceListenerPresent();
        Collection<PSObject> results = pipeline.Invoke();
        if(hasDefaultTraceListener && !PowershellExtensions.IsDefaultTraceListenerPresent())
        {
            Trace.Listeners.Add(new DefaultTraceListener());
        }
        return results;
    }

    private static bool IsDefaultTraceListenerPresent()
    {
        foreach(TraceListener listener in Trace.Listeners)
        {
            if(listener is DefaultTraceListener)
            {
                return true;
            }
        }
        return false;
    }
}
