Turning Off a Nightlight with a 555 Timer

Several months ago, my sister gave my 4-year-old son this simple little dragonfly nightlight. It has an RGB LED running off of two coin cell batteries that just slowly cycles through all the colors. Simple and effective – my kid loves it. Two things we learned from having this light:

  1. I’ll be damned if my boy could remember to shut the damn light off in the morning.
  2. Coin cell batteries can get expensive (see #1).

Frustrated one day, I mentioned to my boy half-jokingly, “We should build something to make the light automatically turn off.” To which he replied, “Yes daddy, and I want it to shut off after 20min.”

Which brings me to lesson 3: if you mention an idea to my son, he expects it to come to fruition (see Halloween Robot Costume). Clearly he’s on the management track.

I wasn't quite sure how to proceed. My first reaction was to use a microcontroller. It would be pretty simple to write code on a Netduino to just turn a pin off after 20 minutes. But there were a few problems with that approach. First, the Netduino needs a 3V source and the night light requires 6V, so I would effectively need two power supplies and have the Netduino switch the night light's power off via some sort of transistor. Not to mention that I would be throwing away $30 on a microcontroller for something so small and trivial.

After Googling with Bing for a while, I came across several explanations of using a 555 timer as a monostable multivibrator. And who doesn't like vibrators? Am I right? A monostable multivibrator generates a single pulse of a fixed duration when a trigger is applied. In my case, I want this one-shot to last about 20 minutes to keep the night light on.

I began by designing this on my Windows Phone and Surface RT. Yes, I’m a former ‘Softfie and I drank the Kool-Aid hard. And yes, I’m still listening to music on my Zune, so suck it. I used the iCircuit app, which is available for both devices (and apparently the iPad, too, if you’re into that sort of thing). The app is great for playing with a design, with many simple components to choose from. I loved that the Windows Phone version had sample circuits available to play with. And of course one of the samples was, you guessed it, a monostable multivibrator using a 555. The one disappointing thing is that pin numbers aren’t given in iCircuit. I loved how you can put a scope at any spot in the design and see exactly what’s going on. Very informational and educational.

Good, I got a sample working in iCircuit. But now how do I get this to give a 20-minute pulse and then shut off? From the datasheet for a 555, we see that the formula is approximately t = 1.1 * R * C (t in seconds, R in ohms, C in farads). Well damn, that's easy enough. I know that I have a bunch of 100u capacitors. So using a 100u cap, let's see what resistor I would need to yield 20 minutes:

20min * 60 sec/min = 1.1 * R * 100u

1200 = 1.1 * R * 0.0001

1200 / 1.1 / 0.0001 = R

10.9M ohms = R

Yikes! That’s big. And I don’t seem to have any in my supply. But I did have 5 1M ohm resistors. Let’s see how close this comes:

t = 1.1 * 5M ohm * 0.0001F

t = 550 sec

Hmm, 9 minutes – close enough. I wired this up in iCircuit and it worked like a champ. I also added a simple SPST switch to act as the trigger. Here’s the result:
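As a quick aside, the timing math is easy to script if you want to play with other resistor/capacitor combinations. Here's a throwaway C# sanity check of the 1.1 * R * C numbers above (just the arithmetic, nothing to do with the actual circuit):

```csharp
using System;

class TimerMath
{
    static void Main()
    {
        // 555 monostable pulse width: t = 1.1 * R * C (seconds, ohms, farads)
        PrintPulse(10.9e6, 100e-6);  // the "ideal" resistor for roughly 20 minutes
        PrintPulse(5e6, 100e-6);     // five 1M ohm resistors in series
        PrintPulse(150e3, 100e-6);   // the small 150K test resistor used on the breadboard
    }

    static void PrintPulse(double rOhms, double cFarads)
    {
        double seconds = 1.1 * rOhms * cFarads;
        Console.WriteLine("R = {0} ohms, C = {1}uF -> {2:F0} s ({3:F1} min)",
            rOhms, cFarads * 1e6, seconds, seconds / 60);
    }
}
```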

Next step – wire it up on a solderless breadboard. The biggest challenge was figuring out the pinout of the 555, so again I referred back to the 555 datasheet. Initially I used a smaller 150K resistor and an LED for the output just to make sure things would work without having to wait the entire time. Everything was wired up. I put the batteries in. The LED turned on. Yay! I hit the switch to trigger it again. And…

Nothing.

Nada.

I double checked my diagram, resistor and capacitor values, swapped out my test LED. Nothing. Sad face. But then I triple checked the datasheet. Something smelled fishy.

If you look at the iCircuit diagram above, we see that the trigger is negated. And the datasheet for my NE555 agrees: the trigger on pin 2 is inverted (active-low), firing when it's pulled below 1/3 Vcc. I had wired my switch as if it weren't.

Facepalm!

Simply rewiring the switch so that it pulls pin 2 to GND completely resolved the issue. Here's a picture of the breadboard with the completed working circuit:

The next step is to solder this all together and package it up nicely with the dragonfly light attached. We'll see how this pans out.


Using UISpy to Handle the Windows Security Dialog in Windows 7

In the previous post, I gave a solution for automating the dreaded “Windows Security” dialog in IE using UIAutomation. Reader Ven left this comment:

“Hey, This is exactly what I’ve been looking for, but when I seem to run it on IE, the code breaks when it reaches
AutomationElement userList = windowsSecurityDialog.FindFirst(TreeScope.Children, userListCondition);
and gives me a null value, I’m not entirely sure why, but I was also wondering where did you obtain these parameters from?”

Turns out that my code works great on Windows 8 but fails on Windows 7 (Ven's config). Originally I mentioned that I would fix up my code and update the gist. But it got me thinking: I should probably explain how I figured out the code to begin with. And what better way to do it than to show how I used UISpy.exe?

The key to figuring out the code was to use a tool called UISpy. This tool is very similar to Spy++ in that both navigate the UI elements within a window. Here’s a good explanation from the MSDN forums:

1) Is there any difference between UISpy.exe and Spy++ ?

UISpy uses UI Automation API to acquire user interface accessibility information. Spy++ uses other Win32 API to acquire window (HWND) object information on the desktop. The level of details and types of information are different because of the different information source. You can learn more about UISpy at http://msdn.microsoft.com/en-us/library/ms727247.aspx and Spy++ at http://msdn.microsoft.com/en-us/library/aa242713(VS.60).aspx and  http://blogs.msdn.com/vcblog/archive/2007/01/16/spy-internals.aspx .

The first problem is acquiring UISpy.exe. You can try to find it in one of the older Windows SDKs (supposedly), or you can download from my SkyDrive: http://sdrv.ms/13Ky16o. It’s just an executable and doesn’t require any installation.

There are lots of really good tutorials on using UISpy, so I'll just present the highlights in this post. I highly recommend visiting the docs on MSDN, or just Bing it. This post is going to show how I used UISpy on Win8 to solve the original problem, and then how I augmented my previous code to work with Windows 7. So let's start by navigating IE to my page under test and firing up UISpy on Win8:

[Screenshot: UISpy running on Windows 8, showing the desktop element tree]

The UI is quite straightforward. The Content View shows a tree control where the root node is the Desktop, and the children are all the child windows from the Desktop. The Properties window shows all of the UIAutomation properties of the item selected in the Content View tree.

Now, let's try to find the "Windows Security" dialog in the UISpy content tree. To do this, make sure the "Focus Tracking Mode" button (the icon looks like a keyboard) is selected. This lets you click on any control in any window and have the tree automatically select the item that has focus. When I select the User name edit control in the Windows Security dialog, UISpy looks like this:

[Screenshot: UISpy with the User name edit control selected and outlined in the Windows Security dialog]

As you can see, the Content View now has “edit” “User name” selected and there’s a red outline around the User name control in the security dialog. If you click around to different controls (or windows) you’ll see the same behavior where it gets a red outline and is selected in UISpy. So what are we really looking at?

UISpy is showing us the element hierarchy in the window. We see that Windows Internet Explorer is a “pane” that has a child “dialog” called “Windows Security”, that has a child “list”, that has a child “list item” called “Use another account”, which has a child “edit” control called “User name”.

At this point, I’d like to refer to the excellent tutorial on http://www.mathpirate.net/log/2009/09/27/swa-straight-outta-redmond/. He walks thru using UISpy to automate calc with UIAutomation. Go take a few minutes and read that, then come back here. Go on, I’ll wait patiently…

Great, you're back! Good read, eh? Let's see how that relates to my gist.

We need to set up the Condition objects in order to create the queries for navigating the elements. As described above, we need to find IE –> Windows Security –> List –> ListItem called “Use another account” –> edit controls –> OK button. Once again, here’s the gist:
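In rough outline, the code in the gist does something like the following. This is just a sketch using the System.Windows.Automation API (reference the UIAutomationClient and UIAutomationTypes assemblies); the exact conditions and property values in the real gist differ, and details like finding IE by its "IEFrame" class name are assumptions here:

```csharp
using System.Windows.Automation;

class WindowsSecurityLogin
{
    static void Login(string userName, string password)
    {
        // Find the IE window from the desktop ("IEFrame" is IE's top-level window class).
        var ieCondition = new PropertyCondition(AutomationElement.ClassNameProperty, "IEFrame");
        AutomationElement ieWindow = AutomationElement.RootElement.FindFirst(TreeScope.Children, ieCondition);

        // The "Windows Security" dialog is a child of the IE window.
        var dialogCondition = new PropertyCondition(AutomationElement.NameProperty, "Windows Security");
        AutomationElement windowsSecurityDialog = ieWindow.FindFirst(TreeScope.Children, dialogCondition);

        // Windows 8: the credentials live under a list -> "Use another account" list item.
        var userListCondition = new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.List);
        AutomationElement userList = windowsSecurityDialog.FindFirst(TreeScope.Children, userListCondition);

        var anotherAccountCondition = new PropertyCondition(AutomationElement.NameProperty, "Use another account");
        AutomationElement anotherAccount = userList.FindFirst(TreeScope.Children, anotherAccountCondition);

        // The two edit controls (user name first, then password) sit under the list item.
        // (If a field rejects ValuePattern, falling back to keyboard input is an option.)
        var editCondition = new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Edit);
        AutomationElementCollection edits = anotherAccount.FindAll(TreeScope.Children, editCondition);

        ((ValuePattern)edits[0].GetCurrentPattern(ValuePattern.Pattern)).SetValue(userName);
        ((ValuePattern)edits[1].GetCurrentPattern(ValuePattern.Pattern)).SetValue(password);

        // Finally, click OK to submit the credentials.
        var okCondition = new PropertyCondition(AutomationElement.NameProperty, "OK");
        AutomationElement okButton = windowsSecurityDialog.FindFirst(TreeScope.Descendants, okCondition);
        ((InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern)).Invoke();
    }
}
```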

Now let’s take a look at Ven’s comment from the previous post again:

“…the code breaks when it reaches
AutomationElement userList = windowsSecurityDialog.FindFirst(TreeScope.Children, userListCondition);
and gives me a null value”

 

Interesting. This implies that there’s no user list.

So, let’s start my Win7 VM and fire up UISpy.exe. Once I navigate my IE to my website under test, UISpy looks like the following:

[Screenshot: UISpy running on Windows 7]

Ok, this is basically the same as Windows 8.

Now, let's try to find the "Windows Security" dialog in the UISpy content tree. When I select the User name edit control in the Windows Security dialog, UISpy looks like this:

[Screenshot: UISpy on Windows 7 with the User name edit control selected]

We see that now Windows Internet Explorer has a child Dialog called “Windows Security”, that has a child “list item” called “Use another account”, which has a child “edit” control called “User name”. And how does this compare to Windows 8?

Win7: IE “pane” –> Windows Security “dialog” –>  ListItem called “Use another account” –> edit controls –> OK button

Win8: IE “pane” –> Windows Security “dialog” –> List –> ListItem called “Use another account” –> edit controls –> OK button

Doh! The Windows 8 version of the dialog has the "Use another account" listitem as a child of a "List" element, whereas on Windows 7 the listitem is a direct child of the dialog itself. If we dig in a little more, there are also a few other differences:

  1. The class name for the “Use another account” listitem is different
  2. The class name and AutomationId for the username edit box is different
  3. The class name and AutomationId for the password edit box is different

Armed with this info, let’s fix up the code to work on Windows 8 and Windows 7. The new code is as follows:
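The shape of the fix is roughly this (a sketch continuing the earlier outline, not the exact gist): try the Windows 8 hierarchy first, and if the List element isn't there, fall back to the Windows 7 hierarchy.

```csharp
// Windows 8 nests the "Use another account" list item under a List element;
// Windows 7 hangs it directly off the "Windows Security" dialog.
var anotherAccountCondition = new PropertyCondition(AutomationElement.NameProperty, "Use another account");
var userListCondition = new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.List);

AutomationElement userList = windowsSecurityDialog.FindFirst(TreeScope.Children, userListCondition);

AutomationElement anotherAccount = (userList != null)
    ? userList.FindFirst(TreeScope.Children, anotherAccountCondition)               // Windows 8
    : windowsSecurityDialog.FindFirst(TreeScope.Children, anotherAccountCondition); // Windows 7
```

Matching on control type and name like this also sidesteps the class name and AutomationId differences listed above, since those values vary between the two OS versions.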

So with a little investigation, we were able to make the code work across Windows 8 and Windows 7. I’ll leave Vista and XP as an exercise to the reader. 🙂


Automating the “Windows Security” Dialog with UIAutomation

I decided to finally learn some Selenium in order to test an internal Line of Business (LOB) web application. After a quick crash course on Selenium automation, I got a prototype for my test initialization working, but quickly hit this:

[Screenshot: the "Windows Security" authentication prompt in IE]

I was a bit miffed to learn that Selenium does not natively handle the authentication dialog. Bummer. After fighting with UIAutomation for several hours, here’s a solution I came up with. Hopefully this will save some other people time.

Note that this code only works with IE. WebKit browsers implement the authentication dialog in their own funky ways. But for my purposes, our LOB app is IE based only. #winning


How’s Your Regression Test Suite Working for Ya?

One of my favorite phone screen questions goes something like this:

You've just joined a team that has a large test automation suite – say something like 3,000 tests. These tests are run every night against the latest build. You've noticed that over the past month the pass rate for the runs varies anywhere between 70% and 90%. In other words, one day the pass rate will be 72%, the next day 84%, the next day 75%, the next day 71%, etc. – different every day. How do you go about analyzing the test suite's stability to get the pass rate up?

Depending on the tester's experience, this question can go in several directions. Naive testers will just start digging in and debug the first failure without any context, without prioritization, and without understanding what they are doing. This is the most common interviewee, unfortunately. Excellent, well-established senior testers, or those who are currently active in the test community, may question why we even care about pass rate; then we'll have a discussion about what a test suite actually is and why so many tests are run every night (e.g., why not use code coverage to determine which tests to run based on what code changed). In the two years that I've asked this question, I've never had that discussion. 😦 Good testers will start by asking lots of questions to understand the test suite better: what is getting run, whether the builds change each night or stay the same, whether the tests are prioritized, etc. Good testers will break the problem down, gather data, and methodically analyze that data in order to make sense of the chaos.

I ask this interview question because I keep running into this scenario – I keep joining teams that have large automation suites with "some amount" of test instability. I say "some amount" in quotes because, too often, the people who own the tests don't take the time (or don't know how) to understand what specifically is failing and why. Sometimes the test suites are huge (10,000 tests). But even with a suite of 500 tests, when you see about 20% of your tests failing on every run, it's human nature to throw up your hands and move on to something more exciting, because you've got other shit piling up that needs to get done for this sprint. Sure, one mitigation would be to schedule time in the sprint to address the tests. But often, people don't know where to start in order to figure out how much time needs to be pre-allocated. This gives me sad face.

Let's assume, for the purposes of this post, that we have a test suite of regression scenarios, all of equivalent priority (e.g. Priority 1 tests), and that, for whatever reason (lack of sophistication, tooling, an end-of-sprint full regression pass, etc.), we need to run all of these tests.

So what do you do? You know you have a big test suite. You know that some number of tests are always failing, some number are sometimes failing, some are always passing (and some are inconclusive/timing out). Where do you begin with your analysis? How do you prioritize your work in order to budget time for fixing stuff?

Step 1: Get the Data

If you don’t have data, you don’t have shit. If you’re not using a test runner or automation system that automatically captures test result data and stores them somewhere (SQL is a fine choice), then build one. On my current team, we use SpecFlow and run the tests via MSTest in Visual Studio. We parse the TRX files and import the data into a custom SQL database that captures some very simple data. Here’s a quick and dirty schema we whipped up to capture the basic data we need (yes, this could be greatly improved, but we wanted something simple and fast):

[Diagram: the AutomationDB schema]
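For what it's worth, pulling the basics out of a TRX file is just XML parsing. Here's a minimal sketch (element and attribute names are from the VSTest TRX schema as I recall it, so double-check against your own files; the real importer also writes the rows into the database above):

```csharp
using System;
using System.Xml.Linq;

class TrxImporter
{
    static void Main(string[] args)
    {
        // TRX files are XML in the Visual Studio TeamTest namespace.
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        XDocument trx = XDocument.Load(args[0]);

        foreach (XElement result in trx.Descendants(ns + "UnitTestResult"))
        {
            string testName = (string)result.Attribute("testName");
            string outcome = (string)result.Attribute("outcome");    // Passed, Failed, Inconclusive, ...
            string duration = (string)result.Attribute("duration");

            // The real importer inserts a row into the results database here.
            Console.WriteLine("{0}\t{1}\t{2}", testName, outcome, duration);
        }
    }
}
```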

Step 2: Analyze Data

Now you get to figure out where you want to begin. All things being equal, attack the worst tests first. If you have tests that are higher priority for acceptance, attack those first. Figure out the criteria that you need in order to get the biggest bang for the buck. In many cases, the Pareto principle applies: 80% of the failures come from 20% of the tests. Here’s a [sanitized] graph of the data we’re currently seeing in a certain test suite:

[Graph: per-test failure counts for the suite, sorted from worst to best]

This isn't the first time I've seen this graph. I've seen it on literally every single team I've worked on in my 15-year career.
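Producing that kind of ranking from the raw results is just a grouping query. Here's a sketch using a made-up in-memory result shape rather than our actual schema (a SQL GROUP BY over your own results table works just as well):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of one imported test result row.
class TestResult
{
    public string TestName { get; set; }
    public string Outcome { get; set; }   // "Passed", "Failed", ...
}

class FailureRanking
{
    static void Rank(IEnumerable<TestResult> results)
    {
        var worstFirst = results
            .GroupBy(r => r.TestName)
            .Select(g => new
            {
                Test = g.Key,
                Runs = g.Count(),
                Failures = g.Count(r => r.Outcome != "Passed")
            })
            .OrderByDescending(x => (double)x.Failures / x.Runs);

        foreach (var t in worstFirst)
            Console.WriteLine("{0}: failed {1} of {2} runs", t.Test, t.Failures, t.Runs);
    }
}
```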

Step 3: Get Off Your Ass and Fix It

You have the data. You have the graphs. You know what tests suck. So do something about it! The graph above pretty clearly shows that about 20% of the tests in this dataset suck, with about 5% sucking hard (i.e. failing all the time). And then there’s a long tail of tests that fail about 20% of the time. This may require more analysis time.

Tips for Analysis:

1. Collect metadata about the test runs. Any sort of metadata about the test run, tests, environment, and configuration can help in your analysis. For example, I was on a team where we were down to analyzing failures in the 'long tail' of the graph. A particular set of tests would always fail on the 2nd Tuesday of the month. After digging into the test run properties, we determined that those tests were run on a German build of Windows 2008 R2 only on the 2nd Tuesday of the month. Bingo! Sure enough, when we manually ran the test on German Win2K8R2, it would fail. Every. Time.

2. Make the time to fix the tests. If your team is spending the time to automate regression tests, then having them sit around constantly failing for unknown reasons is a complete waste of the team’s time. That’s time that could have been spent doing more exploratory testing, perf testing, scalability testing, etc. Stop wasting time and money.

Now the uber question is why does this all matter? What do test pass rates really tell you anyway? Well, it depends on how your team tests. You could fix all of the automated tests and make sure they pass 100% of the time, making the techniques above moot. Great. But what scenarios are you missing? What code isn’t getting covered by your tests? How much time are you spending exploring the codebase for issues? How’s the perf? The automated tests by themselves are completely pointless if the team isn’t doing testing. The regression tests are just another weapon in the testing arsenal. Make them work and work well so that the team can spend time doing some actual testing.


Visual Studio 2012: Can’t Debug Tests When Code Coverage is Enabled

We ran into a nasty problem yesterday at work where we couldn't reliably debug into code when debugging a test in Visual Studio 2012. Breakpoints weren't getting hit, lines were jumping around as if the code were optimized – all in all, a whole lot of nonsense. We finally came across this forum post, "Debugging unit tests in VS 2012 doesn't work," pinpointing the problem: when code coverage is enabled in your testsettings file, debugging tests may not work.

Sure enough, disabling code coverage from the test settings file resolved the issue.

[Screenshot: disabling the code coverage data collector in the test settings file]


Using SpecFlow for Testing

Some time ago, I was asked to put together a talk on how we've embraced BDD with SpecFlow here at Mimeo. The original deck has been shared out on SkyDrive. Here's a sort of transcript of the talk I prepared.

 


Before joining Mimeo, my only experience was working on teams that practiced big up-front planning. Releases would typically be 1 – 1 1/2 years (at best), with milestones usually lasting 4-6 months, if not longer. We would spend weeks writing dev specs and test plans, getting them reviewed, bought off, updated, etc. These were supposed to be "living documents." But in practice, once a document was signed off, it was seldom, if ever, updated at all. So when I found out that Mimeo did releases every 4-6 weeks, I was crapping my pants wondering how the hell I was supposed to plan my testing so quickly.

I fell back to what I knew – write a quick test plan. I created a simple "one-pager" (literally 1 page, not the 10 pages that many "one-pagers" at Microsoft were) to help me lay out the test methodology and high-level test cases. Since we were agile, I felt that this could truly be a living doc and could be updated from sprint to sprint. Yeah, no. Again, this was dead. It was hard to get folks to read it, review it, or understand the tests.


The other problem: tests are hard. Tests are hard to design, hard to develop, and hard for people to read and understand what they're really intended to verify. When reviewing the one-pager with devs and project managers, it was hard to relate the value of these atomic functional tests to the stories.

Then we learned about the concept of Behavior Driven Development (BDD). The fundamental concept is to "write software that matters": write only that which pertains to the user stories for a particular sprint and satisfies the scenarios of the stakeholders. BDD breaks a user story down into the high-level test scenarios needed for acceptance. With BDD, test scenarios (behaviors) should be written so that they map closely to the user stories, in a plain, ubiquitous language that everyone can understand.

“’Behaviour’ is a more useful word than ‘test’” [Dan North]

At Mimeo, we adopted SpecFlow. SpecFlow is essentially the .NET version of Cucumber, which has been popular in the Ruby world. Like Cucumber, SpecFlow uses the Gherkin syntax for writing scenarios. At the core of Gherkin are the Given/When/Then steps. For those familiar with TDD, this maps directly to the 3-A's pattern of Arrange/Act/Assert. The Given is your Arrange, where you set up your test. The When is your Act, where you perform the action under test. Finally, Then is where you Assert, or validate your output.

Here’s an example feature (user story) and scenario directly taken from SpecFlow’s template:

Feature: Addition
  In order to avoid silly mistakes
  As a lazy mathematician with a calculator
  I want to be told the sum of two numbers

Scenario: Add two numbers
  Given I have entered 50 into the calculator
  And I have entered 70 into the calculator
  When I press add
  Then the result should be 120 on the screen

What you see here is an actual working, executable scenario. The magic lies in the SpecFlow runtime, its code generation, and the test developer "wiring up" these steps into automation (visit specflow.org for details; there's a rough sketch of the wired-up steps further below). But this example has some powerful characteristics:

1. Scenarios are easy to understand by anyone. Developers, testers, project planners, business owners, customers – almost anyone can look at the above scenario and determine exactly what it is doing. There is an immediate appreciation for the value of this test to the user story.

2. Pushes quality upstream. If teams can take user stories in their sprint planning meetings and start breaking down scenarios using the Gherkin syntax, then the team has just gone thru and bootstrapped the testing effort before the planning was even done.

3. Promotes team collaboration. When the team communicates using the same language, one that is easy to understand, it's so much easier for test, dev, and PM to have discussions about whether a particular set of scenarios makes sense.

4. Gets the test team involved early.

The last point has been an important one for me. When I joined Mimeo, testers were routinely lagging behind the devs in story acceptance. In a typical 4-week sprint, this meant that tests that should have been automated were instead tested manually, with the hope of automating them pushed off to sometime in the future. Teams also didn't have a good understanding of the tests that were performed (automated or manual), making confidence ambiguous and increasing risk and uncertainty. Since moving to SpecFlow, testers have been in lock step with development, sometimes even ahead of the game. Teams generally have better certainty and confidence when accepting a story because they are on the same page about what is actually being verified.
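For the curious, here's roughly what wiring up the calculator scenario looks like as SpecFlow step definitions. The Calculator class is a stand-in so the sketch compiles; it isn't part of SpecFlow or of our codebase:

```csharp
using NUnit.Framework;       // or MSTest's Assert, depending on your runner
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    private readonly Calculator _calculator = new Calculator();
    private int _result;

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEnteredANumberIntoTheCalculator(int number)
    {
        _calculator.Enter(number);   // Arrange
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        _result = _calculator.Add(); // Act
    }

    [Then(@"the result should be (\d+) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int expected)
    {
        Assert.AreEqual(expected, _result); // Assert
    }
}

// A throwaway "system under test" so the sketch compiles.
public class Calculator
{
    private readonly System.Collections.Generic.List<int> _entries = new System.Collections.Generic.List<int>();
    public void Enter(int n) { _entries.Add(n); }
    public int Add() { int sum = 0; foreach (int n in _entries) sum += n; return sum; }
}
```

SpecFlow matches each step in the scenario to a binding by its regular expression and converts the captured groups into the method parameters.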

Lessons Learned

It's been about a year since we started using SpecFlow, so naturally there are a few lessons that we've learned the hard way.

1. Think about the scenarios and plan them out a little before you start developing them. Or at least try.


I was lucky in that the areas I began with had no tests at all. I just got in there and tried writing my step definitions, and kept hitting brick walls in the Given. There was a significant amount of work I needed to do just to set up the test environment and components into a state suitable for testing. Try to think about the things you need to do to arrange or set yourself up for the scenarios to work. Do you need databases? Mocks? Test data? How will you get these? How will you use them?

2. Scenario development is highly iterative. Have patience.

As you develop new scenarios, get new user stories, get feedback about debugging, you will find the need to refactor your scenarios and step definitions. I’ve refactored my tests several times so far, each time making it easier for new scenarios to be developed.

3. You may not succeed the first time. And that's ok.

There’s an initial learning curve to get started with BDD, so you’ll start “somewhere.” You may quickly see that the first approach sucked and it’s time for a different approach. Again, have patience.


Halloween Robot Costume

It was a very strange Halloween this year. Sandy wreaked havoc across the region, and many were left without basic necessities like power, food, and gas. We were without power for 6 days (this draft was originally written on day 5, and then subsequently eaten by OneNote, but that's another story). With Halloween falling on the second day without power, many communities decided to cancel trick-or-treating. But not in our neighborhood. The children started around 3pm, while there was still daylight, and spent only an hour or two out before darkness fell.

A couple of months ago, our 4-year-old decided that he wanted to be a robot for Halloween. So this meant that it was my turn to make the costume. And of course, a simple cardboard box wouldn't do. We needed a cardboard box with bling.

I got a lot of inspiration for the costume from Adafruit Forum poster jerrya (he did a phenomenal job). But I wasn’t nearly as ambitious and just wanted some simple blinky blinky. I knew right away that we needed to dust off the Netduino for this project. Yeah, this could have been done with a 555 or an ATTiny, but I wanted something large and in charge sitting prominently smack in the front of the costume saying, “Step back bitches, this here is fo realz.” After digging around on the Adafruit and SparkFun sites, and considering the amount of time I actually had for the build, I landed on the following BOM:

[Image: bill of materials]

The plan was to have the Netduino randomly blink 3 LEDs rapidly, sweep the analog panel meter to random values on a random interval, and use a sort of Cylon/Knightrider pattern for the light strip.
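The firmware amounted to a simple loop. Here's a sketch of the blink-and-sweep portion under the .NET Micro Framework; the pin assignments are invented for illustration, and the LPD8806 strip animation is left out:

```csharp
using System;
using System.Threading;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

public class Program
{
    public static void Main()
    {
        Random random = new Random();

        // Three LEDs, each switched through a transistor off a digital pin (hypothetical pins).
        OutputPort[] leds = new OutputPort[]
        {
            new OutputPort(Pins.GPIO_PIN_D0, false),
            new OutputPort(Pins.GPIO_PIN_D1, false),
            new OutputPort(Pins.GPIO_PIN_D2, false)
        };

        // The analog panel meter hangs off a PWM-capable pin (again hypothetical),
        // using the SecretLabs PWM class from the Netduino SDK.
        SecretLabs.NETMF.Hardware.PWM meter = new SecretLabs.NETMF.Hardware.PWM(Pins.GPIO_PIN_D5);

        while (true)
        {
            // Randomly toggle each LED.
            foreach (OutputPort led in leds)
            {
                led.Write(random.Next(2) == 0);
            }

            // Sweep the meter needle to a random position (duty cycle 0-100).
            meter.SetDutyCycle((uint)random.Next(101));

            Thread.Sleep(100 + random.Next(400));
        }
    }
}
```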

The wiring was quite straightforward, as seen in the schematic below:

The main challenge I had was power. The current limits of the Netduino's pins meant that the LEDs would have to be switched externally rather than driven directly from the Netduino itself. In addition, the LPD8806 RGB light strip required more than 4V and strictly less than 6V. After posting on the Adafruit and Netduino forums, it became clear that I simply needed 3 AA alkaline batteries. This gave me 4.5V, which met all of the criteria to drive what I needed. I filled the battery holder with 3 AAs and shoved a wire between the leads of the 4th battery slot to short the connection. Voila.

Let me also note that the Adafruit Protoshield came in incredibly handy. It eliminated the need for soldering, meaning that I could reuse virtually everything in this project. I did mention this stuff was stuck to a cardboard box, right? This ain't gonna last forever. The added bonus of the Protoshield was that all of the wiring was completely exposed, adding to the coolness factor. And it lent itself to some quick debugging and repairs on the fly (and yes, it needed some minor repairs while trick-or-treating).

[Photo: the robot costume]

My son helped out by gluing a few knobs, scrap boards, gears, and switches in various places on the costume and helmet. We could have done a lot more with extra wires, markings, etc., but I had to compete with a 4-year-old's short attention span.

     

In all, the costume came out pretty good. But most of all, my son was really happy and excited. And it melted my heart to hear him respond to compliments with, "Thanks! My Daddy made it for me." He was instantly the cool kid on the block.

It was great to see the neighborhood kids have a little fun during the tough time without power. Even though a small percentage of houses actually gave out candy, the kids were able to maintain some semblance of normalcy. And it was nice to see my son shine a little light in the darkness.
