Sunday, September 25, 2016

What secrets are your dev tools keeping from you?

Have you ever been using your favorite IDE, or the stand-alone code editor you fire up day to day, and accidentally chosen a menu option you knew was there but had never explored, only to have its usefulness blow your mind?

Or maybe you've come across documentation for a feature, class, or method that has been in your favorite programming language for ages and provides an easy way to do something that had previously been a real struggle for you?  Maybe you almost felt silly when you looked at how you had been doing it before.
Do you know how to use your dev tools appropriately and effectively?


Today, I'm going to cover some simple techniques I use to continually find new facets of my standard tool set and reap the benefits.

I find that, primarily, these boil down to 3 main activities:
  • Exploring the edges
  • Gap analysis
  • Task-driven research
Are you using your development tools to their fullest potential?

In technical work, it's very easy to fall into the rut of only working with the tools and features you know.  After all, they've served you well.  You've done some darn good work with those tools.  While sticking only with what you know might work in the short term, it's ultimately going to prevent you from improving at your craft.

I'm not just talking about obsolescence.  Sure, if you only use the same old tools for long enough, you'll find yourself being treated like a COBOL programmer in a roomful of Node.js fanatics.  No, this is about more immediate missed opportunities.  The old saying that when all you have is a hammer, everything looks like a nail definitely applies here.

If you don't take some time periodically to discover what's available to you in the tools you use every day, you are missing out on techniques and abilities that could have saved you time and maybe even improved your work product's performance.  You are short-changing yourself in ways you may never know.

This brings us to the first of the three activities that I nowadays practice almost habitually:  Exploring the edges.
 
Taking some time periodically to simply "explore the edges" of what you know about your tools can pay huge dividends.  Discovering the options your programming language of choice offers for processing XML, for example, lets you choose the best one for a particular situation.  Didn't know that most languages typically have several different methods for XML processing, each optimized for a certain context?  It's time to check them out.

Learning the more advanced code navigation and code formatting features of your favorite editor can save you hours of work time cumulatively over the course of a project.  Have you ever tried holding down the Alt-key and selecting text in your editor?  Some editors have a pleasant surprise.  Can you say "Column Select"? (More on that here: http://stackoverflow.com/questions/1802616/how-to-select-columns-in-editors-notepad-kate-vim-sublime-textpad-etc-an)

This isn't limited to technical tools, either; it applies to just about anything. The other day I showed a co-worker the Title Case text feature hidden in Word's format menu.  Another team member heard us talking and dropped in to see it, too.  These little time-savers add up to big productivity gains and boost our confidence with our tools as well.

So what's the best way to explore the edges?

Truly, the best way to dig into those areas you don't know yet is to just start poking around.  Go to those menus that you never use. You know, the ones you are always too busy to click on.  Check out the options and see if you can learn what they are for through context.  If they suggest a deeper capability, why not read up on it? Indulge your curiosity.

Having explored the edges for a while and seen what lies hidden in plain sight, you can then move on to the second of my techniques for flushing out new and interesting features of familiar tools:  Gap analysis.

The concept of gap analysis is usually applied to business results as compared to a benchmark.  The "gap" in this case refers to the difference between what we are getting and what we want from a particular system or business process.  If you happen to be familiar with more than one tool of a particular type, or more than one programming language, you can apply the same idea by treating your knowledge of one tool as a benchmark against which to compare the other.  If language X provides feature Y, for example, then where is feature Y in language Z?

For example, I was recently working with some JavaScript code involving several lists of objects and was attempting to process them in certain ways.  While we use JS frameworks that make this sort of thing easier (jQuery/KnockoutJS), I found it odd that I didn't know of any way to process an array of objects in plain JavaScript apart from a simple for loop. My JavaScript experience runs back to the mid-nineties, when it originally shipped with the Netscape browser as "LiveScript," and I've periodically updated my knowledge of it since then.  Somehow, though, I missed the introduction of Array.prototype.forEach in ECMAScript 5, which accomplishes just what I wanted.
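
For the curious, here is a minimal sketch of the difference; the array and its contents are made up purely for illustration:

    var orders = [
      { id: 1, total: 20.00 },
      { id: 2, total: 15.50 }
    ];

    // The way I had always done it: a plain for loop with an index variable.
    for (var i = 0; i < orders.length; i++) {
      console.log(orders[i].id + ": " + orders[i].total);
    }

    // The ECMAScript 5 way I had missed: forEach takes a callback and
    // handles the iteration bookkeeping for you.
    orders.forEach(function (order) {
      console.log(order.id + ": " + order.total);
    });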

So, how did I know to even look for this?  Primarily, it was my knowledge of other programming languages, namely Perl, Java, and C#, all of which have either built-in functionality or readily available tools that make just this kind of list processing really easy.  I often find that I hesitate to code something when I get a feeling that there just has to be an easier way.  I start casting about, trying to find the bit of missing knowledge.  Finding the easier way may take me a few minutes more, but the pay-off in future productivity easily earns back the lost time plus interest.

Finally, having Explored the Edges and tried Gap Analysis for a bit, it's almost automatic to pick up the third and final habit:  Task-driven research.

Once you've exhausted the options right in front of you, task-driven research allows your day-to-day work to suggest new explorations.  Don't you wonder if there's an easier way to do that thing you just got assigned and are planning to code from scratch?  Surely someone's done it before.  Maybe there's even a way to do it hidden in the lesser known reaches of the framework you're already using.

Aren't you sick of copying and pasting the same boilerplate code over and over again?  Maybe your editor has a templating ability you haven't discovered yet.  Or maybe there's a templating technique that doesn't rely on a specific editor at all.  Let your current needs and interests suggest where you should explore next.  Use this as an opportunity to scratch an itch.

I've advocated "tinkering for fun and profit" before, but this is a bit different.  "Tinkering" implies experimenting and exploring with no defined end goal and usually no direct application to your daily work.  This exploration is a purposeful activity, even if the process is a bit organic and seemingly haphazard.  The goal is to increase your knowledge and skills related to your tool set, and its applicability to your work is guaranteed, because you are enhancing your understanding of the things you use every day.

Let's recap the main points:
  •  Sticking with what you already know well about your tools can hold you back from learning new and better ways of working
  • You can easily learn more about your everyday tools by:
    • Exploring the edges and digging deeper into what's right in front of you
    • Performing a gap analysis to let your knowledge of one tool suggest the existence of solutions in another
    • Using task-driven research to let your work point you toward new horizons where you can find even more new techniques and features
Schedule some time to do this each week, even if it's only a few minutes, and you'll begin to see payoffs almost immediately.  I can't predict what secrets you will uncover, but I am confident that it will "up your game" and be yet another step toward becoming an Above Average Programmer.

Friday, April 1, 2016

Version Control Concepts - An Overview for Team Leads, Managers, and Business Owners

I gave a talk on this topic this past Wednesday at TechExpo, the local annual tech conference here in Tallahassee.  It was an overview of version control systems, their use in individual and team environments, and a few of the things that managers and business owners should keep in mind related to them.

For anyone interested in the presentation, it's available via Google Slides here.

If you're on the fence about trying version control for your project, you might also find my blog post Use A Time Machine to be a source of inspiration.

Sunday, February 14, 2016

Debug Your Code Like Sherlock Holmes


As a child I was an advanced reader.  A lack of athletic interest and a kindergarten teacher interested in early reading development contributed to what turned into a rather odd situation.  The small-town school I attended was K-12, resulting in a very limited, combined library which was segmented into "elementary reading" and "upper reading."  In elementary school, during our library trips, I found myself begging for permission to visit the "High School" side to find books that would challenge me a bit more than Dr. Seuss and "Choose Your Own Adventure."  I never quite understood why the dowdy, grumpy librarian who ruled that universe was so reluctant to give me access, but I was relentless and swayed her in the end.

It was on one of my forays into these wonderfully mysterious shelves of the library, stuffed with thick, musty smelling volumes bound in cardboard and canvas, that I stumbled upon Arthur Conan Doyle's Sherlock Holmes.  I knew about Holmes already, of course, and had even read a few children's books based on some of the stories.  I'd probably even seen a movie or two, but I hadn't read the originals.  I tumbled into these books like Holmes and Moriarty going over Reichenbach Falls and never quite crawled back out of them.  They had a profound effect on my view of the world and the way I interacted with it.

Probably the most influential concept I found in Sherlock Holmes' worldview was the simple Victorian belief that anything in the universe could be understood given a proper application of the human mind and senses.  If you couldn't understand something, it was because you were approaching it from the wrong angle or you didn't have enough information.  This belief, simultaneously deterministic and optimistic, meant that I could achieve anything, solve any mystery, if I just focused hard enough and applied the right methods and didn't give up.

This rather lengthy preface is a lead-in to what is, for me, only a recent revelation:  I probably owe Arthur Conan Doyle quite a bit of credit for preparing me for the world of software development, specifically for the arduous task of tracking down bugs in what is often someone else's code.

While I'm completely inadequate for the task of summarizing the skills that Sherlock Holmes can bring to bear on a case, I have managed to identify four main things you can do to help boost your own code-oriented investigations:
  • Work backwards from the scene of the crime
  • Use a magnifying glass
  • Separate the wheat from the chaff
  • Bring in a sidekick
Each of these deserves its own explanation and a quote from the legendary sleuth himself, so let's start with the first.

Backwards from the Scene of the Crime

In solving a problem of this sort, the grand thing is to be able to reason backwards. That is a very useful accomplishment, and a very easy one, but people do not practice it much. In the everyday affairs of life, it is more useful to reason forwards, and so the other comes to be neglected. There are fifty who can reason synthetically for one who can reason analytically.
Sherlock Holmes - A Study in Scarlet

As Holmes states, it's a rare ability to work backwards from the end result of something.  This is never more evident than when you're staring at the typical stacktrace a crashed application vomits onto your console.  Most coders, when faced with such a mess, reach immediately for the debugger and try to re-run everything in an attempt to get the compiler to give them the answers.  It's worth taking a moment, however, to think about what could have led to the situation first.  It just might save you significant time and trouble.

The ability to "think like" the compiler or the interpreter is an invaluable skill that, for the most part, can only be acquired through practice.  I picked up this ability by naively stumbling into software development as a tinkerer in the mid-'90s, when IDEs were expensive and rare.  Plus, since I was building web sites, Perl in a Unix environment was the language of choice, and there were few dev tools for that combination. It was just me, my text editor, and the Perl runtime binary.

With the web still in its infancy, and having no money for books, I essentially learned Perl from the Unix "man" pages and a few poorly written example scripts I found on FTP sites.  This was an extremely frustrating way to learn a programming language, and it consisted of me writing a few lines of code, attempting to run them with the Perl runtime binary, and examining the output of the syntax checker to see what I'd done wrong.  Slowly and painfully, I began to understand how things worked, what the compiler expected, and what would actually execute.

Going through this pain helped me to learn to trace through my code piece by piece, mutation by mutation, always keeping in mind what the compiler was doing in response to my code.  Nowadays, this is rarely necessary, as the IDE will immediately show you any syntax error, and there are probably a dozen analyzers you could use that will tell you why that "for" loop is better off as a LINQ query or why the FramJib library's flibbertyjibbet() call is deprecated.

This is why I believe pausing to mentally trace your code's execution is more important than ever for gaining the ability to troubleshoot and debug.  If we don't disengage from our development tools' assistance every once in a while, our minds go soft.  It's like going for a run instead of a drive to keep your muscles from going flabby.  Those analyzers will only go so far, and if that isn't far enough to solve your bug, you are on your own, buddy.

Once you've got your debugging muscle flexed and ready to go, how do you proceed?  Start from the scene of the crime:  Find the line of code that experienced the error and work your way backwards.  What went wrong?  Is it a simple memory overflow or null pointer exception?  Once you have that in mind, try to imagine what sequence of events could have led to the error.  If it's not immediately obvious, you may need to invoke the next step of Holmesian debugging....
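
To make that concrete, here is a small, entirely made-up JavaScript example; the function names and the stale-cache scenario are hypothetical, but they show the pattern of starting at the line the stack trace blames and stepping backwards through its callers:

    var cache = {};
    cache[7] = { id: 7 };  // a stale, partially populated entry

    function fetchInvoice(id) {
      return { id: id, summary: { total: 99.95 } };
    }

    function renderInvoice(invoice) {
      // Scene of the crime: the stack trace points here, because
      // invoice.summary is undefined when the cached copy is used.
      console.log("Invoice total: " + invoice.summary.total);
    }

    function loadInvoice(id) {
      // One step back: which path could produce an invoice without a summary?
      // The cache lookup is the first suspect worth questioning.
      var invoice = cache[id] || fetchInvoice(id);
      renderInvoice(invoice);
    }

    loadInvoice(7);  // throws a TypeError deep inside renderInvoice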

Where's My Magnifying Glass?

"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Sherlock Holmes - The Adventure of the Copper Beeches

If simply viewing the scene of the crime does not reveal any immediate leads, you may need to look more closely at the surrounding area.  For Sherlock Holmes, this meant whipping out the old magnifying glass and gathering clues, poring over every inch of the crime scene.  In programming, this often means using your IDE's debugger to check the values of the variables and object properties in effect at the time the error occurred.  Unfortunately, if we are avoiding the crutch of an IDE debugger, or if we are troubleshooting a production runtime error where an IDE cannot be used, we may not have that luxury.  In this instance, we may need to fall back on more primitive, but still effective, techniques.

In the days of Perl CGI programming, the "print" statement was a common debugging technique.  You would print the value of whatever variable you were interested in to the "console" and it would appear in the output of your program.  Et voilà, poor man's debugger.  Nowadays, runtime logging has been raised to an art form, and there is almost certainly a logging library or ten you can wire into your program to get real-time logging output.  While this takes time and careful implementation, the ability to "turn on the firehose" of data in a runtime environment when you want it, and turn it off when you don't, can prove invaluable in ways that an IDE debugger just can't.
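
As a rough sketch of that idea (the level names and the LOG_LEVEL environment variable are just one possible convention, not tied to any particular logging library):

    var LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
    var threshold = LEVELS[process.env.LOG_LEVEL] !== undefined
      ? LEVELS[process.env.LOG_LEVEL]
      : LEVELS.info;

    function log(level, message, data) {
      // Only emit messages at or below the configured verbosity, so the
      // "firehose" can be opened or closed without touching the code.
      if (LEVELS[level] <= threshold) {
        console.log("[" + new Date().toISOString() + "] [" + level + "] " + message,
          data || "");
      }
    }

    log("info", "processing order", { id: 42 });        // shown at the default level
    log("debug", "raw request payload", { items: 3 });  // shown only when LOG_LEVEL=debug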

There are many sources for clues.  Some are offered by the environment you are working in and may go unnoticed.  For example, if you are tracking down a bug in a web application, how often have you gone to the web server's HTTP logs and analyzed the traffic going back and forth?  Tools like Wireshark and Fiddler can provide a dynamic view of this same information (and more) as it happens, but if they weren't running at the time of the exception, the web server logs can provide some crucial insight.  Cross-referencing the times of the log entries there with the times of the entries in your debug log can be very enlightening.

In a more complex situation, the server's main logs may also hold some nuggets of information: on Linux, the syslog file; on Windows, the Event Viewer.  Again, cross-referencing the times with the other data you have helps you put together a picture of what was going on at the time of the exception.

Other sources of data include:  The sysadmins ("What was changed recently on the server?"), the user who experienced the error ("What had you done just before the error?  Any recent changes on your computer?"), and your source control system (you do use source control, right?) to find out what recently changed in your own code.

Perhaps in gathering all this data, you may identify something that looks out of place.  A missing value where one was expected.  A string that looks a little too long.  An object whose properties are not fully populated.  A line of code that was recently changed for what seems to be no good reason.  These are the "suspects" you can approach first.  If the clues fit into one of the potential scenarios you concocted while working backwards from the scene of the crime, you may have just found your culprit, or at least have narrowed down your search.

If instead you end up with a huge load of data and no clear leads, you may need to try another of Holmes's sleuthing techniques....

Separating the Wheat from the Chaff

"How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?"
Sherlock Holmes - The Sign of Four

I often annoy my co-workers when they present me with a coding issue and a potential cause, and I say "That's impossible."  I'm not purposefully trying to irritate them when I say this (although I should try to kick the habit of saying it quite so bluntly).  Rather, I'm trying to state that unless I am mistaken in some very basic understanding of the execution environment (at a level that would probably require me to relearn my job from scratch), what they believe is happening is literally not possible.  I'll give an example:

Once one of my co-workers came to me with a very bothersome problem.  He was changing some JavaScript on a page and loading it in his web browser, but the effects he wanted from his code were not appearing on the page.  He had been changing code for several minutes, but was seeing no effect.

He showed me some of his code, which involved dynamically modifying the display of the web page based on the properties of an object using KnockoutJS (an excellent library for this kind of thing).

I could see a place where the value of the object's property named "Type" was to be displayed.  When we viewed the page, a value appeared.  Then I saw in the JavaScript that the object's property name was "DocumentType" and no "Type" property was visible.  I was immediately skeptical.

"The web page you are showing me could not have been created with this code,"  I said.  After recovering from his justified annoyance from my statement, my colleague took another look and found that, yes indeed, his web server was focused on a different version of the files in question and none of his modifications were being used.  Once we fixed the configuration issue, he quickly corrected the code.

Alan Watts, a philosopher who gained popularity in the 70s, liked to say "Problems which remain persistently insoluble should always be suspected as questions asked in the wrong way."  I try to keep this in mind when I find myself hitting a brick wall when debugging something.  I go back to "first principles" and build from there:  

  • Is the event handler responding to the user's button click actually the one I think it is?
  • Is my browser executing my code and not a cached version?  
  • Is the web server running my code and not something else or a cached version of it?  
  • Am I connecting to the right database and not some other copy of it?  
I work my way through the stream of execution, from whatever the user did to start the chain of events (a button click, etc.) all the way to the rendering of the final result (assuming we got that far before the crash).

Often this exercise will lead to insight or I may just stumble over the culprit.  There's something to be said for plain old dumb luck, and believe me, I've been its beneficiary more times than I can count.

If, however, all this detective work still leaves you without a final answer, you can try one additional trick that Holmes employed constantly.  It's one which helped him far more than he might have wanted to admit....

Bring in a Sidekick

"Come, Watson, come!’ he cried. ‘The game is afoot. Not a word! Into your clothes and come!"
Sherlock Holmes - The Adventure of the Abbey Grange

There is a reason that Sherlock craves Watson's participation in his adventures despite the pointed jabs he delivers regarding Watson's lack of deductive abilities.  As hard as it may be for an introvert like me to admit, there is, in my view, nothing more helpful to the aspiring detective (or troubleshooter) than having someone to talk to.  Quite often, a member of our team will pull me aside to show me a problem they are struggling with, and in the mere act of describing it, they discover the answer.  What's more, sharing the mystery with someone else brings in a new perspective and fresh ideas.  Pair programming is in many ways a tacit confirmation that this approach is effective.  New ideas and angles can power you forward to a final answer.  If nothing else, at least you'll have affirmation that you're not crazy and this problem really is a hard one.  Mystery loves company. (Sorry, I couldn't resist.)

The next time a particularly criminal bug gives you the slip, leaving only a stacktrace as its fingerprint, see if these techniques, espoused by the most famous detective of all time, help you track down the scoundrel.  And if they do, why not post a comment here to let us other Above Average Programmers know how it worked for you?  As Holmes himself said, "Nothing clears up a case so much as stating it to another person."

Tuesday, January 12, 2016

Coding Without a Net - The Week of No IDE, Days 3 - 5



This is the third in a series regarding my experiences in dropping the use of an IDE for my day-to-day programming tasks.  For some background, you may want to check out my initial post where I lay out my plans and the post covering Days 1 & 2.

The third consecutive day of the Week of No IDE found us fully tooled up and with an understanding of how we would interact with our code and the project.

One major challenge we had encountered in Days 1 & 2 involved adding new source code files to our project.  Reviewing this problem took up more than a little time.

When you write code in Visual Studio, each individual source code file must be referenced by the common project file in order to become part of the build.  The project file itself may, in turn, be part of a larger "solution" file containing references to multiple projects, which can be part of a sort of mega-build.  These references are not just the locations of the files, but also information about the role each file plays in the project.  For example, a TypeScript file is referenced differently from an HTML or C# code file.  While this may seem great for organization and make it easy for Visual Studio to create pretty displays, Microsoft decided there was no need to make it easy to add these references without using Visual Studio itself.

The end result was that Todd and I found ourselves struggling to efficiently add a new TypeScript file to our project.  The process looked something like this:
  1. Create the TypeScript file in the location where we want it to reside
  2. Open the main project file, which is thankfully in XML format, in Vim
  3. Locate an existing TypeScript file reference in the mass of XML
  4. Duplicate it, placing the copy below the original
  5. Change the path in the copy so that it references our new file (see the sketch after this list)
  6. Save the project file
  7. Build the project and ensure that our new file is included in the build
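For anyone curious what step 5 actually looks like, here is a rough sketch of the relevant fragment of the project XML after the edit.  The file names are invented; the TypeScriptCompile item type is the one Visual Studio's TypeScript tooling typically uses for .ts files (C# files use Compile instead).

    <ItemGroup>
      <TypeScriptCompile Include="Scripts\ExistingFeature.ts" />
      <!-- the duplicated reference, with its path pointed at the new file -->
      <TypeScriptCompile Include="Scripts\NewFeature.ts" />
    </ItemGroup>
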
The same process worked for C# and HTML files, as well as others.  Although the process worked, it was not pretty or efficient.  I considered writing a quick utility to do the dirty work for us, but we'd already lost time, and I was more than a little annoyed that no one had done this already.  A Google search turned up nothing apart from StudioShell, which does not work outside of Visual Studio (no matter what its docs may say to the contrary), and a promising post on StackOverflow. Both fell short of my goal, so I just stuck with editing the XML.   I may yet revisit this and do it myself.

(Don't get me wrong about StudioShell, however.  We already use it and it's a really great tool once you learn it.  You can do some very powerful stuff with it in Visual Studio.)

File-management challenges aside, Day 3 turned out to be relatively productive for Todd but less so for me due to administrative duties kicking in.  The good news is, we managed to move several items on the Kanban board during our first few days.

Both Todd and I took a week-long break for Christmas in the midst of this experiment, so Day 3 was actually the end of our first "week."   Upon our return, we did not go back into pair programming mode.  Although our team members often pair up, we don't practice continual pair programming, so our final two days were executed solo.

During this time, I continued to work through difficulties with using my editor and the APIs, but I did manage to get into flow a few times, so much so that I neglected to tweet what I was up to.  Once I got past the desire for the editor to catch all my mistakes and instead let the build process point me to the problems, I found I was doing ok.

An interesting side effect of "build-based debugging" is that your time-to-build becomes critical.  I actually tracked down a few issues that had been causing our project builds to run longer than they needed to and shaved a few seconds off each build, which was huge for my new workflow.

Using the build as your syntax check also forces you to focus.  Visual Studio, like most IDEs, will analyze your entire project and show you a list of errors.  This is usually considered a good thing, but it can make you lazy, allowing you to jump around from file to file making wanton edits, without concern about their effect on other files in the project because the IDE's error list will warn you when you run off the rails.

In my case, the news only arrived when I built, and the news was often not pretty.  So I found myself building far more often to ensure I hadn't gone too far astray.  This greatly focused my attention on where I was working at the time, and I do think it helped not only my concentration, but also my awareness of the effects of my changes.  I became a little more conservative in my changes to public method signatures, for example, and was more inclined to make my methods private, allowing external needs to force them into the public space.  This is, of course, good coding practice anyway, but I did feel new, positive influences on my behavior as a result of my workflow.

Overall, I think it was an interesting exercise that gave us some insights into our knowledge of our toolset, exposed us to some old but still excellent utilities, and revealed just how deeply IDEs have become embedded in the workflow of software development.  I don't think our brief experience truly answered the question of whether IDE-reliance is good, bad, or just plain ugly, but it was a fun diversion and I think I did come away with a little more situational awareness of my coding activities.

Saturday, January 9, 2016

Coding Without a Net - The Week of No IDE, Days 1 & 2

A few weeks ago, I posted an entry speculating on the effects of Integrated Development Environments (IDEs) on programming skills and productivity.  At the end of December, a colleague of mine and I decided to give coding without an IDE a full test-drive.  Our flagship product's code base is written primarily in C# and TypeScript (which transpiles into JavaScript at build time).

My colleague Todd and I had decided to take on our "Non-IDE" work in pair programming mode.  This was primarily because we suspected that our combined knowledge of the programming APIs that we use on a daily basis would be more complete than what we knew individually.  It was also, I must admit on my part, a little bit of insurance that I would stay the course for at least a few days.  Nothing increases accountability like someone else doing the hard stuff with you.

Having Todd work with me also gave me the ability to occasionally tweet about our experiences.  I collected those tweets in a Storify post, although my tweeting was too scattered and intermittent to make them a good record of the experience.  I did leave a few technical challenges we encountered documented solely in the tweets, however, so they are worth a look.

The first thing we noticed when preparing for our first day was that our selected editor, Vim for Windows, was going to be a challenge in itself.  I consider choosing Vim something of a mistake on our part.  While the goal of the week was to free ourselves of the IDE, it was not my intent to sacrifice even more productivity to learning a new editor.  Looking back, I would probably have avoided Vim for this week, but I have to say I fell in love with Vim and its power.  So perhaps the mistake was a good one to make.

After blowing far too much of our first day on getting Vim configured the way we wanted it, and on finding syntax colorizing for C# and TypeScript, we finally got to work.  What we found was that Vim has some pretty good multi-file editing capabilities, even offering tabs and file system browsing.  This took some getting used to, but in the end, it sufficed.

We also found that our methods of building and performing source control operations required us to keep a few command-line windows open. I use ConsoleZ, which I have found to be a top-notch command console for Windows.  There were also a few times we utilized Cygwin's console to give us access to familiar Unix commands like rgrep and tail, which allowed us to recursively search all our code files for keywords and to actively monitor log output, respectively.  These are functions normally handled by the global search and debugger features of our IDE, Microsoft Visual Studio.

On the topic of debuggers, I had previously found a command-line debugger for .NET from Microsoft called MDbg, but we decided that if we were going old-school, giving up the debugger entirely was appropriate.  Our app uses the Log4Net logging library, so we leveraged that, along with "tail -f" in Cygwin, to monitor the execution of our code.  This worked pretty well, especially with ConsoleZ's buffer search feature.  The ability to turn sections of logging on and off was useful in tuning the output.

My two biggest challenges during this period were resurrecting my old Vim skills and recalling the .NET API well enough to code effectively.  Both became less of an issue as time went by, but I never fully got back up to my typical speed on either count.

Todd had less of a challenge with Vim.  He is an old Unix hand and had spent many more hours using Vim than I had.  Vim is driven by keystrokes, and his muscle memory was impressive.  Despite having been forced to use it in Unix environments for years, I had never really embraced it.  Several cheat sheets by my keyboard kept me moving, but it was tough going.  Despite the difficulty, I stuck with it and did get better.  The end result:  I like Vim and its focus on simple keystroke combinations for navigating and editing.  I think I'll use it from now on as my go-to all-purpose editor when it's available.

The .NET API was where my advantage lay, as I have several years more invested in C# and .NET than Todd.  Even I found it difficult, however, to recall the less often used methods and their parameter requirements.  I found myself relying on the "type, compile, correct" method when I had an inkling of what to do, and relied on Google for the rest.  In the old days, I would have had a book by my side to look things up.  Fortunately, we are beyond that now.

My memory of .NET's APIs, as well as our own internal APIs, did become deeper as we worked, and I discovered I was inspecting the code surrounding my edits a bit more closely than usual.  This was, of course, because the IDE typically flags not only syntax errors and API misuses but, thanks to our use of ReSharper, also gives us excellent stylistic assistance while coding.  I have to admit that I missed ReSharper more than I missed the IDE in general.

All in all, the first couple of days were a period of self-discovery and frustration, with little actual work getting done.

In my next post, I'll pick up from Day 3, in which our spinning wheels actually catch some traction.