• Software Carpentry at UNH

    This summer we held a Software Carpentry (SWC) scientific computing workshop at the UNH School of Marine Science and Ocean Engineering (SMSOE). The goal of the workshop—and of Software Carpentry in general—was to improve the efficiency of researchers by exposing them to computing tools and workflows they may not have seen in their undergraduate training. The workshop took place over two days (August 27–28) at the Chase Ocean Engineering Laboratory, where grad students, research staff, and even professors brought in their laptops and actively learned Bash shell commands, Python programming, and Git version control. See the backstory to read why and how things came together.

    SWC found us two great volunteer instructors: Byron Smith, a PhD candidate in ecology and evolutionary biology at the University of Michigan, and Ivan Gonzalez, a physicist and programmer at the Martinos Center for Biomedical Imaging in Boston and a veteran Software Carpentry instructor. We also had a workshop helper: Daniel Hocking, a UNH PhD alum in Natural Resources and Environmental Studies currently working at the USGS Conte Anadromous Fish Research Center.

    27 of the 29 registered participants showed up bright and early at 8 AM on the first day, filling the classroom just about to capacity. Despite some minor issues—mainly with the Nano text editor and the new release of Git for Windows—everyone’s laptop was ready to go within a half hour or so.

    In the first lesson, Ivan taught automating tasks with the Bash shell. Learners typed along, using the command line to copy and rename files, search through text, and even write scripts to back up data: tasks that take a long time to do manually.

    Byron took over for the first day’s afternoon session, diving into Python by introducing the amazingly useful IPython Notebook. The lesson started from the absolute basics, to accommodate the learners’ large range of experience, though even the MATLAB experts were engaged by the foreign (in my opinion, nicer!) syntax. The crowd went wild when they realized it was possible to create a document with runnable code that also includes equations rendered with LaTeX.

    The second day had a lower turnout—23 versus the first day’s 27—which can be partially attributed to scheduling conflicts. It was a challenge to find a full two-day block that worked for everyone, especially with participants coming from a few different organizations. It was also freshman move-in day, so these people were extra brave to be on campus!

    Ivan kicked off day 2 by introducing learners to Git, the most popular tool for tracking versions and collaborating on text files—especially code. Git can be somewhat hard to grasp conceptually, so Ivan explained the various processes with some diagrams on the chalkboard. I knew this was an effective technique when my advisor (a workshop attendee) later told me he was putting changes to a paper we’re writing in the “staging area.”

    After lunch, Byron picked up where we left off with Python, teaching how to modularize programs with functions, and how to write tests to ensure code does what we think it will, before it has to do it for real. Ivan took the last hour to present a capstone project that wrapped all the tools into a single workflow. Despite being a little crunched for time, most learners were able to fork a repository on GitHub (probably the most common mode of software collaboration these days), clone it to their local machine, then write some Python to download and visualize data from NOAA’s National Data Buoy Center, some of which is collected by researchers affiliated with the SMSOE!

    To visualize how things went, and show some more Python programming examples, I put together a “word cloud,” shown in the upper right, from the learners’ positive feedback, and some stacked bar charts (below) from the participation data—consisting of responses to the initial interest assessment emails, registrations, and daily sign-in sheets. The code and data to reproduce these figures are now available in the site repository.

    As expected, a significant number of participants came from Mechanical Engineering (ME), with the remainder coming from other departments tied to the SMSOE. Learners from Earth Sciences didn’t seem as interested in the workshop at first (or they don’t like replying to emails); quite a few registered, but a few dropped off at each step from there onward. The Molecular, Cellular, and Biomedical Sciences (MCBS) department was consistently represented all the way through, while it seems someone from ME may have forgotten to sign in on the first day.

    Workshop participation by department.

    Grad students made up the majority of participants, as expected, but we ended up losing a couple along the way. We had two research staff and three professors participate, with the number of professors remaining steady at each stage. Way to commit, profs!

    Workshop participation by job title.

    Overall, the workshop went very well. Byron commented on how an event like this is about the best teaching environment you can ask for, since everyone wants to be there. Most importantly, quite a few learners let us know that they picked up some new skills that will help them do their research more efficiently and effectively.


    None of this would have been possible without the support of a handful of people: first and foremost, Ivan, Byron, and Daniel, who donated so much of their time. Thanks to Greg Wilson for caring enough about the cause to start the Software Carpentry Foundation. Professor Brad Kinsey, chair of the ME department, and the SMSOE Executive Committee provided funding, for which they deserve a huge thanks. Finally, thanks to Sally Nelson and Jim Szmyt, who helped with organization and logistics. The first ever Software Carpentry workshop at UNH was a success and a lot of fun. Hopefully someone will organize another one soon!

  • Software Carpentry at UNH: The backstory

    I first heard of Software Carpentry in a YouTube video of its founder Greg Wilson’s talk at the SciPy 2014 conference.

    The talk really impressed me. I was convinced that their work was absolutely crucial for an open and more productive research community to exist. This was a mostly selfish thought, by the way. I was tired of reading papers with seemingly awesome results, but not having access to the source code to reproduce (and maybe build upon) those results. I was also fed up with emailed Word documents and the resulting lost changes and half-baked version-control-via-file-name. I was never taught any better though, which is exactly why we need groups like Software Carpentry!

    Shortly after watching the talk, I submitted a form on the Software Carpentry website requesting more information regarding hosting a workshop at UNH, where I am currently wrapping up a PhD in mechanical engineering. I ended up getting some information, but decided I had enough on my plate with research and put the idea back on the shelf.

    Fast-forward six months or so. I received an email from the man himself, Greg Wilson, asking what I thought about hosting a workshop at UNH. Almost star-struck, I was reinvigorated and decided to try to get the ball rolling. Besides, graduating is overrated anyway.

    I had some emails sent around my department and the School of Marine Science and Ocean Engineering (SMSOE) to gauge interest. To my surprise, more than 20 people responded, which was enough to move forward. The next task was to find funding for the administrative fee and for the volunteer instructors’ travel and accommodations. A couple of proposals later, the SMSOE and ME department agreed to fund the workshop at an approximate 80/20 ratio.

    The workshop was to take place during the summer, which made finding instructors nearby (to save on travel costs) a little tricky. A few rounds of prospective candidates later, Byron Smith, a PhD candidate in ecology and evolutionary biology at the University of Michigan, and Ivan Gonzalez, a physicist and programmer at the Martinos Center for Biomedical Imaging in Boston, agreed to teach our workshop. Software Carpentry also found us another helper, Daniel Hocking, a UNH PhD alum in Natural Resources and Environmental Studies currently working at the USGS Conte Anadromous Fish Research Center. As volunteers for such a great organization, these guys were sure to do an awesome job.

    I was excited. We had the funding, the room, the instructors, and the dates set. The only thing left was to get some learners registered and make it happen!

  • From WordPress to Jekyll

    After reading many articles and comments extolling the virtues of static HTML (speed, efficiency, simplicity, etc.) and being a very happy GitHub user for a while now, I decided to migrate my personal website from WordPress to Jekyll and use GitHub Pages for hosting.

    After an unsuccessful attempt to use jekyll-import with my old database credentials, I used WordPress’s built-in export tool to generate an XML version of the site and ran that through jekyll-import, but didn’t like that posts weren’t translated to Markdown. I finally settled on Exitwp, which generated Markdown with YAML front matter. Exitwp did a reasonable job, though the amount of tweaking required would have been prohibitive for anything beyond ~20 posts. I manually downloaded and replaced all images, tweaked YAML metadata, reformatted all tables (originally generated with a WordPress plugin), and changed some LaTeX syntax, but got it done in a few hours.

    One unexpected snag was that Kramdown, the default Markdown renderer, did not support GitHub-flavored syntax highlighting in fenced code blocks, e.g.,

```python
print("hello world")
```

    This was remedied by switching over to Redcarpet, which was as simple as editing the markdown entry in _config.yml.
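
    For reference, the change amounts to a single line in _config.yml (this applied to the Jekyll versions of that era; newer Jekyll releases have since dropped Redcarpet support):

```yaml
markdown: redcarpet
```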

    I’ve only made a couple visual tweaks to the default Jekyll theme so far and I’m actually quite pleased with it. So far Jekyll seems like a winner, allowing me to easily work offline in Markdown, ditch my old web host, and hopefully write more frequently!

  • Preparing SolidWorks CAD models for OpenFOAM's snappyHexMesh

    This post is a quick tutorial for preparing geometry in SolidWorks for meshing with OpenFOAM’s snappyHexMesh. Note that these tips are mainly for external flows, but should generally carry over to internal geometries.

    1. Clean it up

    This step is common to all types of simulations. Unnecessary details should be removed from the CAD model such that the behavior can be simulated with minimal geometric complexity—use your judgement. For example, if simulating aerodynamics around a car body, bolt heads may be unnecessary. If working with an assembly, I prefer to save it as a single part first. Small features can then be removed, and the part can be simplified such that it is a single watertight object.

    2. Get the coordinate system right

    Make sure the SolidWorks model is located where you want it to be in the simulation, i.e., oriented properly with respect to the coordinate system of the OpenFOAM mesh. SolidWorks defines the positive z-direction as normal to the front plane, whereas I prefer it normal to the top. Instead of using move/copy features, you can create a new coordinate system to select when exporting from SolidWorks.

    3. Export to STL

    SolidWorks STL export options.

    Once the model is ready, select “File->Save As…” and pick “*.STL” under “Save as type:”. Next, click the options button. Make sure the option to not move the model into positive space is checked, that the units are correct (OpenFOAM works in meters), and that you are saving as an ASCII, not binary, STL. Note the capital letters SolidWorks uses in the file extension by default, and that Unix file systems are case-sensitive.
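
    Once copied into the case’s constant/triSurface directory, the exported file can be referenced from snappyHexMeshDict’s geometry sub-dictionary along these lines (the file and surface names here are illustrative, not from a specific case):

```
geometry
{
    body.STL                 // exported file, placed in constant/triSurface
    {
        type triSurfaceMesh;
        name body;           // name referenced by refinement settings
    }
}
```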

    You should now have an STL compatible with snappyHexMesh, and are ready to embark on the sometimes treacherous path towards its optimal meshing settings.

  • Track changes to OpenFOAM case files with Git

    OpenFOAM cases are set up through a hierarchy of text files. Git is made to track changes in code, which is text, making it a perfect candidate for tracking changes in simulation settings and parameters. In this post I won’t give a full picture of Git’s operation and capabilities, but will detail how I use Git with OpenFOAM cases.

    Why track changes?

    Setting up an OpenFOAM case can be a real headache, since there are so many settings and parameters to change. In my experience, it’s nice to be able to keep track of settings that worked, in case I change something and the simulation starts crashing. With Git it’s easy to “check out” the last configuration that worked, and also keep track of what changes had favorable or negative effects, using commit messages.


    Git has the ability to create branches, that is, versions of files that one may switch between and edit independently. This is useful for OpenFOAM cases, because sometimes it’s desirable to run multiple cases with some parameter variation. For example, one might want to run one simulation with a low Reynolds number mesh and one with a high Reynolds number mesh, or keep a case that can be run with different versions of OpenFOAM. By putting these files on different branches, one can switch between sets of parameters with a single checkout command.
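
    As a sketch of that workflow, two mesh configurations can live on separate branches (the file name and settings below are made up for the demo; in practice you would branch inside an existing case):

```shell
# Demo in a throwaway directory; in practice, run inside an existing case.
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit the baseline (low Reynolds number) mesh settings on one branch.
git checkout -qb low-re
echo "refinementLevel 3;" > mesh-settings   # hypothetical settings file
git add -A && git commit -qm "Low-Re mesh settings"

# Branch off and modify the settings for the high-Re mesh.
git checkout -qb high-re
echo "refinementLevel 6;" > mesh-settings
git commit -qam "High-Re mesh settings"

# A single checkout switches the working tree between configurations.
git checkout -q low-re
cat mesh-settings   # prints: refinementLevel 3;
```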

    Sharing, collaboration, and storage

    Git has become the most popular way to share and collaborate on code, through the website GitHub. Using GitHub, another user may “fork” a set of case files, make some changes, and submit a “pull request” back to the main repository. This can be much more convenient than simply sharing zipped up folders of entire cases, as changes are more visible and reversible. Putting case files in a remote repository (private repositories can be hosted for free on Bitbucket) is also a nice way to keep things backed up.

    How to set up a Git repository inside a case

    The first important step is to create a .gitignore file in the top level case directory. Inside this file, you tell Git the file names or name patterns to not track. For OpenFOAM simulations, it is probably undesirable to track the actual simulation results, since this will grow the size of the repository significantly every time the simulation results are committed, i.e., all committed results would be saved for all time. My sample .gitignore file is shown below:
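
    As a representative sketch (the patterns here are typical for OpenFOAM cases, not necessarily the exact original file), such a .gitignore might contain:

```
# numbered time directories (simulation results)
[0-9]*
[0-9]*.[0-9]*
# but keep the initial conditions directory
!0/
# decomposed-case directories
processor*
# solver and utility logs
log.*
*.log
# sampled/post-processed output
postProcessing/
```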


    Note that I also ignore application logs, though these may be valuable troubleshooting tools; for example, one could check out a previous commit and compare its log with a newer one. However, these logs can get big, so I leave general performance notes to the commit messages.

    After a proper .gitignore file has been created, a repository can be initialized by running git init in a terminal (inside the top level case directory). Running git status will then list all the files that are not yet tracked. To add all of these files and create the first commit, run

    git add -A
    git commit -m "Initial commit message goes here"

    Next, you may want to create a remote repository and set the remote location for the local repository. If using GitHub, create a new empty repository and follow GitHub’s instructions for pointing your local repository there.
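
    For example (the username and repository name below are placeholders, not a real address):

```shell
# Demo in a throwaway repository; in practice, run inside the case directory.
cd "$(mktemp -d)"
git init -q
# Point the local repository at the (hypothetical) GitHub repository.
git remote add origin https://github.com/username/my-openfoam-case.git
git remote get-url origin   # prints the URL just set
```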

    Next, “push” your local changes to the remote by running

    git push origin master

    This pushes the changes to the master branch of the remote repo.

    The case is now set to be tracked with Git. You can now continue on, and save your path towards, the perfect OpenFOAM case!

  • Diesel Explorer: Photo gallery

    Here’s an album of a bunch of photos taken during the project:

    Diesel Explorer
