Simplexcel 2.0.0

A couple of years ago, I created a simple .net Library to create Excel .xlsx sheets without the need to do any COM Interop or similar nonsense, so that it can be used on a web server.

I just pushed a major new version, Simplexcel 2.0.0, to NuGet. It now targets both .net Framework 4.5+ and .NET Standard 1.3+, which means it can also be used in cross-platform applications or ASP.net Core.

There are a few breaking changes, most notably the new Simplexcel.Color struct that is used instead of System.Drawing.Color, and the change of CompressionLevel from an enum to a bool, but in general, this should be a very simple upgrade. If you still need to target .net Framework 4 instead of 4.5+, stay on version 1.0.5.
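For reference, basic usage looks something like this – a sketch based on the project README, so double-check member names against the current documentation:

// Minimal sketch, based on the Simplexcel README
using Simplexcel;

var sheet = new Worksheet("Hello");
sheet.Cells[0, 0] = "Hello,";
sheet.Cells["B1"] = "World!";
sheet.Cells["B1"].Bold = true;

var workbook = new Workbook();
workbook.Add(sheet);
workbook.Save("Hello.xlsx");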

RAM and CPU Cycles are there to be used.

What is wrong with this picture?

Answer: The white areas at the top of the CPU and Memory graphs. These indicate that I spent money on something that I’m not using.

One complaint that is often brought forward is that certain applications (especially browsers) are “memory hogs”. As I’m writing this, Chrome uses 238.1 MB of RAM, and a separate Opera uses 129.8 MB. Oh my, remember when 4 MB were enough to run an entire operating system?

Now, here’s the thing about RAM and CPU cycles: I spend my (or someone else’s) hard-earned cash on them in order to speed up my computer use. That’s literally why they exist – to make stuff go faster, always.

Having 32 GB of RAM costs about $200. In the above screenshot, about $135 of those hard-earned dollars are just doing nothing. It’s like hiring (and paying) a full-time employee and only giving them 3 hours of work every day. That CPU? It’s quad-core, eight-thread, running at 2.6 billion cycles per second per core – that’s between 10 and 20 billion total cycles each second. And yet, it’s sitting at 16% in that screenshot. For a CPU that’s priced at $378, that’s $317 not being used.

There are priorities and trade-offs that everyone needs to make. In a Laptop, maybe you don’t want the CPU to be constantly close to 100%, because it drains the battery and the whirring cooling fan is annoying. But maybe you bought such a high end laptop specifically because you want to use that much CPU power and RAM. I certainly did.

My primary application is Visual Studio, which has been making some really good strides in 2017 to be faster. Find All References is pretty fast, Go To Types is pretty fast. Only “Find in Files” could be faster, because it still seems to hit the disk. The cost for that? Currently 520 MB of RAM usage. I’ll take that. In fact, if I could get more speed at the expense of more RAM, I’d take that as well. I would love for Visual Studio to find a way to reduce the 45-second build time – as you see in the graph, the CPU only briefly spikes. Why is it not at a constant 100% when I click the Build button? Is there a way to just keep everything needed to compile in RAM at all times? (And yes, I know about RAM disks, and I do have an SSD that does 3 GB/s – but the point is for applications to be more greedy.)

Two applications that I run that really make great use of my computer are SQL Server and Hyper-V. SQL Server is currently sitting at 3.5 GB and will grow to whatever it needs, and Hyper-V will use whatever it needs as well. Both applications also respect my limits if I set them.

But they don’t prematurely limit themselves. Some people are complaining about Spotify’s memory usage. Is that too much for a media player? Depends. I’m not using Spotify, but I use iTunes. Sometimes I just want to play a specific song or album, or just browse an artist to find something I’m in the mood for. Have you ever used an application where you scroll a long list and halfway through it lags because it has to load more data? Or where you search/filter and it takes a while to display the results? I find that infuriating. My music library is only ~16,000 tracks – can I please trade some RAM to make the times that I do interact with it as quick as possible? YMMV, but for me, spending 500 MB on a background app for it to be super-responsive every time I interact with it would be a great tradeoff. Maybe for you it’s different, but for me, iTunes does stay fast at the expense of my computer’s resources.

Yeah, some apps may take that too far, or misbehave in other ways, like trashing your SSD. Some apps use too much RAM because they’re coded inefficiently, or because there is an actual bug. It should always be a goal to reduce resource usage as much as possible.

But that should just be one of the goals. Another goal should be to maximize performance and productivity. And when your application sees a machine with 8, 16 or even 32 GB RAM, it should maybe ask itself if it should just use some of that for productivity reasons. I’d certainly be willing to trade some of that white space in my task manager for productivity. And when I do need it for Hyper-V or SQL Server, then other apps can start treating RAM like it’s some sort of scarce resource. Or when I want to be in battery-saver mode, prioritizing 8 hours of slow work over 4 hours of fast work.

But right now, giving a couple hundred megabytes to my web browsers and productivity apps is a great investment.

git rebase on pull, and squash on merge

Here are a few git settings that I prefer to set, now that I actually work on projects with other people and care about a useful history.

git config --global branch.master.mergeoptions "--squash"
This always squashes merges into master. I think that work big enough to require multiple commits should be done in a branch, then squash-merged into master. That way, master becomes a great overview of individual features rather than the nitty-gritty of all the back and forth of a feature.
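For illustration, a typical feature flow with that option set looks like this (branch and commit names invented):

git checkout -b feature/retract-flags
# ... several work-in-progress commits ...
git checkout master
git merge feature/retract-flags   # --squash is applied automatically
git commit -m "Allow retracting flags"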

git config --global pull.rebase true
This always rebases your local commits, or, in plain English: it always puts your local, unpushed commits at the top of the history when you pull changes from the server. I find this useful because if I’m working on something for a while, I can regularly pull in other people’s changes without fracturing my history. Yes, this is history-rewriting, but I care more for a useful history than a “pure” one.

Combined with git repository hosting (GitLab, GitHub Enterprise, etc.), I found that browsing the history is a really useful way to keep up with code changes (especially across timezones), provided that the history is actually useful.

When nslookup works but you can’t ping it, NetBIOS may be missing

I have a custom DNS Server (running Linux) and I also have a server running Linux (“MyServer”). The DNS Server has an entry in /etc/hosts for MyServer.

On my Windows machines, I can nslookup MyServer and get the IP back, but when I try to access the machine through ping or any of the services it offers, the name doesn’t resolve. Access via the IP Address works fine though.

What’s interesting is that if I add a dot at the end (ping MyServer.) then it suddenly works. What’s happening?!

What’s happening is that Windows doesn’t use DNS but NetBIOS for simple, single-label name resolution. nslookup always talks to the DNS server, but anything else doesn’t use DNS. The trailing dot turns MyServer. into a fully-qualified DNS name, which is why that version works.

The trick was to install Samba on MyServer, because it includes a NetBIOS server (nmbd). On Ubuntu 16.04, just running sudo apt-get install samba installs and auto-starts the service, and from that moment on, my Windows machines could access MyServer without issue.
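If you want to verify that the NetBIOS server is actually up, something like this should work (assuming systemd on the Ubuntu side; nbtstat is a standard Windows tool):

# On the server: check that nmbd is running
systemctl status nmbd

# On a Windows client: query the server's NetBIOS name table
nbtstat -a MyServer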

There are ways to not use NetBIOS, but I didn’t want to make changes on every Windows client (since I’m using a Domain), so this was the simplest solution I could find. I still needed entries in my DNS Server so that Mac OS X can resolve it.

Why Feature Toggles enable (more) frequent deployments

As Nick Craver explained in his blog posting about our deployment process, we deploy Stack Overflow to production 5-10 times a day. Apart from the surrounding tooling (automated builds, one-click deploys, etc.), one of the reasons this is possible is that the master branch rarely ever goes stale – we don’t feature-branch a lot. That makes for few merge nightmares, or scenarios where suddenly a huge feature gets dropped into the codebase all at once.

The thing that made the whole “commit early, commit often” principle click for me was how easy it is to add new feature toggles to Stack Overflow. Feature Toggles (or Feature Flags), as described by Martin Fowler, let the application use these toggles in order to decide whether or not to show the new feature.

The Stack Overflow code base contains a Site Settings class with (as of right now) 1302 individual settings. Some of these are slight behavior changes for different sites (all 150+ Q&A sites run off the same code base), but a lot of them are feature toggles. When the new IME Editor was built, I added another feature toggle to make it only active on a few sites. That way, any huge issue would’ve been localized to a few sites rather than breaking all Stack Exchange sites.

Feature toggles allow for a half-finished feature to live in master and to be deployed to production – in fact, I can intentionally do that if I want to test it with a limited group of users or have our community team try it before the feature gets released network-wide. (This is how the “cancel misclicked flags” feature was rolled out.) But most importantly, they allow for changes to constantly go live. If there are any unintended side-effects, we notice them faster and have an easier time locating them, as the relative changeset is small. Compare that to some massive merge that might introduce a whole bunch of issues all at once.

For feature toggles to work, it must be easy to add new ones. When you start out with a new project and want to add your first feature toggle, it may be tempting to just wire up that one toggle ad hoc, but as the code base grows bigger, having an easy mechanism really pays off. Let me show you how I add a new feature toggle to Stack Overflow:

[SiteSettings]
public partial class SiteSettings
{
    // ... other properties ...

    [DefaultValue(false)]
    [Description("Enable to allow users to retract their flags on Questions, Answers and Teams.")]
    [AvailableInJavascript]
    public bool AllowRetractingFlags { get; private set; }
}

When I recompile and run the application, I can go to a developer page and view/edit the setting:

(Screenshot: viewing/editing the RetractFlags setting on the developer page)

Anywhere in my code, I can gate code behind an if (SiteSettings.AllowRetractingFlags) check, and I can even use the setting in JavaScript if I decorate it with the [AvailableInJavascript] attribute (Néstor Soriano added that feature recently, and I wouldn’t want to work without it anymore).

Note what I did not have to do: I did not need to create any sort of admin UI, I did not need to write some CRUD logic to persist the setting in the database, I did not need to update some Javascript somewhere. All I had to do was to add a new property with some attributes to a class and recompile. What’s even better is that I can use other datatypes than bool – our code supports at least strings and ints as well, and it is possible to add custom logic to serialize/deserialize complex objects into a string. For example, my site setting can be a semi-colon separated list of ints that is entered as 1;4;63;543 on the site, but comes back as an int-array of [1,4,63,543] in both C# and JavaScript.
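To give an idea of what such a gate looks like in practice, here is a hypothetical sketch – only SiteSettings.AllowRetractingFlags comes from the example above; the controller and service names are invented:

// Hypothetical ASP.net MVC action; everything except
// SiteSettings.AllowRetractingFlags is invented for illustration.
public ActionResult RetractFlag(int flagId)
{
    if (!SiteSettings.AllowRetractingFlags)
        return HttpNotFound(); // feature is off: behave as if it doesn't exist

    flagService.Retract(flagId, currentUser);
    return Json(new { success = true });
}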

I wasn’t around when that feature was built and don’t know how much effort it took, but it was totally worth building it. If I don’t want a feature to be available, I just put it behind a setting without having to dedicate a bunch of time to wire up the management of the new setting.

Feature Toggles. Use them liberally, by making it easy to add new ones.

Handling IME events in JavaScript

Stack Overflow has been expanding past the English-speaking community for a while, and with the launch of both a Japanese version of Stack Overflow and a Japanese Language Stack Exchange (for English speakers interested in learning Japanese) we now have people using IME input regularly.

For those unfamiliar with IME (like I was a week ago), it’s an input method where you compose words with the help of the operating system:
(Animated clip: composing Japanese text, cycling through the IME suggestions list)
In this clip, I’m using the cursor keys to go up/down through the suggestions list, and I can use the Enter key to select a suggestion.

The problem here is that doing this actually sends keyup and keydown events, and so does pressing Enter. Interestingly enough, IME does not send keypress events. Since Enter also submits Comments on Stack Overflow, the issue was that selecting an IME suggestion also submits the comment, which was hugely disruptive when writing Japanese.

Browsers these days emit events for IME composition, which allows us to handle this properly. There are three events: compositionstart, compositionupdate and compositionend.

Of course, different browsers handle these events slightly differently (especially compositionupdate), and also behave differently in how they treat keyboard events.

  • Internet Explorer 11, Firefox and Safari emit a keyup event after compositionend
  • Chrome and Edge do not emit a keyup event after compositionend
  • Safari additionally emits a keydown event (event.which is 229)

So the fix is relatively simple: while you’re composing a word, Enter should not submit the form. The tricky part was really just to find out when you’re done composing, which requires swallowing the keyup event that follows compositionend on browsers that emit it, without requiring people on browsers that don’t emit the event to press Enter an additional time.

The code that I ended up writing uses two boolean variables to keep track if we’re currently composing, and if composition just ended. In the latter case, we swallow the next keyup event unless there’s a keydown event first, and only if that keydown event is not Safari’s 229. That’s a lot of if’s, but so far it seems to work as expected.

submitFormOnEnterPress: function ($form) {
    var $txt = $form.find('textarea');
    var isComposing = false; // IME Composing going on
    var hasCompositionJustEnded = false; // Used to swallow keyup event related to compositionend

    $txt.keyup(function(event) {
        if (isComposing || hasCompositionJustEnded) {
            // IME composing fires keydown/keyup events
            hasCompositionJustEnded = false;
            return;
        }

        if (event.which === 13) {
            $form.submit();
        }
    });

    $txt.on("compositionstart",
            function(event) {
                isComposing = true;
            })
        .on("compositionend",
            function(event) {
                isComposing = false;
                // some browsers (IE, Firefox, Safari) send a keyup event after
                //  compositionend, some (Chrome, Edge) don't. This is to swallow
                // the next keyup event, unless a keydown event happens first
                hasCompositionJustEnded = true;
            })
        .on("keydown",
            function(event) {
                // Safari on OS X may send a keydown of 229 after compositionend
                if (event.which !== 229) {
                    hasCompositionJustEnded = false;
                }
            });
},

Here’s a jsfiddle to see the keyboard events that are emitted.

.net Framework 4.6.2 adds support to sign XML Documents using RSA-SHA256

One of the hidden gems in the .net Framework is the System.Security.Cryptography.Xml.SignedXml class, which allows you to sign XML documents and to validate the signature of signed XML documents.

In the process of implementing both a SAML 2.0 Service Provider library and an Identity Provider, I found that RSA-SHA256 signatures are common, but not straightforward. Validating them is relatively easy: add a reference to System.Deployment and run this on app startup:

CryptoConfig.AddAlgorithm(
    typeof(RSAPKCS1SHA256SignatureDescription),
    "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256");

However, signing documents with an RSA-SHA256 private key yields a NotSupportedException when calling SignedXml.ComputeSignature(). It turns out that only .net Framework 4.6.2 adds support for the SHA2 family:

X509 Certificates Now Support FIPS 186-3 DSA

The .NET Framework 4.6.2 adds support for DSA (Digital Signature Algorithm) X509 certificates whose keys exceed the FIPS 186-2 limit of 1024-bit.

In addition to supporting the larger key sizes of FIPS 186-3, the .NET Framework 4.6.2 allows computing signatures with the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512). The FIPS 186-3 support is provided by the new DSACng class.

Keeping in line with recent changes to RSA (.NET Framework 4.6) and ECDsa (.NET Framework 4.6.1), the DSA abstract base class has additional methods to allow callers to make use of this functionality without casting.

After updating my system to the 4.6.2 preview, signing XML documents works flawlessly:

// exported is a byte[] that contains an exported cert incl. private key
var myCert = new X509Certificate2(exported);
var certPrivateKey = myCert.GetRSAPrivateKey();

var doc = new XmlDocument();
doc.LoadXml("<root><test1>Foo</test1><test2><bar baz=\"boom\">Real?</bar></test2></root>");

var signedXml = new SignedXml(doc);
signedXml.SigningKey = certPrivateKey;

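// An empty Reference Uri means the signature covers the entire document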
Reference reference = new Reference();
reference.Uri = "";
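// The enveloped-signature transform excludes the Signature element itself from the digest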
XmlDsigEnvelopedSignatureTransform env = new XmlDsigEnvelopedSignatureTransform();
reference.AddTransform(env);
signedXml.AddReference(reference);

signedXml.ComputeSignature();
XmlElement xmlDigitalSignature = signedXml.GetXml();
doc.DocumentElement.AppendChild(doc.ImportNode(xmlDigitalSignature, true));

// doc is now a Signed XML document

Building a NAS with OpenBSD

Over a recent long weekend, I decided to build a small NAS for home use, mainly to have some of my data backed up and to have an archive of old stuff I don’t need all the time. Both of my laptops have 256 GB SSDs, and while that’s usually enough, it’s good to have some extra headroom sitting around.

The idea was to:

  • Have a place to backup my stuff
  • Have a machine that can do BitTorrent downloads on its own
  • Have a machine that allows me to access big files from multiple other PCs
  • Have a machine that works as a local git server

The Hardware

I bought the motherboard and case a few years ago for something else, so I think better options are available now.

The desired setup:

  • Use the 128 GB SSD as the boot drive – because it’s mSATA it fits directly on the motherboard, and doesn’t take up space for mounting drives
  • Use the two 2.5″ 1 TB drives as a RAID 1 – that way, I’m protected against hard drive failure. Do note that RAID 1 is more an availability than a safety thing, because viruses or accidental deletion of files isn’t something a RAID can help with
  • Use the one 3.5″ 3 TB drive as a big store for non-critical stuff, like backups of my Steam games or temporary BitTorrent files

The case doesn’t have much space for drives, even though the motherboard has plenty of S-ATA ports.

For the operating system, I went with OpenBSD 5.7 x64. I prefer OpenBSD’s very minimalistic approach of offering a tiny base system and then allowing me to add exactly the pieces of software that I need. I’m not going to give a full rundown of how OpenBSD works, because if you’re really interested, you should definitely read Absolute OpenBSD.

Basic System Setup

Do set up a user during installation – in my case, I called him User.

My 128 GB SSD is partitioned as follows:

#                size           offset  fstype [fsize bsize  cpg]
  a:             2.0G               64  4.2BSD   2048 16384    1 # /
  b:             8.2G          4209024    swap                   # none
  c:           119.2G                0  unused                   
  d:             4.0G         21398592  4.2BSD   2048 16384    1 # /tmp
  e:            15.0G         29800544  4.2BSD   2048 16384    1 # /var
  f:             8.0G         61255840  4.2BSD   2048 16384    1 # /usr
  g:             2.0G         78027680  4.2BSD   2048 16384    1 # /usr/X11R6
  h:            15.0G         82220640  4.2BSD   2048 16384    1 # /usr/local
  i:             3.0G        113675936  4.2BSD   2048 16384    1 # /usr/src
  j:             3.0G        119957344  4.2BSD   2048 16384    1 # /usr/obj
  k:            59.0G        126238752  4.2BSD   2048 16384    1 # /home

The best setup varies by preference, of course; in my case, I stuck mostly to the OpenBSD defaults and only gave /usr/src and /usr/obj some extra space.

After the system boots up for the first time, add powerdown=YES to /etc/rc.shutdown. This turns off the machine when shutdown -h now is called. Do note that halt doesn’t seem to respect that, and needs to be invoked with halt -p. To my delight, pushing the power button on the case turns off the machine properly – hooray for working ACPI support!

The first thing before installing any software should be to follow -stable, recompiling the kernel, userland, and xenocara.

# cd /usr
# export CVSROOT=anoncvs@anoncvs.usa.openbsd.org:/cvs
# cvs -d$CVSROOT checkout -rOPENBSD_5_7 -P src ports xenocara

# cd /usr/src/sys/arch/amd64/conf
# config GENERIC.MP
# cd ../compile/GENERIC.MP
# make clean && make
# make install
# reboot

# rm -rf /usr/obj/*
# cd /usr/src
# make obj
# cd /usr/src/etc && env DESTDIR=/ make distrib-dirs
# cd /usr/src
# make build
# cd /usr/xenocara
# rm -rf /usr/xobj/*
# make bootstrap
# make obj
# make build
# reboot

This takes a long time, over an hour on this machine. After that, it’s time to set up packages.

Add FETCH_PACKAGES=yes to /etc/mk.conf, and export PKG_PATH=ftp://ftp5.usa.openbsd.org/pub/OpenBSD/5.7/packages/amd64/ to begin installing packages.

The OpenBSD packages and ports system is a bit interesting, because it seems that packages are built only once when a new OpenBSD version is released, and then never updated. You have to manually compile newer versions of software. That’s not that big of a deal, because with FETCH_PACKAGES enabled, the system will fetch packages if they are still the correct version and only build ports where needed.

Setting up the data drives, incl. RAID 1

I decided that my data drives should live under /var/netshared, so I created that directory and two subdirectories – data and glacier. I will set permissions later.

I have 2x 1 TB hard drives, out of which I want to build a RAID 1. First, set up disklabels for both drives (disklabel -E sd0, then sd1), making sure that the partition type is RAID instead of the default 4.2BSD.

OpenBSD area: 0-1953525168; size: 931.5G; free: 0.0G
#                size           offset  fstype [fsize bsize  cpg]
  a:           931.5G                0    RAID                   
  c:           931.5G                0  unused

Then, run bioctl -c 1 -l sd0a,sd1a softraid0 to create the RAID. The -c 1 flag sets the RAID level (RAID 1 = mirroring), and -l (lowercase L) is a list of partitions that form the raid. The softraid0 at the end is an internal identifier – it must start with softraid. bioctl will then create a new device that will appear like a hard drive and can be used as such.

The actual device will be something like /dev/sd4. You need to run disklabel on the new device to create a partition, this time of the usual 4.2BSD type. In order to add it to /etc/fstab, you need to get the DUID, which you can find by running disklabel sd4:

# /dev/rsd4c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: cc029b4fe2ac54dd

(Note that using DUIDs in fstab is optional, but I highly recommend it, as it makes you independent of device name changes as long as the actual drive is the same.)

Remember to run newfs /dev/sd4a to create a file system. OpenBSD will pick FFS for drives smaller than 1 TB, and FFS2 for drives bigger than 1 TB. Check man newfs for options.

Here’s how my fstab looks:

e8bd5e30aba4f036.b none swap sw
e8bd5e30aba4f036.a / ffs rw 1 1
e8bd5e30aba4f036.k /home ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.d /tmp ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.f /usr ffs rw,nodev 1 2
e8bd5e30aba4f036.g /usr/X11R6 ffs rw,nodev 1 2
e8bd5e30aba4f036.h /usr/local ffs rw,nodev 1 2
e8bd5e30aba4f036.j /usr/obj ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.i /usr/src ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.e /var ffs rw,nodev,nosuid 1 2
cc029b4fe2ac54dd.a /var/netshared/data ffs rw,nodev,nosuid,noexec,noatime 1 2
f4540651dabd448d.a /var/netshared/glacier ffs rw,nodev,nosuid,noexec,noatime 1 2

Notice the nosuid,noexec,noatime,nodev flags on the two data drives. This is just some precaution against malicious files, and noatime is just to reduce disk wear by a tiny fraction. Check the manpage of mount for more information.

Setting up a user

During the OpenBSD setup, a user should’ve been set up. If you decided not to, use useradd to create one now.

Create a group for access to the shared directories: groupadd netshared

Add the user to that group: usermod -G netshared User

Change owner and permissions:

chown -R User:netshared /var/netshared/* 
chmod -R 0770 /var/netshared/*

Note that the execute bit is required to traverse directories, so chmod 0660 wouldn’t work as a permission mask. Since the file systems are mounted noexec, the execute bit on files doesn’t matter anyway.

Installing Samba

Start by installing the samba port:

# cd /usr/ports/net/samba
# make install

Then, configure samba (thanks Pierre-Philipp Braun for the tip with sed):

cd /etc/samba/
mv smb.conf smb.conf.dist
sed '/^#/d; /^;/d; /^$/d;' smb.conf.dist > smb.conf
vi smb.conf

Here’s my smb.conf:

[global]
   workgroup = WORKGROUP
   server string = Samba Server
   security = user
   load printers = no
   log file = /var/log/samba/smbd.%m
   max log size = 50
   dns proxy = no
   printing = BSD
   unix extensions = no
   allow insecure wide links = no
[data]
   path = /var/netshared/data
   valid users = User
   writable = yes
   printable = no
[glacier]
   path = /var/netshared/glacier
   valid users = User
   writable = yes
   printable = no

If you want to give access to groups instead of individual users, prefix with an @-sign: valid users = @netshared

The manpage – man smb.conf – is very extensive. If you want to fine-tune permissions, take the time to browse through it.

To start samba on system startup, add this to /etc/rc.conf.local:

pkg_scripts="samba"
samba_flags=""

This should be it – start samba through /etc/rc.d/samba start and try accessing your new file shares!

Using the server as a git server

This isn’t really NAS-specific, but git-specific. If you want to install git on the server, cd /usr/ports/devel/git and make install.

Create or clone a bare repository on the NAS:

cd /var/netshared/data
mkdir myrepo.git
cd myrepo.git
git init --bare

Or clone an existing repository as a bare clone:

cd /var/netshared/data
git clone --bare https://github.com/mstum/faml.git

Then, on your machines, clone from that repository:
git clone \\nas\data\faml.git

This will automatically set up an origin remote on your local clone, so any changes you make on your laptop can be pushed to the server through git push.

Setting up a BitTorrent client

Install the port of transmission:

cd /usr/ports/net/transmission
make install

This will automatically create a _transmission user – add it to the netshared group:
usermod -G netshared _transmission

Create folders for BitTorrent:

mkdir /var/netshared/glacier/BitTorrent
mkdir /var/netshared/glacier/BitTorrent/incomplete
mkdir /var/netshared/glacier/BitTorrent/complete
mkdir /var/netshared/glacier/BitTorrent/watch
chown -R User:netshared /var/netshared/glacier/BitTorrent

Edit the /var/transmission/.config/transmission-daemon/settings.json file (if it doesn’t exist, run /etc/rc.d/transmission_daemon start and then stop it – changes to the file will be lost if you edit it while the daemon is running).
Important settings/changes:

"download-dir": "/var/netshared/glacier/BitTorrent/complete",
"incomplete-dir": "/var/netshared/glacier/BitTorrent/incomplete",
"incomplete-dir-enabled": true,
"rpc-whitelist": "127.0.0.1,192.168.1.*",
"rpc-whitelist-enabled": true,
"watch-dir": "/var/netshared/glacier/BitTorrent/watch",
"watch-dir-enabled": true

These settings make it so that any .torrent you drop into the watch directory immediately gets added and started. Downloads go into the incomplete directory while they are downloading, and are then moved to the complete directory afterwards.

rpc-whitelist is a comma-separated list of IPs that can remotely control transmission, so this should be limited to your local network. You can access the web UI on http://nas:9091/transmission/web which is pretty neat.

To auto-start transmission, edit your /etc/rc.conf.local and add transmission_daemon to the pkg_scripts. I recommend starting it before samba, so that samba gets shutdown before transmission. (OpenBSD stops services in the reverse order of startup).

Keeping up to date

Keeping OpenBSD up to date is described in the -stable guide linked above. Basically: CVS update all of src, ports, and xenocara if needed, then recompile and reboot.

To check if your ports are up to date, you can run /usr/ports/infrastructure/bin/out-of-date, then cd into any outdated port and run make update.
Note that if you’ve installed a package, it’s safe to update it through make update in the ports directory – packages are really just precompiled ports, no “magic”.

Closing Remarks

This was really just a howto of how my NAS is currently set up, aimed at people who already know OpenBSD. If you’re curious about a *NIX server and don’t mind spending some time learning the system, I can highly recommend OpenBSD. The system is minimalistic – there are not many moving parts by default – and really invites you to understand stuff properly.

If you have a more sophisticated NAS setup, you may want to look at FreeNAS as well. Do note that the 8 GB minimum RAM requirement is not a joke – FreeNAS will install and seemingly run on 4 or even 2 GB, but random data loss is almost guaranteed to occur.

configSource only works on sections, not sectionGroups

I have an app.config with some custom sectionGroups:

<configSections>
	<sectionGroup name="MyApp">
		<section name="foo" type="System.Configuration.NameValueSectionHandler, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
	</sectionGroup>
</configSections>
<MyApp>
	<foo>
		<add key="MySetting" value="14"></add>
	</foo>
</MyApp>

I wanted to externalize that:
<MyApp configSource="myapp.config">

This yields an error:

System.Configuration.ConfigurationErrorsException: The attribute 'configSource' cannot be specified because its name starts with the reserved prefix 'config' or 'lock'.

Long story short: configSource only works on <section> elements, not on <sectionGroup>.
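What does work is putting configSource on the section itself:

<MyApp>
	<foo configSource="foo.config" />
</MyApp>

with foo.config containing the section as its root element:

<foo>
	<add key="MySetting" value="14" />
</foo>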

OS X Screen Recording and Converting to GIFs with free tools

One of the lesser-known features of newer versions of QuickTime (at least on OS X) is the ability to record videos (arguably, “QuickTime Player” is misleading as a name), either from a connected camera or from the screen. Click File > New Screen Recording to bring up the recorder. If you want, select “Show Mouse Clicks in Recording”.

After you’re done recording, you can do some trimming right in QuickTime as well – Edit > Trim.
Now you have a QuickTime file – great, but the point is to create an animated GIF from it. For that, we’ll use two free tools: ffmpeg and gifsicle. Since we’re on OS X, homebrew will do the heavy lifting for us.

brew install ffmpeg
brew install gifsicle

With both installed, we can now convert the video:
ffmpeg -i MyRecording.mov -r 10 -f gif - | gifsicle --optimize=3 --delay=3 > MyRecording.gif
Since I want to do this often, I’ve added a shell command to my .zshrc:

function movtogif {
  if [[ $# = 0 ]]; then
    echo "USAGE: movtogif filename.mov"
  else
    # ${1:r} strips the extension, so Recording.mov becomes Recording.gif
    ffmpeg -i "$1" -r 10 -f gif - | gifsicle --optimize=3 --delay=3 > "${1:r}.gif"
  fi
}

For reference, the :r modifier takes the file path without its extension. See the manpages of ffmpeg and gifsicle for more information about the parameters.

Thanks to Alex Dergachev for the original idea.