git rebase on pull, and squash on merge

Here are a few git settings that I prefer to set, now that I actually work on projects with other people and care about a useful history.

git config --global branch.master.mergeoptions "--squash"
This always squashes merges into master. I think that work big enough to require multiple commits should be done in a branch, then squash-merged into master. That way, master becomes a great overview of individual features rather than the nitty-gritty back and forth within each feature.
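With that setting, a merge into master stages the squashed changes but doesn’t commit them, so the flow looks roughly like this (branch name hypothetical):

git checkout master
git merge feature/retract-flags   # becomes a squash merge due to the setting above
git commit -m "Add flag retraction"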

git config --global pull.rebase true
This always rebases your local commits, or, in plain English: it always puts your local, unpushed commits on top of the history when you pull changes from the server. I find this useful because if I’m working on something for a while, I can regularly pull in other people’s changes without fracturing my history. Yes, this is history-rewriting, but I care more about a useful history than a “pure” one.
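If you don’t want to set this globally, the same behavior is available per invocation:

git pull --rebase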

Combined with git repository hosting (GitLab, GitHub Enterprise, etc.), I found that browsing the history is a really useful way to keep up with code changes (especially across timezones), provided that the history is actually useful.

When nslookup works but you can’t ping it, NetBIOS may be missing

I have a custom DNS server (running Linux), and I also have another server running Linux (“MyServer”). The DNS server has an entry in /etc/hosts for MyServer.

On my Windows machines, I can nslookup MyServer and get the IP back, but when I try to access the machine through ping or any of the services it offers, the name doesn’t resolve. Access via the IP address works fine, though.

What’s interesting is that if I add a dot at the end (ping MyServer.) then it suddenly works. What’s happening?!

What’s happening is that Windows doesn’t use DNS for simple name resolution but NetBIOS. nslookup talks directly to the DNS server, but pretty much everything else doesn’t use DNS. (The trailing dot marks the name as a fully qualified DNS name, which is why ping MyServer. bypasses NetBIOS and works.)

The trick was to install Samba on MyServer, because it includes a NetBIOS server (nmbd). On Ubuntu 16.04, just running sudo apt-get install samba installs and auto-starts the service, and from that moment on my Windows machines could access it without issue.
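To verify name resolution on both ends, these two commands should then succeed (using the host name from above; nmblookup comes with Samba, nbtstat is built into Windows):

# on the Linux server
nmblookup MyServer

# on a Windows client
nbtstat -a MyServer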

There are ways to not use NetBIOS, but I didn’t want to make changes on every Windows client (since I’m using a Domain), so this was the simplest solution I could find. I still needed entries in my DNS Server so that Mac OS X can resolve it.

Why Feature Toggles enable (more) frequent deployments

As Nick Craver explained in his blog post about our deployment process, we deploy Stack Overflow to production 5-10 times a day. Apart from the surrounding tooling (automated builds, one-click deploys, etc.), one of the reasons this is possible is that the master branch rarely goes stale – we don’t use feature branches a lot. That makes for few merge nightmares or scenarios where a huge feature suddenly gets dropped into the codebase all at once.

The thing that made the whole “commit early, commit often” principle click for me was how easy it is to add new feature toggles to Stack Overflow. Feature Toggles (or Feature Flags), as described by Martin Fowler, make the application use these toggles to decide whether or not to show a new feature.

The Stack Overflow code base contains a Site Settings class with (as of right now) 1302 individual settings. Some of these are slight behavior changes for different sites (all 150+ Q&A sites run off the same code base), but a lot of them are feature toggles. When the new IME Editor was built, I added another feature toggle to make it only active on a few sites. That way, any huge issue would’ve been localized to a few sites rather than breaking all Stack Exchange sites.

Feature toggles allow for a half-finished feature to live in master and to be deployed to production – in fact, I can intentionally do that if I want to test it with a limited group of users or have our community team try it before the feature gets released network-wide. (This is how the “cancel misclicked flags” feature was rolled out.) But most importantly, it allows for changes to constantly go live. If there are any unintended side effects, we notice them faster and have an easier time locating them, as the relevant changeset is small. Compare that to some massive merge that might introduce a whole bunch of issues all at once.

For feature toggles to work, it must be easy to add new ones. When you start out with a new project and want to add your first feature toggle, it may be tempting to just hard-code that one toggle, but as the code base grows bigger, having an easy mechanism really pays off. Let me show you how I add a new feature toggle to Stack Overflow:

[SiteSettings]
public partial class SiteSettings
{
    // ... other properties ...

    [DefaultValue(false)]
    [Description("Enable to allow users to retract their flags on Questions, Answers and Teams.")]
    [AvailableInJavascript]
    public bool AllowRetractingFlags { get; private set; }
}

When I recompile and run the application, I can go to a developer page and view/edit the setting:

[Screenshot: the AllowRetractingFlags setting on the developer page]

Anywhere in my code, I can gate code behind an if (SiteSettings.AllowRetractingFlags) check, and I can even use that in JavaScript if I decorate it with the [AvailableInJavascript] attribute (Néstor Soriano added that feature recently, and I wouldn’t want to miss it anymore).

Note what I did not have to do: I did not need to create any sort of admin UI, I did not need to write CRUD logic to persist the setting in the database, and I did not need to update some JavaScript somewhere. All I had to do was add a new property with some attributes to a class and recompile. What’s even better is that I can use data types other than bool – our code supports at least strings and ints as well, and it is possible to add custom logic to serialize/deserialize complex objects into a string. For example, my site setting can be a semicolon-separated list of ints that is entered as 1;4;63;543 on the site, but comes back as an int-array of [1,4,63,543] in both C# and JavaScript.
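I can’t show the actual serialization code, but a minimal sketch of such a semicolon-string to int-array conversion could look like this (class and method names are hypothetical, not the real Stack Overflow code):

using System;
using System.Linq;

public static class IntListSerializer
{
    // "1;4;63;543" -> [1, 4, 63, 543]
    public static int[] Deserialize(string raw) =>
        string.IsNullOrWhiteSpace(raw)
            ? Array.Empty<int>()
            : raw.Split(';').Select(int.Parse).ToArray();

    // [1, 4, 63, 543] -> "1;4;63;543"
    public static string Serialize(int[] values) =>
        string.Join(";", values);
}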

I wasn’t around when that feature was built and don’t know how much effort it took, but it was totally worth building it. If I don’t want a feature to be available, I just put it behind a setting without having to dedicate a bunch of time to wire up the management of the new setting.

Feature Toggles. Use them liberally, by making it easy to add new ones.

Handling IME events in JavaScript

Stack Overflow has been expanding past the English-speaking community for a while, and with the launch of both a Japanese version of Stack Overflow and a Japanese Language Stack Exchange (for English speakers interested in learning Japanese) we now have people using IME input regularly.

For those unfamiliar with IME (like I was a week ago), it’s an input method where you compose words with the help of the operating system:
[Animation: composing Japanese text with an IME]
In this clip, I’m using the cursor keys to go up/down through the suggestions list, and I can use the Enter key to select a suggestion.

The problem here is that doing this actually sends keyup and keydown events, and so does pressing Enter. Interestingly enough, IME does not send keypress events. Since Enter also submits comments on Stack Overflow, the issue was that selecting an IME suggestion also submitted the comment, which was hugely disruptive when writing Japanese.

Browsers these days emit events for IME composition, which allows us to handle this properly. There are three events: compositionstart, compositionupdate and compositionend.

Of course, different browsers handle these events slightly differently (especially compositionupdate), and also behave differently in how they treat keyboard events.

  • Internet Explorer 11, Firefox and Safari emit a keyup event after compositionend
  • Chrome and Edge do not emit a keyup event after compositionend
  • Safari additionally emits a keydown event (event.which is 229)

So the fix is relatively simple: while you’re composing a word, Enter should not submit the form. The tricky part was really just finding out when composition is done, which requires swallowing the keyup event that follows compositionend on browsers that emit it, without requiring people on browsers that don’t emit it to press Enter an additional time.

The code that I ended up writing uses two boolean variables to keep track of whether we’re currently composing, and whether composition just ended. In the latter case, we swallow the next keyup event unless there’s a keydown event first, and only if that keydown event is not Safari’s 229. That’s a lot of ifs, but so far it seems to work as expected.

submitFormOnEnterPress: function ($form) {
    var $txt = $form.find('textarea');
    var isComposing = false; // IME Composing going on
    var hasCompositionJustEnded = false; // Used to swallow keyup event related to compositionend

    $txt.keyup(function(event) {
        if (isComposing || hasCompositionJustEnded) {
            // IME composing fires keydown/keyup events
            hasCompositionJustEnded = false;
            return;
        }

        if (event.which === 13) {
            $form.submit();
        }
    });

    $txt.on("compositionstart",
            function(event) {
                isComposing = true;
            })
        .on("compositionend",
            function(event) {
                isComposing = false;
                // some browsers (IE, Firefox, Safari) send a keyup event after
                //  compositionend, some (Chrome, Edge) don't. This is to swallow
                // the next keyup event, unless a keydown event happens first
                hasCompositionJustEnded = true;
            })
        .on("keydown",
            function(event) {
                // Safari on OS X may send a keydown of 229 after compositionend
                if (event.which !== 229) {
                    hasCompositionJustEnded = false;
                }
            });
},

Here’s a jsfiddle to see the keyboard events that are emitted.

.NET Framework 4.6.2 adds support for signing XML documents using RSA-SHA256

One of the hidden gems in the .NET Framework is the System.Security.Cryptography.Xml.SignedXml class, which allows you to sign XML documents and to validate the signature of signed XML documents.

In the process of implementing both a SAML 2.0 Service Provider library and an Identity Provider, I found that RSA-SHA256 signatures are common, but not straightforward. Validating them is relatively easy: add a reference to System.Deployment and run this on app startup:

CryptoConfig.AddAlgorithm(
    typeof(RSAPKCS1SHA256SignatureDescription),
    "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256");

However, signing documents with RSA-SHA256 yields a NotSupportedException when calling SignedXml.ComputeSignature(). It turns out that support for the SHA-2 family is only added in .NET Framework 4.6.2:

X509 Certificates Now Support FIPS 186-3 DSA

The .NET Framework 4.6.2 adds support for DSA (Digital Signature Algorithm) X509 certificates whose keys exceed the FIPS 186-2 limit of 1024-bit.

In addition to supporting the larger key sizes of FIPS 186-3, the .NET Framework 4.6.2 allows computing signatures with the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512). The FIPS 186-3 support is provided by the new DSACng class.

Keeping in line with recent changes to RSA (.NET Framework 4.6) and ECDsa (.NET Framework 4.6.1), the DSA abstract base class has additional methods to allow callers to make use of this functionality without casting.

After updating my system to the 4.6.2 preview, signing XML documents works flawlessly:

using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

// exported is a byte[] that contains an exported cert incl. private key
var myCert = new X509Certificate2(exported);
var certPrivateKey = myCert.GetRSAPrivateKey();

var doc = new XmlDocument();
doc.LoadXml("<root><test1>Foo</test1><test2><bar baz=\"boom\">Real?</bar></test2></root>");

var signedXml = new SignedXml(doc);
signedXml.SigningKey = certPrivateKey;

Reference reference = new Reference();
reference.Uri = "";
XmlDsigEnvelopedSignatureTransform env = new XmlDsigEnvelopedSignatureTransform();
reference.AddTransform(env);
signedXml.AddReference(reference);

signedXml.ComputeSignature();
XmlElement xmlDigitalSignature = signedXml.GetXml();
doc.DocumentElement.AppendChild(doc.ImportNode(xmlDigitalSignature, true));

// doc is now a Signed XML document

Building a NAS with OpenBSD

Over a recent long weekend, I’ve decided to build a small NAS for home use, mainly to have some of my data backed up and to have an archive of old stuff I don’t need all the time. Both of my Laptops have 256 GB SSDs, and while that’s usually enough, it’s good to have some extra headroom sitting around.

The idea was to:

  • Have a place to backup my stuff
  • Have a machine that can do BitTorrent downloads on its own
  • Have a machine that allows me to access big files from multiple other PCs
  • Have a machine that works as a local git server

The Hardware

I bought the motherboard and case a few years ago for something else, so I think better options are available now.

The desired setup:

  • Use the 128 GB SSD as the boot drive – because it’s mSATA it fits directly on the motherboard, and doesn’t take up space for mounting drives
  • Use the two 2.5″ 1 TB drives as a RAID 1 – that way, I’m protected against hard drive failure. Do note that RAID 1 is more of an availability than a safety thing, because a RAID can’t help against viruses or accidental deletion of files
  • Use the one 3.5″ 3 TB drive as a big store for non-critical stuff, like backups of my Steam games or temporary BitTorrent files

The case doesn’t have much space for drives, even though the motherboard has plenty of S-ATA ports.

For the operating system, I went with OpenBSD 5.7 x64. I prefer OpenBSD’s very minimalistic approach of offering a tiny base system and then allowing me to add exactly the pieces of software that I need. I’m not going to give a full rundown of how OpenBSD works, because if you’re really interested you should definitely read Absolute OpenBSD.

Basic System Setup

Do set up a user during installation – in my case, I called him User.

My 128 GB SSD is partitioned as follows:

#                size           offset  fstype [fsize bsize  cpg]
  a:             2.0G               64  4.2BSD   2048 16384    1 # /
  b:             8.2G          4209024    swap                   # none
  c:           119.2G                0  unused                   
  d:             4.0G         21398592  4.2BSD   2048 16384    1 # /tmp
  e:            15.0G         29800544  4.2BSD   2048 16384    1 # /var
  f:             8.0G         61255840  4.2BSD   2048 16384    1 # /usr
  g:             2.0G         78027680  4.2BSD   2048 16384    1 # /usr/X11R6
  h:            15.0G         82220640  4.2BSD   2048 16384    1 # /usr/local
  i:             3.0G        113675936  4.2BSD   2048 16384    1 # /usr/src
  j:             3.0G        119957344  4.2BSD   2048 16384    1 # /usr/obj
  k:            59.0G        126238752  4.2BSD   2048 16384    1 # /home

The best setup varies with preference, of course; in my case I stuck mostly to the OpenBSD defaults and only gave /usr/src and /usr/obj some extra space.

After the system boots up for the first time, add powerdown=YES to /etc/rc.shutdown. This turns off the machine when shutdown -h now is called. Do note that halt doesn’t seem to respect that, and needs to be invoked with halt -p. To my delight, pushing the power button on the case turns off the machine properly – hooray for working ACPI support!
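For reference, that’s a one-liner:

# echo 'powerdown=YES' >> /etc/rc.shutdown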

The first thing before installing any software should be to follow -stable, recompiling the kernel, userland, and xenocara.

# cd /usr
# export CVSROOT=anoncvs@anoncvs.usa.openbsd.org:/cvs
# cvs -d$CVSROOT checkout -rOPENBSD_5_7 -P src ports xenocara

# cd /usr/src/sys/arch/amd64/conf
# config GENERIC.MP
# cd ../compile/GENERIC.MP
# make clean && make
# make install
# reboot

# rm -rf /usr/obj/*
# cd /usr/src
# make obj
# cd /usr/src/etc && env DESTDIR=/ make distrib-dirs
# cd /usr/src
# make build
# cd /usr/xenocara
# rm -rf /usr/xobj/*
# make bootstrap
# make obj
# make build
# reboot

This takes a long time – over an hour on this machine. After that, it’s time to set up packages.

Add FETCH_PACKAGES=yes to /etc/mk.conf, and export PKG_PATH=ftp://ftp5.usa.openbsd.org/pub/OpenBSD/5.7/packages/amd64/ to begin installing packages.

The OpenBSD packages and ports system is a bit interesting, because it seems that packages are built only once when a new OpenBSD version is released, and then never updated. You have to manually compile newer versions of software. That’s not that big of a deal, because with FETCH_PACKAGES enabled, the system will fetch packages if they are still the correct version and only build ports where needed.
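With PKG_PATH set, installing a package is a one-liner (rsync is just an arbitrary example):

# pkg_add rsync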

Setting up the data drives, incl. RAID 1

I decided that my data drives should live under /var/netshared, so I created this and two subdirectories – data and glacier. I will set permissions later.

I have 2x 1 TB hard drives from which I want to build a RAID 1. First, set up disklabels for both drives (disklabel -E sd0, then sd1), making sure that the partition type is RAID instead of the default 4.2BSD.

OpenBSD area: 0-1953525168; size: 931.5G; free: 0.0G
#                size           offset  fstype [fsize bsize  cpg]
  a:           931.5G                0    RAID                   
  c:           931.5G                0  unused

Then, run bioctl -c 1 -l sd0a,sd1a softraid0 to create the RAID. The -c 1 flag sets the RAID level (RAID 1 = mirroring), and -l (lowercase L) is a list of partitions that form the RAID. The softraid0 at the end is an internal identifier – it must start with softraid. bioctl will then create a new device that appears like a hard drive and can be used as such.

The actual device will be something like /dev/sd4. You need to run disklabel on the new device to create a partition, this time of the usual 4.2BSD type. In order to add it to /etc/fstab, you need to get the duid, which you can get by running disklabel sd4:

# /dev/rsd4c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: cc029b4fe2ac54dd

(Note that using DUIDs in fstab is optional, but I highly recommend it, as it makes you independent of device name changes as long as the actual drive is the same.)
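To check the state of the mirror later on (e.g., whether both drives are online), you can query the softraid device:

# bioctl softraid0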

Remember to run newfs /dev/sd4a to create a file system. OpenBSD will pick FFS for drives smaller than 1 TB, and FFS2 for drives bigger than 1 TB. Check man newfs for options.

Here’s how my fstab looks:

e8bd5e30aba4f036.b none swap sw
e8bd5e30aba4f036.a / ffs rw 1 1
e8bd5e30aba4f036.k /home ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.d /tmp ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.f /usr ffs rw,nodev 1 2
e8bd5e30aba4f036.g /usr/X11R6 ffs rw,nodev 1 2
e8bd5e30aba4f036.h /usr/local ffs rw,nodev 1 2
e8bd5e30aba4f036.j /usr/obj ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.i /usr/src ffs rw,nodev,nosuid 1 2
e8bd5e30aba4f036.e /var ffs rw,nodev,nosuid 1 2
cc029b4fe2ac54dd.a /var/netshared/data ffs rw,nodev,nosuid,noexec,noatime 1 2
f4540651dabd448d.a /var/netshared/glacier ffs rw,nodev,nosuid,noexec,noatime 1 2

Notice the nosuid,noexec,noatime,nodev flags on the two data drives. This is just some precaution against malicious files, and noatime is just to reduce disk wear by a tiny fraction. Check the manpage of mount for more information.

Setting up a user

During the OpenBSD setup, a user should’ve been set up. If you decided not to, use useradd to create one now.

Create a group for access to the shared directories: groupadd netshared

Add the user to that group: user mod -G netshared User

Change owner and permissions:

chown -R User:netshared /var/netshared/* 
chmod -R 0770 /var/netshared/*

Note that the execute bit is required to traverse directories, so chmod 0660 wouldn’t work as a permission mask. Since the file systems are mounted noexec, the execute bit on files doesn’t matter anyway.

Installing Samba

Start by installing the samba port:

# cd /usr/ports/net/samba
# make install

Then, configure samba (thanks Pierre-Philipp Braun for the tip with sed):

cd /etc/samba/
mv smb.conf smb.conf.dist
sed '/^#/d; /^;/d; /^$/d;' smb.conf.dist > smb.conf
vi smb.conf

Here’s my smb.conf:

[global]
   workgroup = WORKGROUP
   server string = Samba Server
   security = user
   load printers = no
   log file = /var/log/samba/smbd.%m
   max log size = 50
   dns proxy = no
   printing = BSD
   unix extensions = no
   allow insecure wide links = no
[data]
   path = /var/netshared/data
   valid users = User
   writable = yes
   printable = no
[glacier]
   path = /var/netshared/glacier
   valid users = User
   writable = yes
   printable = no

If you want to give access to groups instead of individual users, prefix with an @-sign: valid users = @netshared

The manpage – man smb.conf – is very extensive. If you want to finetune permissions, take the time to browse through it.

To start samba on system startup, add this to /etc/rc.conf.local:

pkg_scripts="samba"
samba_flags=""

This should be it – start samba through /etc/rc.d/samba start and try accessing your new file shares!
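To verify from another machine that the shares are exported, smbclient can list them (assuming the NAS is reachable as nas):

smbclient -L nas -U User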

Using the server as a git server

This isn’t really NAS-specific, but git-specific. If you want to install git on the server, cd /usr/ports/devel/git and make install.

Create or clone a bare repository on the NAS:

cd /var/netshared/data
mkdir myrepo.git
cd myrepo.git
git init --bare

Or clone an existing repository as a bare clone:

cd /var/netshared/data
git clone --bare https://github.com/mstum/faml.git

Then, on your machines, clone from that repository:
git clone \\nas\data\faml.git

This will automatically set up an origin remote on your local clone, so any changes you make on your laptop can be pushed to the server through git push.

Setting up a BitTorrent client

Install the port of transmission:

cd /usr/ports/net/transmission
make install

This will automatically create a _transmission user – add it to the netshared group:
user mod -G netshared _transmission

Create folders for BitTorrent:

mkdir /var/netshared/glacier/BitTorrent
mkdir /var/netshared/glacier/BitTorrent/incomplete
mkdir /var/netshared/glacier/BitTorrent/complete
mkdir /var/netshared/glacier/BitTorrent/watch
chown -R User:netshared /var/netshared/glacier/BitTorrent

Edit the /var/transmission/.config/transmission-daemon/settings.json file (if it doesn’t exist, run /etc/rc.d/transmission-daemon start and then stop it – changes to the file will be lost if you edit it while the daemon is running). Important settings/changes:

"download-dir": "/var/netshared/glacier/BitTorrent/complete",
"incomplete-dir": "/var/netshared/glacier/BitTorrent/incomplete",
"incomplete-dir-enabled": true,
"rpc-whitelist": "127.0.0.1,192.168.1.*",
"rpc-whitelist-enabled": true,
"watch-dir": "/var/netshared/glacier/BitTorrent/watch",
"watch-dir-enabled": true

These settings make it so that any .torrent you drop into the watch directory immediately gets added and started. Downloads go into the incomplete directory while they are downloading, and are then moved to the complete directory afterwards.

rpc-whitelist is a comma-separated list of IPs that can remotely control transmission, so this should be limited to your local network. You can access the web UI on http://nas:9091/transmission/web which is pretty neat.

To auto-start transmission, edit your /etc/rc.conf.local and add transmission_daemon to the pkg_scripts. I recommend starting it before samba, so that samba gets shutdown before transmission. (OpenBSD stops services in the reverse order of startup).
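With both services enabled, my /etc/rc.conf.local ends up looking like this:

pkg_scripts="transmission_daemon samba"
samba_flags=""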

Keeping up to date

Keeping OpenBSD up to date is described in the -stable link above. Basically: CVS update src, ports, and xenocara as needed, then recompile and reboot.

To check if your ports are up to date, you can run /usr/ports/infrastructure/bin/out-of-date, then cd into any outdated port and run make update.
Note that if you’ve installed a package, it’s safe to update it through make update in the port’s directory – packages are really just precompiled ports, no “magic”.

Closing Remarks

This was really just a howto of how I set up my NAS, aimed at people who already know OpenBSD. If you’re curious about running a *NIX server and don’t mind spending some time learning the system, I can highly recommend OpenBSD – I’m very pleased with it. The system is minimalist – there are not many moving parts by default – and it really invites you to understand things properly.

If you have a more sophisticated NAS setup, you may want to look at FreeNAS as well. Do note that the 8 GB minimum RAM requirement is not a joke – FreeNAS will install and seemingly run on 4 or even 2 GB, but random data loss is almost guaranteed to occur.

configSource only works on sections, not sectionGroups

I have an app.config with some custom sectionGroups:

<configSections>
	<sectionGroup name="MyApp">
		<section name="foo" type="System.Configuration.NameValueSectionHandler, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
	</sectionGroup>
</configSections>
<MyApp>
	<foo>
		<add key="MySetting" value="14"></add>
	</foo>
</MyApp>

I wanted to externalize that:
<MyApp configSource="myapp.config">

This yields an error:

System.Configuration.ConfigurationErrorsException: The attribute 'configSource' cannot be specified because its name starts with the reserved prefix 'config' or 'lock'.

Long story short: configSource only works on <section> elements, not on <sectionGroup>.
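The approach that does work is moving configSource down onto the individual section inside the group (the file name foo.config is my choice):

<MyApp>
	<foo configSource="foo.config" />
</MyApp>

foo.config then contains the complete section element:

<foo>
	<add key="MySetting" value="14" />
</foo>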

OS X Screen Recording and Converting to GIFs with free tools

One of the lesser-known features of newer versions of QuickTime (at least on OS X) is the ability to record videos (arguably, QuickTime Player is a misleading name), either from a connected camera or from the screen. Click File > New Screen Recording to bring up the recorder. If you want, select “Show Mouse Clicks in Recording”.

After you’re done recording, you can do some trimming right in QuickTime as well – Edit > Trim.
Now you have a QuickTime file – great, but the point is to create an animated GIF from it. For that, we’ll use two free tools: ffmpeg and gifsicle. Since we’re on OS X, homebrew will do the heavy lifting for us.

brew install ffmpeg
brew install gifsicle

With both installed, we can now convert the video:
ffmpeg -i MyRecording.mov -r 10 -f gif - | gifsicle --optimize=3 --delay=3 > MyRecording.gif
Since I want to do this often, I’ve added a shell command to my .zshrc:

function movtogif {
  if [[ $# = 0 ]]; then
    echo "USAGE: movtogif filename.mov"
  else
    ffmpeg -i $1 -r 10 -f gif - | gifsicle --optimize=3 --delay=3 > ${1:r}.gif
  fi
}

For reference, the :r modifier is a zsh feature that takes the file path without its extension. See the manpages of ffmpeg and gifsicle for more information about the parameters.

Thanks to Alex Dergachev for the original idea.

faml – A Markup Language for browsers and node.js

A common request on many websites is to offer some light formatting capability for a user: Bold, Italic, Links, maybe lists. It should not clutter the markup too much and allow little room for error.

John Gruber’s Markdown is one of the most popular markup languages, but it has a few features that I commonly need to tweak or remove altogether. For my needs, I have customized a Markdown parser to remove features (recently the excellent stmd.js), but I’ve just decided to create a little markup language of my own:

faml – A Markup Language

The syntax may be inspired by Markdown, but it is really its own thing. I only included the things I need, and there is generally just one way of doing things (e.g., emphasis is added through asterisks). The code is based on stmd.js but heavily changed and broken up differently.

You can check out the source, documentation and JavaScript files on GitHub or play with it in the browser. It is also published to npm, allowing you to just npm install faml. I have example code for web browsers and for node.js.

The current version is 0.9 because I’m still working on things like the tree that the parser returns (it contains a bunch of unnecessary stuff), adding tests, and giving it a nice homepage.

But it’s there for people to play with 🙂

var parser = new faml.FamlParser();
var renderer = new faml.FamlRenderer();
var input = "test *with emph*";
var parsed = parser.parse(input);
var rendered = renderer.render(parsed);
console.log(rendered);

Standard Flavored Markdown Tips

Today, some of the biggest users of Markdown have given a gift to the Internet: Standard Flavored Markdown (Read more in Jeff Atwood’s Blog Post)

I played with it for an hour and I’m absolutely in love with it, for three reasons:

  1. It’s rock solid and mature – Try nesting Ordered and Unordered Lists in any combination and see it just do the right thing, something many implementations struggle with
  2. It comes with a reference implementation in C and JavaScript
  3. The JavaScript implementation is easy to extend (I have not done anything with the C version)

I was able to replace the Markdown parser in our current application with the stmd.js reference parser and got up and running immediately.

Here are some tips:

The Parser and Renderer are two different things

Instead of just taking Markdown and giving you HTML, stmd.js consists of a separate Parser and Renderer. This is massively useful, because it means you can massage the parsed Markdown tree before you render it, and you can also change how the Markdown is rendered without messing with the parsing code. Look at this example:

var parser = new stmd.DocParser();
var renderer = new stmd.HtmlRenderer();
var input = "this **is** a\r\n" +
            "test.\r\n\r\n" +
            "With Inline-<b>HTML</b> as well";

var ast = parser.parse(input);

var html = renderer.render(ast);

document.getElementById("output").innerHTML = html;

Set a breakpoint (with Firebug or whatever JavaScript debugger you use) and look at the glorious ast. Look at the collections of children, at the tokens, and then you might see why this is so great: You can monkey around with this, without having to worry about HTML rendering.

Treat newlines as linebreaks

This is possibly the #1 request people have when they first try out Markdown. Normally, you need two spaces at the end of a line to make it a line break; otherwise it’s treated as a space.

The parser correctly emits a simple newline as a Softbreak token. The default renderer renders softbreaks as \n – a newline in the HTML source, which browsers treat as a space rather than an actual line break. Changing that is trivial:

var renderer = new stmd.HtmlRenderer();
renderer.softbreak = "<br/>";

Now, every newline inserts a proper <br/> tag.

Disallow all HTML

Markdown allows Inline-HTML, since the original audience were programmers/bloggers. However, in some environments it may be required to disable any and all inline-HTML. To disable all HTML parsing, we tell the Parser to not generate any Html tokens:

var parser = new stmd.DocParser();
parser.inlineParser.parseHtmlTag = function() { return 0; }

All HTML Tags will now be interpreted as Str tokens and thus escaped on rendering.

Read the Source Code

The source is on GitHub, and I highly recommend reading through stmd.js to understand how it works and where the extensibility points are. I wish the Parser and Renderer were in two separate files, but it’s still very straightforward. Yes, there is a regex which parses HTML, but since Markdown doesn’t support arbitrary HTML but rather a defined subset, this is fine.

You should almost never have to edit stmd.js directly. Monkey patch, yes. But that can be in your consumer code.

This library is a gift.

Thank you, Standard Flavored Markdown team.