IIS/ASP.net Troubleshooting Cheatsheet

Setting up IIS usually results in some error (403, 500…) at first. Since I run into this a lot and always forget to write down the steps, here’s my cheatsheet now, which I’ll update if I run into additional issues.

Folder Permissions

  • Give IIS_IUSRS Read permission to the folder containing your code.
  • If the StaticFile handler throws a 403 when accessing static files, also give IUSR Read permission.
  • Make sure the files aren’t encrypted (Properties => Advanced).

If your code modifies files in that folder (e.g., using the default identity database or logging, etc.), you might need write permissions as well.

Registering ASP.net with IIS

If you install the .net Framework after IIS, you need to run one of these:

  • Windows 7/8/2012: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i
  • Windows 10: dism /online /enable-feature /all /featurename:IIS-ASPNET45

Can we build a better console.log?

For a lot of front-end JavaScript work, our browser has become the de facto IDE, thanks to powerful built-in tools in Firefox, Chrome and Edge. However, one area that has seen little improvement over the years is the console.log function.

Nowadays, I might have 8 or 9 different modules in my webpage that all output debug logs when running non-minified versions to help with debugging. The console is also my REPL for some ad-hoc coding. This results in an avalanche of messages that are hard to differentiate. There are a few ways to make things stand out: console.info, console.warn and console.error are each highlighted differently.

Additionally, there’s console.group/.groupEnd, but that requires you to wrap all calls between calls to .group(name) and .groupEnd().
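For reference, the grouping API looks like this – everything logged between the two calls is visually nested under the group header:

```javascript
// Standard console grouping: messages between group() and groupEnd()
// are shown nested under the "Foo" header in the console.
console.group("Foo");
console.log("My Log Message");
console.warn("A Warning, oh noes!");
console.groupEnd();
```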

The problem is that there is no concept of scope inherent to the logger. For example, it would be useful to create either a scoped logger or pass in a scope name:

console.logScoped("Foo", "My Log Message");
console.warnScoped("Foo", "A Warning, oh noes!");

var scoped = console.createScope("Foo");
scoped.log("My Log Message");
scoped.warn("A Warning, oh noes!");

This would allow browsers to create different sinks for their logs, e.g., a separate tab per scope. From there, we can even think about features like “Write all log messages with this scope to a file”, and then I could browse the site, test out functionality and check the logfile afterwards.
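None of this exists yet, but a userland approximation of the proposed createScope is easy to sketch on top of the standard console API (createScope and the prefix format are my own invention here):

```javascript
// Sketch of a scoped logger built only on the standard console API.
// createScope is hypothetical: it prefixes every message with the
// scope name so messages can at least be filtered in the console UI.
function createScope(name, sink = console) {
  const prefix = `[${name}]`;
  return {
    log: (...args) => sink.log(prefix, ...args),
    info: (...args) => sink.info(prefix, ...args),
    warn: (...args) => sink.warn(prefix, ...args),
    error: (...args) => sink.error(prefix, ...args),
  };
}

const scoped = createScope("Foo");
scoped.log("My Log Message");       // logs: [Foo] My Log Message
scoped.warn("A Warning, oh noes!"); // warns: [Foo] A Warning, oh noes!
```

Of course, a prefix can only help with filtering; real browser support could go further and route each scope to its own tab or file.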

Of course, there are a few issues to solve (Do we support child scopes? Would we expose sinks/output redirection via some sort of API?), but I think that it’s time to have a look at how to turn the console.log mechanism from printf-style debugging into something a lot closer to a dev logging facility.

Thoughts on ORMs, 2016 Edition

This is a bit of a follow-up to my three-year-old post about LINQ 2 SQL and my even older 2011 Thoughts on ORM post. I’ve changed my position several times over the last three years as I learned about issues with different approaches. The TL;DR is that for reads, I prefer handwritten SQL, with Dapper as a mapper on .net to avoid having to deal with DataReaders. For inserts and updates, I’m a bit torn.

I still think that L2S is a vastly better ORM than Entity Framework if you’re OK with solely targeting MS SQL Server, but once you go into even slightly more complex scenarios, you’ll run into issues like the dreaded SELECT N+1 far too easily. This is especially true if you pass around entities through layers of code, because now virtually any part of your code (incl. your View Models or serialization infrastructure) might make a ton of additional SQL calls.

The main problem here isn’t so much detecting the issue (tools like MiniProfiler or L2S Prof make that easy); it’s that fixing the issue can result in a massive refactor. You’d have to break up your EntitySets and potentially create new business objects, which then require a bunch of further refactorings.

My strategy has been this for the past two years:

  1. All Database Code lives in a Repository Class
  2. The Repository exposes Business-objects
  3. After the Repository returns a value, no further DB Queries happen without the repository being called again

All Database Code lives in a Repository Class

I have one or more classes that end in Repository, and it’s these classes that implement the database logic. If I need to call the database from my MVC Controller, I call a repository. That way, all database queries live in one place, and I can optimize the heck out of calls as long as the inputs and outputs stay the same. I can also add a caching layer right there.
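The rule can be sketched like this (in JavaScript for brevity, even though my actual stack is C#/MVC; all class, table and method names below are made up for illustration):

```javascript
// Hypothetical repository: the only place in the codebase that talks
// to the database. `db` is anything with a query(sql, params) method,
// injected so the internals can be optimized or cached without
// changing any caller.
class AccountRepository {
  constructor(db) {
    this.db = db;
  }

  // Returns a finished business object; callers never see the
  // connection, the SQL, or any lazily-loaded entities.
  getAccountWithBalance(accountId) {
    const rows = this.db.query(
      `SELECT a.Id, a.Name, COALESCE(SUM(t.Amount), 0) AS Balance
         FROM Accounts a
         LEFT JOIN Transactions t ON t.AccountId = a.Id
        WHERE a.Id = ?
        GROUP BY a.Id, a.Name`,
      [accountId]
    );
    if (rows.length === 0) return null;
    const { Id, Name, Balance } = rows[0];
    return { id: Id, name: Name, balance: Balance };
  }
}
```

Because callers only depend on the method signature, the SQL inside can be rewritten freely when a query turns out to be slow.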

The Repository exposes Business-objects

If I have a specialized business type (say AccountWithBalances), then that’s what the repository exposes. I can write a complex SQL Query that joins a bunch of tables (to get the account balances) and optimize it as much as I want. There are scenarios where I might have multiple repositories (e.g., AccountRepository and TransactionsRepository), in which case I need to make a judgement call: Should I add a “Cross-Entity” method in one of the repositories, or should I go one level higher into a service-layer (which could be the MVC Controller) to orchestrate?

After the Repository returns a value, no further DB Queries happen without the repository being called again

But regardless of how I decide where the AccountWithBalances getter method should live, the one thing I’m not going to do is expose a hot database object. Sure, it sounds convenient to get an Account and then just do var balance = acc.Transactions.Sum(t => t.Amount); but that will lead to 1am debugging sessions because your application broke down once more than two people hit it at the same time.

In the end, long-term maintainability suffers greatly otherwise. The ORM route seems more productive (and it is, for simple apps), but once you get a grip on T-SQL you end up writing your SQL queries anyway, and now you don’t have to worry about inefficient SQL because you can look at the query execution plan and tweak. You’re also not blocked from using optimized features like MERGE, hierarchyid or windowing functions. It’s like going from MS SQL Server Lite to the real thing.

So, raw ADO.net? Nah. The problem there is that after you’ve written an awesome query, you have to deal with SqlDataReaders, which is not pleasant. Now, LINQ 2 SQL would map to business objects for you, but it’s slow. And I mean prohibitively so: I’ve had an app that queried the database in 200ms, but then took 18 seconds to map the results to .net objects. That’s why I started using Dapper (disclaimer: I work for Stack Overflow, but I used Dapper before I did). It doesn’t generate SQL, but it handles parameters for me and it does the mapping, pretty fast actually.

If you know T-SQL, this is the way I’d recommend going because the long-term maintainability is worth it. And if you don’t know T-SQL, I recommend taking a week or so to learn the basics, because long-term it’s in your best interest to know SQL.

But what about Insert, Update and Delete?

Another thing that requires raw SQL is deletes. E.g., “delete all accounts that haven’t visited the site in 30 days” can be expressed in a single SQL statement, but with an ORM you can fall into the trap of first fetching all those rows and then deleting them one by one, which is just nasty.

But where it gets really complicated is foreign key relationships. This is actually a discussion we had internally at Stack Overflow, with good points on either side. Let’s say you have a bank account, and you add a new Transaction that also has a new Payee (the person charging you money, e.g., your mortgage company). The schema would be designed with a Payees table (Id, Name) and a Transactions table (Id, Date, PayeeId, Amount), with a foreign key between Transactions.PayeeId and Payees.Id. In our example, we would have to insert the new Payee first, get their Id, and then create a Transaction with that Id. In SQL, I would probably write a sproc for this so that I can use a MERGE statement:

CREATE PROCEDURE dbo.InsertTransaction
    @PayeeName nvarchar(100),
    @Amount decimal(19,4)
AS
BEGIN
    DECLARE @payeeIdTable TABLE (Id INTEGER)
    DECLARE @payeeId INT

    -- Upsert the Payee and capture its Id, whether it was
    -- just inserted or already existed.
    MERGE dbo.Payees AS TGT
        USING (VALUES (@PayeeName)) AS SRC (Name)
        ON (TGT.Name = SRC.Name)
        WHEN NOT MATCHED THEN
            INSERT (Name) VALUES (SRC.Name)
        WHEN MATCHED THEN
            -- No-op update so matched rows also appear in the OUTPUT clause
            -- (MERGE doesn't allow assigning variables in its SET clause)
            UPDATE SET TGT.Name = SRC.Name
    OUTPUT Inserted.Id INTO @payeeIdTable;

    SET @payeeId = (SELECT TOP 1 Id FROM @payeeIdTable)

    INSERT INTO dbo.Transactions (Date, PayeeId, Amount)
    VALUES (GETDATE(), @payeeId, @Amount)
END
The problem is that once you have several foreign key relationships, possibly even nested, this can quickly become complex and hard to get right. In that case, there might be a good reason to use an ORM, because its built-in object tracking makes this a lot simpler. But this is the perspective of someone who never had performance problems doing INSERTs or UPDATEs, but plenty of problems doing SELECTs. If your application is INSERT-heavy, hand-rolled SQL is the way to go, if only because all the dynamic SQL queries generated by an ORM don’t play well with SQL Server’s query plan cache (which is another reason to consider stored procedures for write-heavy applications).

To conclude: there are good reasons to use ORMs for developer productivity (especially if you don’t know T-SQL well), but as for me, I’ll never touch an ORM for SELECTs again if I can avoid it.

Late 2015 PC Build

I’ve been saying for a while that the Mid 2010 Mac Pro was the best computer I’ve ever owned. The internal layout was just so great, swapping out hard drives or RAM was easy, no cables in the way, airflow well thought out, just awesome.

Unfortunately, being a Mac limits upgrades. While there are ways to upgrade the CPU, upgrading the video card requires some firmware flashing and trial and error. I had some luck buying a card to drive my 4K Monitor off MacVidCards, but the pricing for anything decent just didn’t work for me.

After 5 years, it was time to relegate my Mac Pro to be dedicated to Logic Pro X and Final Cut Pro X and go back to building a PC. I still had some leftover parts from my previous PC and ended up with this parts list for a bit less than $1000:

The pieces

(Disclaimer: Intel, nVidia and Gigabyte have been sponsors of my employer and are advertising with a game made by said employer. My choice of these components was independent of that, and mostly driven by price, availability and benchmarks from sites like Tom’s Hardware or AnandTech, or whatever Google came up with.)

For the case, I had to concede that PC cases just can’t compete with the Mac Pro. The Corsair case looks nice from the front, has no obnoxious window in the side, is mostly toolless and has enough space. Check the YouTube review linked above. I went with a 650W power supply which is more than plenty to support the 65W CPU, 60W Graphics Card and all the other stuff. 80Plus Platinum for just under $100 is neat. It’s not modular, but modular PSUs usually result in me losing the extra cables anyway.

On the CPU side, Intel is a no-brainer these days for games. Skylake Core-CPUs just came out, are priced really well and are fast as heck. Since I don’t care about overclocking, the i5-6500 won out over the 6600 or 6600K simply because it’s a lot cheaper.

Going with that is a Z170 chipset board. The Z170XP-SLI isn’t very expensive, has a USB 3.1 Type-C port, supports DDR4 and has an M.2 SSD slot – more on that later.

I have a collection of RAM sticks at home, mainly because whenever I upgrade RAM, the old sticks don’t fit in any other machine. This machine comes with yet another type of RAM, DDR4. How much you need is always up for discussion; I wouldn’t go below 8 GB these days, and I didn’t see a reason to get more than 16 GB. YMMV, but RAM is cheap enough to err on the side of more. Make sure to get a pair: the CPU uses a dual-channel memory controller, which means you should use two or four modules, and if you use two, make sure to populate the two slots of the same color. I got DDR4-2133 memory, which is technically the slowest, but it’s the only speed the board supports without overclocking. A lot of marketing talks about support for faster DDR4 speeds, but in parentheses you usually see (O.C.). I’m not interested in overclocking, so I went with the speed that’s supported.

The OS drive is pure luxury overkill. Getting an SSD is a must these days, and normally I’d have gone with a Samsung 850 Pro – StorageReview can tell you why. That is still an S-ATA-based SSD, though; S-ATA was aimed at mechanical hard drives and is limited to 6 Gigabit/s. A 256 GB drive runs at about $130. The 950 Pro, on the other hand, uses the M.2 slot and supports NVM Express (NVMe). It’s shaped like a stick of gum and sits directly on the motherboard – that’s why I went with this board. It’s ridiculously fast (2.5 GB/s compared to 550 MB/s on the 850 Pro), although IOPS are roughly the same. I wouldn’t consider anything but Samsung these days, as they make all the components – flash chips, controller, finished product – and aren’t priced much differently than the competition.

The graphics card is the one component where I had to compromise. My number one choice would’ve been a GeForce GTX 970. However, these cards (and other second-generation Maxwell cards) suffer from coil whine, a high-pitched noise when playing games. I didn’t want to take the risk, so I went with a first-generation Maxwell-based GTX 750 Ti, which is cheaper and will work well as a stopgap until coil whine is solved.

Whether or not you need a sound card is debatable these days. I have the X-Fi Titanium HD and love its RCA outputs that go straight to my amplifier.

I love my K70 keyboard because of the media keys (complete with a volume wheel) and because mechanical keyboards are just a must-have.

NVM Express and Windows

The thing with brand-new hardware standards is that older operating systems don’t support them well. In my case, my preference to run Windows 7 collided with the necessity of booting from an NVMe drive. There is a hotfix to add NVMe support to Windows 7, and Intel has a set of instructions, but at the end of the day, it was time to move to a newer version of Windows.

Windows 10 supports a NVMe boot drive natively, so no issues there.

A look at Adventure Game GUIs

Adventure games have come a long way, from the text adventures of the 80’s to being something of an anachronism in a 2010’s landscape dominated by twitch-based action games. Here are some GUIs that adventure games used over the years. This is by no means exhaustive, but it should cover the most common ones.

Early Sierra Games were essentially still text adventures – you could move your character, but not interact with anything without typing.

(King’s Quest 1)

A lot of early games used text-based GUIs that still resemble Text Adventures, except that the parser is now hidden, thus eliminating the “Guess the Verb” problem.

(Zak McKracken / Commodore 64)

(The Secret of Monkey Island / Amiga)

LucasArts would eventually reduce the number of verbs and replace the inventory with icons.

(The Secret of Monkey Island / PC DOS, VGA Version)

(Indiana Jones and the Fate of Atlantis / PC DOS)

Some games had icons for the actions.

(The Secret of Monkey Island Special Edition / iPad)

(Das Erbe / Amiga)

(BiFi Roll Action in Hollywood / Amiga)

There are Hybrid Icon/Text approaches.
(Das Telekommando kehrt zurück / Amiga)

Assigning body parts is a way to not have explicit actions per se (since, e.g., a hand can mean Push, Pull, Punch, Use, Climb, Shoot…).

(Full Throttle)

(Gemini Rue)

(Curse of Monkey Island)

Some games use a simple Interact/Look breakdown:

(Secret Files: Puritas Cordis, but the same cursor was used in Secret Files: Tunguska and Lost Horizon)


And some games didn’t have verbs in the traditional sense at all, but relied on a single context-sensitive action, plus inventory action and dialogue. However, sometimes right-clicking implies “Look” while left clicking is “Interact”, so it’s not really that different.

(Broken Sword: The Shadow of the Templars)

(Legend of Kyrandia)

(Space Quest V)

A special case are multiple verbs that are context-sensitive (so they may differ completely from hotspot to hotspot).

(A New Beginning – Final Cut)

The Surface Pro is a PC, so what did you expect?

So yesterday The Verge reported that the 64 GB Surface Pro would only have 23 GB of free space left, and all of a sudden the internet pretended to be surprised. Of the many reactions, I think Marco Arment had the only interesting one, but his “Truth in Advertising” suggestion will likely clash with the ignorance of customers around the world, and especially in the USA, so don’t expect anything to happen.

And why should it? The Surface Pro is a PC, not a Tablet. It has a real 64-Bit x86 CPU instead of an ARM chip, and it runs a real operating system instead of one of those slimmed-down Tablet OSes. (Granted, Windows 8 is a vastly inferior version of Windows 7 with a halfway-decent Tablet OS clumsily bolted on, but that’s a different story. It’ll be interesting to watch the people who always scream “But we want real Windows apps on our devices!” realize that almost all real Windows apps so far are meant for keyboard and mouse, since they have small touch targets, right-click menus and only work well at the default 96 dpi.)

Have you ever tried to install a real Windows on a 64 GB C: Drive? There is a reason small SSDs aren’t really that popular as boot drives, especially when you consider that the Page- and Hibernation files also take up space (although the small amount of RAM – 4 GB – in the Surface Pro might help a bit).

Once you start installing a few applications, that space will go away just as it would on a normal PC. That’s when you connect an auxiliary storage device (a D: Drive in form of an SD Card) and read up tweaking guides to move stuff around, just like people did when they tried cramming Vista and Win7 on a 64 GB C: Drive.

It remains to be seen whether the Surface Pro is crippled by UEFI Secure Boot like the Surface, thus preventing you from upgrading to Windows 7 or Vista, but if it is, don’t complain: it was your choice to buy such a PC.

I’m not saying that the Surface Pro is a bad product, but you should be aware that you’re not buying a Tablet running an optimized Tablet OS which runs optimized Tablet Apps to give you an optimized Tablet experience. You are buying a Touchscreen Laptop with all Pros and Cons. Which is why the Type Cover is such a good, important and mandatory idea.

An Epic Win

An Epic Win is an outcome so extraordinarily positive that you had no idea it was even possible before you achieved it. It was almost beyond the threshold of imagination, and when you got there, you were shocked to discover what you were truly capable of.

Apart from this perfect definition, Jane McGonigal’s “Gaming can make a better world” talk contains some really interesting bits. It starts out a bit strange, but picks up steam quickly.

Jane McGonigal: Gaming can make a better world

Game Flow, Part One (Dissecting Indiana Jones and the Fate of Atlantis Part 2)

(This article is part of the Dissecting Indiana Jones and the Fate of Atlantis series)

Typical adventure games increase the complexity and difficulty of the game gradually, and like movie scripts, they often have several distinct parts. Atlantis is no different, although the acts aren’t clearly marked in the game.

From a high level perspective, the game has these parts:


Atlantis is one of the most complex adventure games because it has three different paths through its middle section. A lot of adventure games have multiple solutions to individual puzzles, but the paths make so much use of conditional logic and global state variables that the complexity is a lot higher than in other adventures. According to Wikipedia, adding the paths took an additional six months and turned it into a two-year project, so that cost was significant.

In future posts I’m going to go more into handling state and how the paths work, but for now I want to focus on the high level game flow, specifically on the “Find Plato’s Lost Dialogue” path, henceforth called “Act 1” (The game doesn’t name its acts).

Act 1

New York

The game starts out in a single location with a single room, New York. The very first puzzle requires you to get access to the theatre. Of course, tickets are sold out. There are three solutions to the puzzle, and the game remembers which one you’ve taken for some important flavor at the end of this act:

  • You can knock on the door, insult the bouncer and start a fist fight.
  • You can knock on the door, and praise Sophia Hapgood, the bouncer’s idol. He will let you in since you’re okay for a college boy, pal!
  • You can ignore the door and push some crates to gain access to the fire exit ladder.


The next room contains the stage hand, who needs to be distracted. The necessary newspaper can be found on the street we arrived from, so this puzzle takes place in two rooms but in a single location.

Iceland, First Visit

We’re automatically going to Iceland where we have to talk to Dr. Heimdall and learn about Dr. Sternhart in Tikal and Mr. Costa on the Azores. Once we have learned about these two people, we have multiple locations to travel to: Tikal, Iceland, The Azores and Barnett College, and no clear indication which one should be first.

Atlantis has increased its complexity on us very gradually, and at this point the player needs to make choices about where to go; quite likely they will run into a few dead ends. What is important is that even though we have four locations, there is only one very linear way through them, and each location is essentially self-contained.

Tikal: Sophia is more than just an attachment…

The correct way through this linear progression starts in Tikal. Once inside the temple, the player will need to involve Sophia to advance as she needs to keep Dr. Sternhart busy so that Indy can steal the kerosene lamp.

While the concept of multiple characters isn’t new (Maniac Mansion had three, Zak McKracken had four), novice players may not be aware that Sophia is more than just an attachment, and Tikal lets them find this out in a very natural way: since Sternhart will always catch Indy when he tries to take the lamp, it feels very natural to ask Sophia for help.

Another small thing that Tikal introduces is manipulating items in the inventory. Before, you just gave the newspaper to the stagehand, but here you need to open the kerosene lamp and then use it on the spiral on the wall. This is a tiny detail, but remarkable nonetheless.


Iceland, Second Visit

In Iceland, Dr. Heimdall has frozen to death, and the frozen eel figurine is partially exposed at the head. This is significant for two reasons: First, the player is supposed to remember that in New York, Sophia put an Orichalcum bead inside the mouth of her necklace (which looks like a face) and awesome stuff happened. So the mental connection should be “Get Orichalcum and come back!”

The second reason is that Iceland is a very small location and the player already visited it, so it might feel stale at this point. As grim as it is, having Heimdall frozen helps the location to not feel boring as it changed between visits.

The Azores: …she is a real character

With the Eel in hand, the Azores can be tackled. Mr. Costa bluntly sends Indy away. Previously in Tikal, the player could talk to Sophia and ask her to distract Sternhart, which serves as a clue on how to proceed here.

It is not immediately obvious that Costa might react differently to her than to Indy, and Tikal helped by having the player naturally try talking to her (instead of just trying out everything and stumbling on the “Talk to Sophia” option).

When Indy asks her to take over, the player can control her as a real character. This is once again introducing a new concept gradually and solving one half of the puzzle here.

The second half can be solved by looking at the dialogue: Costa wants to trade, but the necklace is not an option. The player may (and should) recall the eel figurine in the ice from the first visit to Iceland (Heimdall even spoke about it, and the player HAD to go through the dialogue explaining the figurine before they could advance).

Exchanging the eel figurine for information about the location sends the player to Barnett College.


Barnett College: The most complex puzzle yet, softened through the intro

So up until this point, every location was self-contained with the exception of a single thing that needed to be done in one location in order to solve the next one.

The choice of four locations is an illusion but it keeps the player busy exploring and makes the game longer without feeling stretched out.

Barnett College contains a (relatively) complex puzzle, the final one of this act. Also, the location of Plato’s lost dialogue changes in every playthrough. There are three different places where the Dialogue might be:

  • In a chest, which requires a key from the room above


  • In the tipped-over bookcase in the library (there are two ways to get it)


  • In one of the cat figurines


The player is already familiar with the location (remember, we went through it in the intro), so the fact that we have a whopping six possible rooms with items that need to be carried between is softened a bit by the familiarity.

We may need the mayonnaise from the office to move the totem pole and climb up to the attic, get the key from the urn, go down again, move the big box and open the chest.

We may need to grab the arrowhead from this room and combine it with the rag from the boiler room so that we can unscrew the back of the library bookcase and get the book. In this case, combining the rag and arrowhead to create a screwdriver is the first use of combining inventory items in the game.

We may need to grab the gum from the desk in the library and use it to go up the coal chute (after we grabbed a piece of coal) and then either throw the coal at the dangling book to get it down, or we may need to get a wax cat figurine and smelt it in the furnace below.

Again, gradual increase, softened through familiarity with the location, but nevertheless a bump from previous puzzles.

After this puzzle is solved, the player has to choose one of the three paths through Act 2 (again, the game doesn’t indicate acts, I’m making this up). Sophia will recommend a path. Remember above when I said that the game remembers how Indy got into the theatre?

  • If he went through the crates and took the fire ladder, Sophia will recommend the Wits path
  • If he praised Sophia in front of Biff, Sophia will recommend the Team path
  • If he beat Biff in a fistfight, Sophia will recommend the Fists path


This is only a recommendation; the player is free to choose any of the three paths. Still, it is remarkable how the game tries to learn about the player and then tailor the rest of the game to them, without committing at the beginning, so as not to punish a player who got into the fistfight with Biff by accident or decided that they didn’t like fighting.


Act One: Maybe one of the best learning curves in any game, ever.

Some of the things you have read may seem obvious and have you go “Well, duh!”. But I’m trying to show how the game caters to first-time gamers who may not be familiar with computers or games (remember, this was 1992).

The first act very gradually introduces new elements, forcing the player to go through a linear progression that seemingly opens up, but really only turns into a wider tunnel instead of an intersection. Things that are important later on are clearly shown in advance and the player can remember them later (which seems to be an application of Chekhov’s gun in games).

Remembering how you dealt with Biff and offering that as a choice hasn’t been repeated in mainstream adventures (with the notable exceptions of Fahrenheit and Heavy Rain), presumably because it adds so much extra work for something that a lot of players will never see – before writing this series, I had never gone through the Fists path, for example.

(Sidenote: There was a big discussion about linearity vs. open world when Final Fantasy XIII was released, since it was very linear. Development costs for games have exploded over the past few years, and making expensive optional content is a concern for the bottom line, no doubt about that.)

In a way, the different paths act as difficulty levels as well. The Wits path may have the hardest puzzles, but is very light on fights. The Fists path is the opposite, and it adds difficulty through two opponents who are immune to the sucker punch (Keypad 0, which instantly knocks out an opponent). The Team path is a nice balance and is a bit more dialogue-heavy.

The beginning of the game introduces every individual element of the game in isolation (and there are quite a few elements if you look at it) and it does so without feeling like a limited tutorial. It encourages safe experimentation and discovery, so at the end, the player should be well prepared for the next part, which is the meat of the game.

Authentic Gaming Experiences

I’m just setting up my Commodore 64 again for some authentic gaming. The 5.25” floppy disks of the day used a notch in the side to detect whether the disk was write-protected, and commercially bought empty disks usually only had the notch on the right, making one side writable.

However, people soon found out that the disks were usually safe to use on both sides, and that if you put a similar notch in the other side, you could write data onto the back as well. There were even sophisticated hole punches for floppy disks.

Wussies I say, real men use cutting tools and make their own holes, so that their disks have character and personality! Okay, okay, I kid, the real reason is that I just can’t find these disk hole punches anymore on eBay, they have become even rarer than disk boxes.


Anyway, my setup is up and running and almost ready to record (fighting with a broken hard drive in a RAID-0 array and with the Blackmagic Intensity Pro’s lack of good configuration options – the setup is arguably overkill anyway, it was meant for 1080p HDMI capture).


One thing I noticed is that the Commodore 64 had some real timing differences between the PAL and NTSC versions: the Zak McKracken intro seems to play 50% faster on mine.

Here’s what the intro is supposed to look like:

Zak Mckracken Intro (Commodore 64)


I like how far emulation has come over the years and how we can now experience classic games more easily and comfortably (faster loading times, less or no disk swapping, save/restore of the system state, no cabling and no need for different controllers), but sometimes, authentic experiences require the real hardware.

Apart from tube screens. Sorry, I’ll take the slightly degraded quality on a digital LCD over the size of a proper analogue NTSC color TV.


CircularBuffer added to my .net Utils Library

I’ve just updated my .net Utilities Library with a Circular Buffer. Such a buffer (also called a ring buffer) has a fixed capacity, and when that capacity is reached, new entries overwrite the oldest ones. In other words, it is a buffer that holds the last {capacity} items.

The implementation is currently not optimized for speed; this is something I’ll tackle soon. (Update: Done, CopyTo and Contains should be much faster.) Implementing a circular buffer is relatively simple, but it makes my head spin with the off-by-one errors you encounter when dealing with an array that’s split at an arbitrary point. It is definitely a nice exercise for a Code Kata though, and may teach you a thing or two about Enumerators.

It is not possible to remove items (I don’t need that functionality yet), though I might look into it in the future. The Enumerator works as expected: it starts with the oldest element and returns all elements up to the most recently inserted one. Modifying the collection while enumerating throws an Exception, and thread safety is the same as with a List<T>, which means “none at all”.

Example usage:

var buffer = new CircularBuffer<int>(3);
buffer.Add(1);
buffer.Add(2);
buffer.Add(3);
buffer.Add(4);
// buffer now holds [2,3,4] - the 1 was overwritten once the capacity was exceeded
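If you want to try the off-by-one exercise yourself, here is a minimal sketch of the idea in JavaScript (this is not the library’s actual implementation, just the core technique: a fixed-size array plus a start index, with writes wrapping around):

```javascript
// Minimal circular buffer sketch: a fixed-size backing array and the
// index of the oldest element; once full, new writes overwrite the
// oldest entry and the start index advances.
class CircularBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = new Array(capacity);
    this.start = 0; // index of the oldest element
    this.count = 0;
  }

  add(item) {
    const end = (this.start + this.count) % this.capacity;
    this.items[end] = item;
    if (this.count < this.capacity) {
      this.count++;
    } else {
      // Buffer full: the slot we just wrote held the oldest element,
      // so the next-oldest one becomes the new start.
      this.start = (this.start + 1) % this.capacity;
    }
  }

  // Enumerate oldest to newest, mirroring the enumerator described above.
  toArray() {
    const result = [];
    for (let i = 0; i < this.count; i++) {
      result.push(this.items[(this.start + i) % this.capacity]);
    }
    return result;
  }
}
```

The modulo arithmetic in add and toArray is exactly where the off-by-one traps hide, which is what makes this a nice kata.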