Agile, a series of Waterfalls

I have been through many different ways to manage software development during my career. I have cowboy coded and ninja-deployed. I have had months of research and spec writing. I have had 2-, 3-, 4- and 5-week sprints. I have had daily standups at the beginning of the day, at the end of the day, once a week, once a month. I have had retrospectives at the end of a sprint, at the end of a month, at the end of a quarter, at the end of a feature, at the end of a release. I have estimated in days, story points, US Dollars, T-Shirt sizes. I have done Waterfall, Kanban, Scrum, Lean, XP, Waterboarding. I think I've even been a Rockstar developer once, by which I mean that I threw up after drinking too much and speaking Simlish.

In short: If you can imagine a way to develop software, I've likely experienced it first hand. And I am still astounded how much the term "Agile" (not to be confused with the similar Italian word) gets abused. So please, let me start by giving the complete, unabridged, definitive definition of Agile Development:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Agile Manifesto, https://agilemanifesto.org/

This is it. This is the entirety of the Agile Manifesto. And yet, this short, 4-clause manifesto has spawned thousands and thousands of pages of books, coaching and courses, a plethora of new vocabulary, and of course, certifications and TLAs galore. PMI-ACP, CSM, CSPO, CSD, CSP, PSM, PSPO, PSD-I, SAFe, MFG, OMGWTFBBQGTFO, Six Sigma.

Now, this stuff is good for the job market. A lot of Project/Product/Program Managers (or "Producers" in the Entertainment industry - but if you ask 10 people what that term actually means, you get 15 different answers) managed to make a career out of this, and I'm not against that. Technology is changing really fast, and that affects non-developers as well.

But it also causes a paradox: Too many times, it leads to an attempt to force "Agile Development" to follow extremely specific, narrow rules. Otherwise, you are doing it "wrong". The counter-argument to this is of course that not having any rules means that you're just winging it.

The point about Agile Development - and software development in general - is that you need to figure out what works for your team and your kind of work. Are you writing software for government/medical/military use? There's gonna be waterfall-levels of spec writing involved, no way around that. But you can still split out features that are specced, develop them in sprints and regularly check with the customer. Are you a startup that's sustained on Ramen and a shoestring budget that doesn't have any room to actually buy shoestrings? Just hacking stuff together and deploying it 73 times an hour might be A-OK. Are you writing corporate software that requires an early heads-up? Having monthly or quarterly releases and doing "3 week waterfalls" is an option.

The two worst ways I've seen software development handled are trying to conform to very strict rules that clearly don't work for the kind of work you're doing, and constantly changing the process under the guise of making adjustments while really just hoping that something magically works.

But there's no silver bullet, and there's really no way around having the person or people in charge of the team/process understand software development well enough to make educated adjustments. If your process isn't working - why is that? That's a tricky question to answer, and often one that stays unanswered as random adjustments are made.

Figure out if it's a problem with your customer interaction (More/less frequent reviews? Involvement in sprint planning or not? Approving every change before real work even starts, or making decisions and asking for approval during the process?), a problem with your team structure (Too few developers doing too many features at once? Too many developers working on one feature? Skill sets not matching up with tasks?), or a problem with your decision-making process (Does it take 3 weeks to even get a meeting about whether the button should be on the left or right of the form, without even making a decision? Unless that button is the "Test Missile Alert" button, you should probably look at a faster way to make decisions.) Do you have one super-urgent "do or die" feature that needs to go in NOW? Abandon the parts of the process that stand in the way of implementing it.

Everything is a tool. While it's perfectly possible to make a career without knowing the tools available to you, it's worth learning the pros/cons/strengths/weaknesses of each available tool. That book or certification you got? It's a tool to put on your CV to land a job, but it's also a way to learn more about tools.

Use only that which works, and take it from any place you can find it.

Using a Dual-M.2 to 2.5″ SATA Adapter with RAID-1

I have a little home file server, which is filled with several hard disks in Mirrored Windows Storage Spaces arrays. While Mirrors/RAID-1 isn't backup (it won't help with Ransomware, viruses, or accidental deletion), it gives me a bit more peace of mind.

Now, because of how little space is in the case, I filled it with 6 regular hard drives for Data and 2 PCI Express x4 NVMe drives in slots, which left no real space for an OS Drive. I knew that I could cram a single 2.5" SSD in there, but that would mean no mirroring for the OS drive.

After some research, I found exactly what I wanted: An adapter that takes 2x M.2 SATA Drives, does RAID-1 in hardware, and makes them look like a regular 2.5" SSD. (Amazon.com article B076S9VK1M, StarTech.com M.2 to SATA Adapter - Dual Slot - for 2.5in Drive Bay - RAID, $44). The manual calls it S322M225R.

I got two 128 GB ADATA SU800 (ASU800NS38-128GT-C) drives to go with it - cheap TLC drives that still have a DRAM cache, so they aren't terribly slow.
Important: You need to make sure that you use SATA M.2 drives, not PCI Express/NVMe ones. They are keyed slightly differently, but look otherwise identical. Check the description of whatever drive you want to use.

The adapter supports four modes: RAID-0, RAID-1, Span, and JBOD. JBOD requires that your SATA port supports port multipliers, because the host will see two individual hard drives. RAID-0 and Span are modes in which a single drive failure causes data loss across both drives, so I don't care about those modes at all.

There are also 3 LEDs on the adapter: Power, and Activity for Drive 1 and 2.

You set the desired mode using three jumpers. You need to set another jumper (J2), power on the device to set the RAID mode, then power off and unset the jumper to use the drive.

Jumper Settings for Modes

The drive shows up as ASMT109x- Safe, with a capacity of 128 GB. The first boot up is pretty slow; I assume the adapter blocks while it initializes the drives. Further reboots are as fast as normal. The drive shows up like a regular single 128 GB drive and can be partitioned and formatted as normal.

To make sure that the mirroring works, I put each drive into a simple M.2 SATA Adapter and verified that the data shows up on both. I then made a few changes to the data to test rebuilding.

Using a second, simple M.2 SATA Adapter to verify that I can read the data

The good news: Mirroring worked fine - I could mount each disk individually and access the data. After plugging the drives back into the RAID adapter, the mirror was rebuilt.

The bad news: There was no indication that the array failed when I removed one drive. SMART still says that everything is OK. The manual says that a permanently lit LED indicates failure. I'll have another look at the SMART Data over time to see if there is a way to detect disk failure.
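
If I do dig into it, the obvious place to check from software on Windows is WMI. Here's a minimal sketch of what I'd try (assuming the adapter passes the standard disk status and SMART failure-prediction classes through - which is exactly the part that remains to be verified). It needs a reference to System.Management and may require running elevated:

using System;
using System.Management;

class DriveHealthCheck
{
    static void Main()
    {
        // Basic drive status as reported by the storage stack ("OK", "Pred Fail", ...)
        var drives = new ManagementObjectSearcher(@"root\cimv2",
            "SELECT Model, Status FROM Win32_DiskDrive");
        foreach (ManagementObject drive in drives.Get())
        {
            Console.WriteLine("{0}: {1}", drive["Model"], drive["Status"]);
        }

        // SMART failure prediction, if the device exposes it at all
        var smart = new ManagementObjectSearcher(@"root\wmi",
            "SELECT InstanceName, PredictFailure FROM MSStorageDriver_FailurePredictStatus");
        foreach (ManagementObject entry in smart.Get())
        {
            Console.WriteLine("{0}: PredictFailure={1}", entry["InstanceName"], entry["PredictFailure"]);
        }
    }
}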

Overall, the adapter does what it's supposed to do, so that's great. I dislike that there seems to be no software-detectable way to see when a drive fails, which limits its use in more critical environments. But as a way to save me some time rebuilding the OS in case a drive dies, it does what I want.

Here are some more pictures of the manual and usage:

Missing XML comment for publicly visible type or member ‘considered harmful’

One of the nice things about .net is that you can automatically generate an .xml file for the xmldoc comments.

One of the worst things, however, is that by default this leads to compiler warnings (and, if "warnings as errors" is enabled - as it should be - to a failed compilation).

1>FooWrapper.cs(5,18,5,28): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper'
1>FooWrapper.cs(7,21,7,24): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper.Foo'
1>FooWrapper.cs(9,16,9,26): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper.FooWrapper()'
1>FooWrapper.cs(14,16,14,26): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper.FooWrapper(bool)'
1>FooWrapper.cs(19,32,19,39): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper.Dispose(bool)'
1>FooWrapper.cs(23,21,23,28): warning CS1591: Missing XML comment for publicly visible type or member 'FooWrapper.Dispose()'

This often leads to the desire to add comments to everything, possibly even using automated tools, which results in a class like this:

/// <summary>
/// A Class to wrap a Foo value.
/// </summary>
public class FooWrapper: IDisposable
{
    /// <summary>
    /// The wrapped Foo value
    /// </summary>
    public bool Foo { get; }

    /// <summary>
    /// Initializes a new instance of the <see cref="FooWrapper"/> class.
    /// </summary>
    public FooWrapper()
    {
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="FooWrapper"/> class,
    /// with the given value for foo.
    /// </summary>
    public FooWrapper(bool foo)
    {
        Foo = foo;
    }

    /// <summary>
    /// Releases unmanaged and - optionally - managed resources.
    /// </summary>
    /// <param name="disposing">
    ///     <c>true</c> to release both managed and unmanaged resources;
    ///     <c>false</c> to release only unmanaged resources.
    /// </param>
    protected virtual void Dispose(bool disposing)
    {
    }

    /// <summary>
    /// Performs application-defined tasks associated with freeing,
    /// releasing, or resetting unmanaged resources.
    /// </summary>
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}

What's wrong with this class? The signal-to-noise ratio is atrocious, and I consider this downright harmful to understanding what the class does, and of course the comments get outdated even quicker the more there are. Let's break it down into the useful and useless:

FooWrapper: A Class to wrap a Foo value.

Potentially useful. This tells me what the class is meant for, but sane naming of the class already does that. It could be more useful to explain why Foo needs to be wrapped and when I should use this instead of just passing around the Foo value directly, and when to subclass it.

Foo: The wrapped Foo value

Useless. I know it's a wrapped Foo value because it's a property named Foo in a class named FooWrapper. What could make this useful is by explaining what this Foo value represents, and what I would use it for.

FooWrapper: Initializes a new instance of the <see cref="FooWrapper"/> class.

Useless. I know that it initializes a new instance of the FooWrapper class, because it's a constructor of the FooWrapper class. That's what constructors do, they initialize new instances of the class they are part of. There is no other information conveyed here - no information about potential side-effects, about valid input arguments, about potential Exceptions, nothing.

The overload that tells me that the bool foo argument will initialize Foo to the given foo is also useless, because - well, duh, what else is it going to do?

Dispose: Releases resources

Useless. IDisposable is a fundamental language feature, so both the reason for this method and the Dispose pattern are well known. What isn't known is if there's anything noteworthy - does it dispose any values that were passed into the constructor? (Important e.g., when passing Streams around - whose job is it to close/dispose the stream in the end?). Are there negative side effects if NOT disposing in time?

Useful comments

Now, this class is arguably a very simplistic example. But that makes it also a very good example, because many applications and libraries contain tons of these simple classes. And many times, it feels that they are commented like this out of Malicious Compliance in order to shut the compiler warnings up or fulfill some "All Code must be documented" rule.

The real solution is to suppress the 1591 warning and only add comments to code that does something non-obvious or critical to pay attention to. In the case of the above example class, the best I can come up with is below.

/// <summary>
/// This class wraps a Foo value, captured
/// from when the operation was started.
///
/// Operations that need to capture additional values
/// should derive from this to add their own additional
/// values.
/// </summary>
public class FooWrapper : IDisposable
{
    /// <summary>
    /// The Foo that was wrapped at the beginning of the operation.
    /// Changes to the Foo value in the holder class do not change this value.
    /// </summary>
    public bool Foo { get; }

    public FooWrapper()
    {

    }

    public FooWrapper(bool foo)
    {
        Foo = foo;
    }

    /// <summary>
    /// This class implements IDisposable to allow
    /// derived classes to capture values that need to be
    /// disposed when the operation is finished.
    /// </summary>
    protected virtual void Dispose(bool disposing)
    {
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}

Now, the comments convey useful information: We learn the intent of the class - that's something not obvious from the code. Though arguably, this class should now be called InitialOperationState or something like that. It also explains why/when to create subclasses for it. The comment on the property now explains something about the purpose, rather than just reiterating the code in prose. And finally, the Dispose(bool) method explains why it's there. The constructors and Dispose() methods do not need any comments - they don't do anything worth commenting.

And because I suppressed 1591, the compiler is happy as well.
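
For reference, the warning can be switched off for the whole project (add 1591 to the "Suppress warnings" list in the project's Build settings, which maps to the NoWarn property), or locally in code if you only want it off for a specific file:

// Suppress "Missing XML comment for publicly visible type or member"
// just for the types in this file:
#pragma warning disable 1591
public class FooWrapper
{
    public bool Foo { get; }
}
#pragma warning restore 1591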

Accessing LDAP Directory Services in .NET Core

The .NET Framework has had support for LDAP through the System.DirectoryServices Namespaces since forever. This has been a P/Invoke into wldap32.dll, which limited the ability for developers to troubleshoot issues and wasn't platform-independent. With the advent of .NET Core and the desire to run applications on Linux or macOS, the lack of LDAP Support has been an issue.

In the Java world, it's normal to have fully managed libraries in lieu of platform-limited wrappers, and LDAP is no exception. These days, the Apache Directory LDAP API™ looks like the go-to, but way back in the day, Novell also had an LDAP Client. This was eventually donated to the OpenLDAP project and lives in the JLDAP tree, although development has long since stopped. Novell also used to own Mono, and during that time they made a C# conversion of their LDAP Client. The code was clearly run through an automated Java-to-C# converter, but it offered a fully managed way to access LDAP.

While that C# code had lain dormant since the initial release in 2006, .NET Core offered a new incentive to revisit it. dsbenghe made a conversion of the code to support .NET Standard 1.3/2.0, which lives at https://github.com/dsbenghe/Novell.Directory.Ldap.NETStandard and is available on Nuget as Novell.Directory.Ldap.NETStandard.

Over the past couple of weeks, I've made some contributions as well, mainly to add support for SASL Authentication, which is available since Version 3.0.0-beta4. At this time, only the CRAM-MD5, DIGEST-MD5 and PLAIN mechanisms are available, but this offers the foundation to connect to a wider range of directories in case Simple LDAP Bind isn't an option.

An example of how to connect to an LDAP Directory (in this case, Active Directory) using DIGEST-MD5:

var ADHost = "mydc.example.com";
var saslRequest = new SaslDigestMd5Request("Username", "Password", "Domain", ADHost);

using (var conn = new LdapConnection())
{
    try
    {
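        // Connect unencrypted on port 389 first, then upgrade the connection with
        // StartTLS before sending any credentials via the SASL bind.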
        conn.Connect(ADHost, 389);
        conn.StartTls();
        conn.Bind(saslRequest);
        Console.WriteLine($"[{conn.AuthenticationMethod}] {conn.AuthenticationDn}");
    }
    finally
    {
        if (conn.Tls)
        {
            conn.StopTls();
        }
    }
}

Now, whether this is preferable over simple bind is up for discussion - the fact that DIGEST-MD5 requires the domain controller to store the password with reversible encryption is certainly a potential issue. But on the other hand, if you cannot guarantee the security of the transport, DIGEST-MD5 at least means your password will never have to be sent over the wire.

Ultimately, support for the SASL EXTERNAL mechanism with Client Certificates and support for Kerberos will offer modern security/authentication mechanisms. But the bottom line is that there is now a 100% managed LDAP Client for .net that's in active development. One that is supposed to support any LDAP Server instead of focusing mainly on Active Directory, but one that will offer first class Active Directory support as well. For Stack Overflow Enterprise, we made first class LDAP Authentication support a big goal for the future. We want to support as many real-world environments as possible, and we want everything to work on .NET Core as well. There's still plenty of work to do, but I'm happy that this project exists.

PicSol – a .net Nonogram/Picross Solver Library

Nonograms - also known as Griddlers, Picture Crosswords, or Picross - are pretty cool puzzles, kind of like a more visual Crossword puzzle or Sudoku. Of all the games on my New 2DS XL, Mario's Picross and the Picross e series are near the top of my Activity Log (beaten only by Smash Bros).

I got curious about algorithmic solutions to those Nonograms, which seemed deceptively easy but is actually an NP-complete problem. When trying to solve a Nonogram, I can often only fill in one or a few cells of a group, which then leads to another cell that can be filled in (or X-ed out), and step by step, cell by cell, I solve the Nonogram. Now, that assumes that the Nonogram is properly designed - if that's the case, then there is always at least one cell that either must definitely be filled or must definitely be empty.

All of Jupiter's games are well designed - even the most tricky ones (with a bunch of 1's and 2's and no big numbers) always follow the mantra of "there's always at least one cell that has a definitive solution". There are a lot of other games on the market (Steam returns about 15 games when searching for Picross or Nonogram), and some are not well designed and actually require guessing.

I ended up (after a bunch of googling approaches and looking at other existing solvers) with a solution that's mostly brute force: generate all possibilities for each row and column, then eliminate those that can't be correct, rinse and repeat until there's only one possibility left for each row and column, or until we determine that the Nonogram is actually unsolvable. There are some shortcuts we can take, e.g., when a row/column is empty, completely filled, or filled except for single gaps between the groups.
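
To illustrate the core "generate and eliminate" step on a single row or column - this is not the actual PicSol code, just a rough sketch of the idea, and Cell, Arrangements and Deduce are made-up names for this example:

using System;
using System.Collections.Generic;
using System.Linq;

public enum Cell { Unknown, Filled, Empty }

public static class LineSolver
{
    // All ways the hint groups can be placed in a line of the given length.
    public static IEnumerable<bool[]> Arrangements(int[] hints, int length)
    {
        if (hints.Length == 0)
        {
            yield return new bool[length]; // nothing to place, the line stays empty
            yield break;
        }

        // Minimum space the remaining groups (plus separating gaps) still need.
        int reserved = hints.Skip(1).Sum() + hints.Length - 1;

        for (int start = 0; start + hints[0] + reserved <= length; start++)
        {
            int gap = hints.Length > 1 ? 1 : 0;
            foreach (var tail in Arrangements(hints.Skip(1).ToArray(), length - start - hints[0] - gap))
            {
                var line = new bool[length];
                for (int i = 0; i < hints[0]; i++) line[start + i] = true;
                for (int i = 0; i < tail.Length; i++) line[start + hints[0] + gap + i] = tail[i];
                yield return line;
            }
        }
    }

    // Throw away arrangements that contradict the cells we already know, then mark
    // every cell that has the same value in all surviving arrangements as solved.
    public static Cell[] Deduce(int[] hints, Cell[] known)
    {
        var candidates = Arrangements(hints, known.Length)
            .Where(a => Enumerable.Range(0, known.Length).All(i =>
                known[i] == Cell.Unknown || (known[i] == Cell.Filled) == a[i]))
            .ToList();

        if (candidates.Count == 0)
            throw new InvalidOperationException("No valid arrangement left - the Nonogram is unsolvable.");

        var result = (Cell[])known.Clone();
        for (int i = 0; i < result.Length; i++)
        {
            if (result[i] != Cell.Unknown) continue;
            if (candidates.All(c => c[i])) result[i] = Cell.Filled;
            else if (candidates.All(c => !c[i])) result[i] = Cell.Empty;
        }
        return result;
    }
}

For example, Deduce(new[] { 3 }, new Cell[5]) immediately marks the middle cell as Filled - the classic first deduction - and the solver keeps running this over every row and column until nothing changes anymore (or nothing is Unknown anymore).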

I've created PicSol, a library for .net Standard 2.0 and .net Framework 4.0 (or newer), available on Nuget, which offers a solver for Nonograms.

Check out the README for information on how to use it, or look at the Console project in the GitHub repository.



Using .net Framework sources for better debugging

Over the last couple of weeks, we've been working on changes to our SAML 2.0 Authentication on Stack Overflow Enterprise. This was required to better support scenarios around signing and encrypting SAML Requests and Responses, and as such, a lot of the work was centered around XML Security, specifically the SignedXml and EncryptedXml classes.

Now, one thing I can say for sure about SAML 2.0 is that every Identity Provider implements it slightly differently, causing XML Validation errors or CryptographicExceptions somewhere deep in the code. So, how can we properly debug and fix this?

The first option is to Enable Symbol Server support in Visual Studio. This gives you .pdbs for .net Framework code, but because the Framework is compiled in Release mode, some code is inlined or otherwise rewritten to no longer match the exact source, which makes following variables and even call stacks really hard the deeper you go.

Another option is to check out the .net Framework Reference Source, also available on the easy to remember http://sourceof.net. This is the actual source code of the .net Framework, which at least allows reading through it to see what's actually going on, until you hit any native/external code. You can even download the sources, which not only lets you view them in Visual Studio, but also lets you compare implementations across Framework versions to see if anything changed. (The code on the website is always only for the latest Framework, which is 4.7.1 at the time of writing. If I need to see how something was implemented in 4.6.2, I need to download the sources.)

Another thing that we can do with the reference sources is to put them into our project, change namespaces to avoid ambiguity, and then use our private copies of the implementation.
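
In a hypothetical, condensed form (the namespace and class names below are made up for illustration - in reality the copy spans a lot of files and supporting code):

// Consuming code aliases the private copy, so switching back to the real
// System.Security.Cryptography.Xml.SignedXml later is a one-line change:
using SignedXml = MyApp.RefSrc.Security.Cryptography.Xml.SignedXml;

// Copied from the Reference Source, with the namespace renamed so it
// cannot clash with the framework type:
namespace MyApp.RefSrc.Security.Cryptography.Xml
{
    public class SignedXml
    {
        // ... copied implementation, plus whatever internal helpers it drags along ...
    }
}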

This is a huge help for several reasons:

  • We can step through the actual code - no reliance on compiler symbols or weird abstractions.
  • Since we can compile in Debug mode, we don't have to worry about optimizations making the debug experience much harder.
  • Breakpoints work properly, including conditional ones.
  • If we have a hypothesis about a cause of or fix for an issue, we can make source code changes to verify.

Now, there are two problems with Framework Sources. First off, because Framework code can refer to protected and internal methods, we might have to either copy a massive amount of supporting code, or implement workarounds to call those methods that are inaccessible to us (there are 79 files in my private copy of SignedXml/EncryptedXml). But the real showstopper is that the license doesn't allow us to ship Framework code, as it's only licensed for reference use. So if I found a way to fix an issue, I need to see how I can make this work on our side of the code, using the actual .net Framework classes.

Now, if we really don't find a way to solve an issue without modifying Framework code, a possible option is the .NET Core Libraries (CoreFX) sources, because that code is MIT licensed. It's a subset of the .net Framework, but due to the license, anything that's there can be used, modified, and shipped by us. This is a bit of a last resort, but can be preferable to worse workarounds. I cannot overstate how awesome it is that Microsoft releases so much code under such a permissive license. It not only makes our life so much easier, but in turn it benefits our customers by providing a better product.

In the end, we could resolve all the issues we've seen without having to modify Framework code once we understood exactly where (and why!) something was failing, and our SAML 2.0 implementation got a whole lot better because of the availability of the source code.

Simplexcel 2.0.5

It's been a few months since I released Simplexcel 2.0.0, which was a major change in that it added .net Standard support and can be used on .net Core, incl. ASP.net Core.

Since then, there have been a few further feature updates:

  • Add Worksheet.Populate<T> method to fill a sheet with data (see the sketch after this list). Caveats: it does not look at inherited members and doesn't look at complex types.
  • Also add a static Worksheet.FromData<T> method to create and populate the sheet in one call.
  • Support for freezing panes. Right now, this is being kept simple: call either Worksheet.FreezeTopRow or Worksheet.FreezeLeftColumn to freeze either the first row (1) or the leftmost column (A).
  • If a Stream is not seekable (e.g., HttpContext.Response.OutputStream), Simplexcel automatically creates a temporary MemoryStream as an intermediate.
  • Add Cell.FromObject to make Cell creation easier by guessing the correct type
  • Support DateTime cells
  • Add support for manual page breaks. Call Worksheet.InsertManualPageBreakAfterRow or Worksheet.InsertManualPageBreakAfterColumn with either the zero-based index of the row/column after which to create the break, or with a cell address (e.g., B5) to create the break below or to the left of that cell.
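
A rough sketch of how that fits together - the member names are the ones listed above, but I'm paraphrasing the exact signatures, so check the README for the real ones:

using System.Collections.Generic;
using Simplexcel;

// Hypothetical POCO - public properties become columns.
public class OrderRow
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

public static class ExportExample
{
    public static void Export(IEnumerable<OrderRow> orders)
    {
        // Create and populate the sheet in one call; inherited members
        // and complex types are ignored, as noted above.
        var sheet = Worksheet.FromData("Orders", orders);

        // Keep the header row visible while scrolling.
        sheet.FreezeTopRow();

        var workbook = new Workbook();
        workbook.Add(sheet);
        workbook.Save("orders.xlsx");
    }
}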

Simplexcel is available on Nuget, and the source is on GitHub.

How a larger girth helps avoid fires – a cautionary tale about power cables

Here are three power cables. Do you see what's different between them? Hint: One of them is a serious fire hazard.

I don't know about you, but I have a box full of computer cables that I amassed over the years, and whenever I need a cable, I grab one from the box. There are plenty of power cables in that box, and I never thought twice about which one to use for a PC. Until one episode about two years back. The PC that I was using was a real high end killer machine - I don't remember the exact specs, but I know that it had a $1000 Intel CPU, so I believe it was a Core i7 Extreme, paired with a high end Geforce card (I believe a GTX 660 or 680). I was playing a game when suddenly I heard a popping noise and saw sparks falling to the ground. Like, literally sparks. At first I thought that the power supply had blown up and tried a new one. After some more trial and error, we finally found out that it was the power cable that had melted and sparked.

I had never seen that happen before. I knew that the super high end power supplies had a different connector (IEC 60320 C19 instead of C13) - but I didn't think that there was any difference for regular power supplies.

Turns out that the thickness of the wires inside the cable matters a lot. This makes sense: Electricity going through a wire heats the wire up - the more power, the warmer it gets. If the wire isn't thick enough, it will literally melt and can then cause a short, or like in my case, sparks (and potentially a fire). One of the standards used for wire thickness is called the American wire gauge, or AWG for short - you may have seen this used for speaker wire. A cable that you buy will have a number - like 18 AWG - which describes the thickness. Lower numbers are thicker, so a 14 AWG wire is thicker than an 18 AWG wire (do note that there is a difference between a wire and a cable - a cable is one or more wires plus insulation and connectors).

In the above picture, there are 14, 16 and 18 AWG cables with C13 connectors shown. Monitors tend to ship with 18 AWG cables, which is why I have a bunch of them. But 18 AWG power cables are not suitable for powerful PCs. They might be suitable for lower end PCs (that can safely run on a 450W or less power supply), but even a single 95W CPU and 8-Pin powered Graphics Card (like a GTX 1080) might draw too much power for the cable - a fire hazard waiting to happen. The cables will have their gauge written on them, or etched (which is harder to read).

Now, before you go and buy a bunch of 14 AWG power cables, do note that the thicker a wire is, the stiffer it is. 14 AWG cables are generally very stiff, so if the PC is close to a wall or the cable needs to make a bend for another reason, you might be putting a lot of force on the power supply connector. In general, 16 AWG should be perfectly fine to at least 850W - possibly more.
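
As a rough sanity check (back-of-the-envelope numbers, not electrician's advice), what actually matters is the current through the cable, and that's easy to estimate from the wattage:

using System;

class PowerCordEstimate
{
    static void Main()
    {
        // Worst-case draw from the wall for a fully loaded PSU, assuming roughly
        // 90% efficiency and 115V mains (at 230V, the current is halved).
        double psuLoadWatts = 850;
        double efficiency = 0.90;
        double mainsVolts = 115;

        double amps = psuLoadWatts / efficiency / mainsVolts;
        Console.WriteLine($"{psuLoadWatts}W load ~ {amps:F1}A at {mainsVolts}V");
        // Prints roughly 8.2A - compare that to the rating printed on the cable
        // (18 AWG cords are commonly rated around 10A, 16 AWG around 13A).
    }
}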

The strife for a great whitebox server case

Update 2017-07-25: I found a case, see at the bottom.

My home setup is a bit of a mess. That's mainly because I haven't properly planned out my needs, and now I have a Simple File Server that doesn't accommodate my future growth, an old server to run VMs on, and some random assortment of hardware to do backups on.

So, I'm now making a list of my actual needs to build one new server to rule them all, sometime in 2018. The list of needs is fairly short:

  • Enough CPU Power to run about 6 VMs
  • Space for an ATX motherboard, to not limit options even if I end up with a Micro ATX board
  • ECC RAM
  • Enough disk space for my stuff
  • Redundancy/Fault Tolerance for my disks
  • Ability to do proper backups, both to an on-site and an off-site medium
  • Low Energy Use

Most of these requirements are fairly straightforward: For the CPU, a Xeon D-1541 (e.g., on a Supermicro X10SDV-TLN4F-O) or a Ryzen 7 PRO 1700 will do fine. For the hard drives, using my existing WD Red 3.5" drives gives me the storage. After considering RAID-5, I'm gonna pick up an LSI Logic SAS 9211-8I controller to do RAID 1E instead, with RAID 10 being a future option.

The real question though is: Where to put all that stuff? That led me down the rabbit hole of finding a server case. The needs seemed simple:

  • Space for at least 4x 3.5" drives (ideally 8) and 2x 2.5" drives (ideally 4)
  • Power Supply on top, so I don't have to worry about overheating if putting the PC on the floor
  • Don't look like crap. If possible, no Window, no lit fans, not designed like 1960's Russian military hardware
  • Absolutely no tempered glass. If I can't avoid a window, it needs to be plastic/plexiglass.
  • Want: Ability to hot swap at least some of the drives, so some backplane
  • Ideally $150 or less

Now, the "don't look like crap" part is, of course, highly subjective. Still, I'd definitely prefer the look of a Corsair Carbide 100R over their Graphite 780T. The power supply positioning changed from the top to the bottom in recent years. This is because a modern CPU and GPU produce a lot of heat, so the old way of "have the PSU suck out the heat" no longer works well. Also, water cooling isn't super-niche anymore, so radiator space is needed.

I'd like to hotswap drives, so one of my ideas was to look at some rackmountable case, but in that price range, there isn't much. I found the Norco RPC-4308, which would be pretty awesome, if not for a small detail: The power connector on the SATA Backplane is a 4-Pin Molex connector. Now, while there is a problem with Molex to SATA Power Adapters catching fire, this is not a concern here as the power is properly routed through the backplane. No, my concern is that Molex Power is not SATA compliant - SATA Power is a 15-Pin connector.

Now, the fact that there are 3 pins each for 5V and 12V isn't so much a problem (that's more a side effect of how thin the individual pins are, and of getting enough current across them). The problem is rather that some parts are completely missing: There's no 3.3V power, no staggered spinup, and no Power Disable with a Molex adapter. Arguably, 3.3V isn't needed by most drives, and Power Disable is almost an anti-feature outside the data center. Still, the question is: Why invest in a system that isn't fully compliant?

I haven't seen any other rackmount cases with hotswap trays that fit the price range. There is a tower case - the Silverstone CS380 - that looks awesome, but it also suffers from the Molex power. Next up was looking at 5.25" cages that hold up to five 3.5" drives. There are some nicely priced and not too shabby looking ones out there (e.g., Rosewill's RSV-SATA-Cage-34), but once again, buyer beware: Molex power, so that's a no for me. I am currently looking at Silverstone's FS303, FS304 or FS305. I'm not sure if putting five 3.5" drives in three 5.25" slots is a bit too closely packed, even with the low-power WD Red drives. But even ignoring the FS305, I could get six drives in four slots, or four drives in three slots, so that's pretty good.

This now leads to the next problem: Cases with 5.25" slots are becoming rarer and rarer. This makes sense, since many people don't even have optical drives anymore, and those that do only need one bay. I need at least four, better five or six. So, how many PC Cases are there that...

  • Have four to six 5.25" bays
  • Have the power supply on top
  • Don't look like crap
  • Don't cost more than about $150
  • Can fit an ATX mainboard

Spoiler warning: Almost none. I spent quite a bit of time looking through the offerings on Amazon and Newegg and on many manufacturers' websites, and it seems that modern-day gamer cases and really cheap mini tower cases have completely replaced everything else on the market. Now, there are a few cases for Mini ITX boards that are interesting, like the Silverstone DS380, which seems like a popular NAS case these days. Still, my goal is to not compromise unless I really have to.

I'm still researching, but here's my current shortlist:

  • Lian Li PC-8N - discontinued, but still available on Newegg for about $100. 4x 5.25" bays, PSU on top
  • Antec NSK4100 - discontinued, but still available on Newegg for about $50. 3x 5.25" bays, PSU on top
  • Corsair Carbide 200R - about $65, my choice for my own PC, 3x 5.25" bay, PSU at bottom
  • Rosewill Legacy QT01 - about $100, 3x 5.25" bay, bottom PSU
  • Fractal Design R5 - about $120, gorgeous case, but only 2x 5.25" bays, so I'd have to seriously consider if I really want hotswap
  • Cooler Master N400 - about $60, only 2x 5.25" bays and bottom PSU, but looks pretty nice, like a workstation
  • Cooler Master CMP350 - about $85, 4x 5.25" bays, top mounted PSU, incl. 500W PSU, seems discontinued
  • APEVIA X-Cruiser3 - about $70, 5x 5.25" bays(!), and the design should be good for some social media points
  • Buying something used - especially old tower servers or workstations. Don't really want to do that, I've learned that name-brand complete systems usually mean some compromises in case design that I don't like

If I go with a case that has the PSU at the bottom, I'd have to consider a PSU that has the fan in the back or can be mounted with the fan pointing into the case. There aren't many PSUs left that have the fan in the back; one option is the Antec EA-380D Green (which has 5x SATA connectors).

It definitely seems harder to build a whitebox server these days than it used to be. Sure, the components are cheaper and more powerful than ever, but it seems that cases have stopped serving this market. I can see why people would rather buy a Synology NAS, or get some old rackmount server for cheap (Dell's R720 should really come down in price now that thousands are being replaced), or don't care about hotswapping, but still, it feels like the PC case market has regressed since the days when the legendary Chieftec Dragon (also sold by Antec under the SX name) was every enthusiast's choice.

Maybe it's indeed a sign of the times, where the real innovation happens in the Mini-ITX and gaming spaces, while everything else becomes a specialized device offered by someone.

Update 2017-07-25: I found a Thermaltake Urban S41, which hits most of the things I want. It looks nice and clean, it has 4x 5.25" bays, 5x internal 3.5" bays, and even a temporary hotswap bay on top. There is plenty of cooling, with a 200mm fan on top, 120mm fans in the front and back, and an optional 120mm fan at the bottom. The power supply is mounted at the bottom, but the case actually has feet that elevate it quite a bit above the floor. Of course, like all nice tower server cases, it is discontinued, but Amazon still had a few for $100.

I'll add an ICY DOCK FatCage MB153SP-B, which houses 3x 3.5" SATA drives in 2x 5.25" slots. I might add another one of those, but I'm also seriously considering 2.5" Seagate BarraCuda drives. They go up to 5 TB on 2.5" (at 15mm height), for a similar price as 3.5" IronWolf/WD Red. I'm not sure if using a non-NAS drive is a good idea, but then, vibration/heat/power usage isn't really a concern with these drives. In that case, I'd likely use an ICY DOCK ToughArmor MB994SP-4S for 4x 2.5" in 1x 5.25", but it'll be a while before I need to think about that. Who knows, maybe by then there will be a 2.5" Seagate IronWolf, or a 2.5" WD Red bigger than 1 TB (I currently run two of their WD10JFCX in a RAID-1).

To Verb or Not To Verb in Adventure Games

A while ago I put up a post showcasing adventure game GUIs, without really going into much detail about them. But if you want to make your own adventure game, one of the first questions is how you want to control it. And that means deciding how many verbs there should be. If you ask "old-school" gamers, you will hear a lot of complaints that modern games are "dumbed down", while game designers talk about "streamlining the experience" - both positions have some truth to them, because it is important to differentiate between complexity and depth.

Let me give a non-adventure-game example: the game of Chess. The game isn't terribly complex - there are only 6 different pieces, and only a few special rules. However, the game possesses great depth due to the many different options to play it. The game of Go is even simpler, but comparable to Chess in its depth.

Simple/Complex describes the number of rules and actions available, while shallow/deep describes the combinations you can achieve with them throughout the game. Which brings us back to adventure games. Have a look at Maniac Mansion:

There are fifteen verbs available. If you play the game, you will notice that you use verbs like "Use" and "Give" quite a few times, while "Fix" is used possibly only once or twice during a play-through, if at all. There is complexity, but do "Unlock Door with Key" or "Fix Phone with Tube" really add more depth than "Use Key on Door" and "Use Tube on Phone"?

I'd like to quote Goldmund from a thread on the AGS Wiki:

I click "use" on a furnace and I have no idea whether the protagonist will open it, push it, sit on it, piss on it, try to eat it... Of course, there are GUIs with more detailed actions, but still it's nothing compared to the Richness of Interactive Fiction. In IF you really have to think, not just click everywhere with every item from your inventory. The solution could lie in the text input, like it was done in Police Quest II

The problem with that is that it's not just a matter of thinking but also a matter of creating enough content. Having a lot of different verbs - or even the mentioned text parser - means that the game needs to have a lot of responses for invalid actions, or risk boring the audience. If I can "kick the door", I should also be able to "kick the mailbox", "kick the window", "kick the man" - and get a better response than "I can't kick that". Otherwise, you add complexity, but not any perceivable depth, and the game is back in "guess the parser" mode.

LucasArts decided to trim down the verb list to nine in the nineties - then even changed the original Monkey Island from twelve Verbs on the Amiga to nine Verbs on DOS (removing Walk To, Turn On and Turn Off).

Removing Verbs removes complexity, but it doesn't have to mean that it removes depth. Depth is created by meaningful interactions of the verbs you have. This means that you should create a lot of dialogue - if I push something I can't push, having a more specialized message than "I can't push that" goes a long way, but that's still not actual depth. Actual depth stems from the ways I can solve the game. Do I have to solve the puzzles in order or can I pick which ones I solve when? And are there multiple solutions? Can I Use golfclub on Man to solve the puzzle by force, while also having Give golfclub to Man in order to bribe him as an alternative option?

A lot of games these days have a simple two verb system - "Interact" and "Look".

These games work nicely with a mouse but also on a tablet (where "Look" is usually a long tap). A lot of the puzzles are inventory or dialogue puzzles, which may make these games more "realistic" (they mirror real-world problem solving more closely), but they are also often shallower. Often, there is only one path through a dialogue tree, or one inventory item that works. I can use hammer on nail, but usually not use screwdriver on nail or use book on nail - even though these are valid real-world options in a pinch. And for dialogues, often there are only two outcomes, "fail" and "pass". The bouncer in Indiana Jones and the Fate of Atlantis is one exception that I can think of, where dialogue can lead to him letting you in, him fighting with you, or him dismissing you.

In the end, it's important to strike a balance between usability, immersion, and design complexity. Especially if you add translations and voice acting, having more responses and possible solutions increases the required time and money, just to create content players may never see. On the other hand, having more variety and truly different solutions makes the game feel a lot more alive and higher quality.

And that's one of the reasons I still think that Indiana Jones and the Fate of Atlantis is the perfect Point and Click Adventure.
