TypeScript in the Wild

When I previously used JavaScript, the grand debate was which JS utility library to use: jQuery vs. mootools vs. YUI vs. dojo. While I’m glad this tradition of grand debates has continued (Angular vs. React vs. Vue), the community has also quickly coalesced around the biggest improvement to JavaScript in the last decade: TypeScript.

In this post, I want to talk about my experience migrating an existing JavaScript codebase to TypeScript. But first, a quick digression on why I love TypeScript, and how TypeScript is the culmination of a number of great ideas in PL.

The first great idea is JavaScript as a compilation target. We took baby steps with transpiled languages in the late aughts with CoffeeScript (which still looked mostly like JavaScript), but the state of the art has progressed such that in 2020 nearly any language (Python, Java, C++, Ruby) can be compiled to JavaScript, with near-native levels of performance to boot. Even pure JavaScript developers commonly use transpilers to work around uneven ES language support across browsers.

The second great idea is gradual typing. Previous attempts to bring types to JavaScript usually required rewriting everything in a new language, which meant a huge initial porting effort as well as losing access to the JavaScript library ecosystem. TypeScript took a different tack and was designed as a strictly compatible superset of JavaScript. While this limits what TypeScript can do as a language, it means you can easily convert an existing JavaScript codebase to TypeScript, incrementally gaining value as you continue to add type annotations.

The third great idea is developer tooling. The TypeScript support in VS Code and other IDEs is excellent, thanks to the TypeScript language server. The type annotations at DefinitelyTyped are frequently updated and cover all the popular packages I’ve used. The official TypeScript documentation is approachable and up-to-date. It’s clear that Microsoft wants TS to be useful for real-life working developers, and that effort has paid dividends in the form of a vibrant developer community.

Given how obviously better TypeScript is than JavaScript, perhaps its rapid ascent is unsurprising. I think it’s an incredible innovation not just for JavaScript, but for programming languages in general.

That said, TypeScript is a very complicated and powerful language, and you have to understand its limitations to use it correctly. Also, no matter the language, sometimes bad code is just bad code.

Let’s begin.

Implicit anys abound

TypeScript has a special any type that allows a variable to be assigned any value. Since this circumvents the point of a type system, use of any is strongly discouraged. The task of converting an existing JS codebase to TS essentially boils down to replacing all the any types with real types.

This can be daunting since every unannotated variable starts off, implicitly, as an any. This is the blessing and the curse of a gradually typed language like TypeScript.
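As a minimal illustration (the Customer interface and functions here are hypothetical), this is the kind of bug that slips through when a parameter is left as an implicit any:

interface Customer { email: string; }

function getEmail(user) { // noImplicitAny error: Parameter 'user' implicitly has an 'any' type.
    return user.emial; // typo, but without noImplicitAny it silently returns yet another any
}

function getEmailTyped(user: Customer) {
    return user.emial; // error: Property 'emial' does not exist on type 'Customer'.
}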

When we ran an initial type check with noImplicitAny enabled, there were more than ten thousand of these errors in our <100K line codebase. Granted, a lot of these were easy to fix, but we weren’t going to be able to drive this to zero any time soon. Meanwhile, without enforcement of noImplicitAny in our PR checks, more anys were sneaking into the codebase with every new commit.

The solution we came up with was a codemod that created a new type $FIXME = any and added an explicit $FIXME annotation wherever there was an implicit any. This was good for two reasons: it let us turn on noImplicitAny enforcement for new commits, and it made it clear whether an any had been added automatically by the codemod or by a person who had looked at the code and wasn’t sure of the type.
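In spirit, the codemod’s output looked something like this (the function below is hypothetical):

// Shared alias, defined once in the codebase:
type $FIXME = any;

// Before: `order` and `percent` were implicit anys, which fail noImplicitAny.
// After the codemod, the anys are explicit, greppable, and clearly machine-generated:
function applyDiscount(order: $FIXME, percent: $FIXME) {
    return order.amountCents * (1 - percent / 100);
}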

Actually adding type annotations

With noImplicitAny enabled, we still had to actually add types. We encouraged developers to add proper types when working in an area of the code, but it wasn’t yielding good results. A lot of core functions were still taking and returning any types, so all we’d achieved was making our implicit anys in the rest of the code explicit. We needed a better approach.

The more systematic method was to add types bottom-up in the dependency graph. We started by installing TypeScript annotations for all third-party packages (thank you, DefinitelyTyped!). Then we added types for our most common Mongoose models, since almost every meaningful operation involved querying the database. Next came model helpers and shared utility code, and after that, the most important pieces of business logic. Eventually, we had enough type coverage that we could reasonably expect new code not to contain additional any types.
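For the models, that looked roughly like the following sketch (a hypothetical Order model, using Mongoose’s documented TypeScript pattern of pairing an interface with a generic Schema):

import { Schema, model } from 'mongoose';

// The interface describes the shape of a document; the schema itself stays as it was.
interface OrderRecord {
    type: 'hotdog' | 'hamburger' | 'fries';
    customer: string;
    amountCents: number;
}

const orderSchema = new Schema<OrderRecord>({
    type: { type: String, required: true, enum: ['hotdog', 'hamburger', 'fries'] },
    customer: { type: String, required: true },
    amountCents: { type: Number, required: true },
});

// Queries against OrderModel now return typed documents instead of any.
export const OrderModel = model<OrderRecord>('Order', orderSchema);

Once the models were typed, those types flowed naturally into the helpers and business logic built on top of them.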

This was a huge milestone, and at this point we were reaping a lot of benefit from TypeScript.

Just because you can, doesn’t mean you should

Another set of problems we quickly ran into was objects with ambiguous, unclear types. Since it’s better to have a too-wide type than an incorrect type, we’d leave behind an explicit any and a TODO to indicate a complicated situation we weren’t sure about.

When digging into some of these complicated cases, I found some truly crazy code. As a hypothetical example, let’s suppose you’re working at a food truck with the following menu:

  • Hot dog, which can be topped with sauerkraut or peppers, and sauced with relish, ketchup, or mustard.
  • Single or double patty hamburger, which can be topped with cheese, lettuce, tomato, or onion, and sauced with ketchup or mayonnaise.
  • French fries, which can come with ketchup, mustard, or mayonnaise.

Starting with your homegrown ordering system written in JavaScript, an initial attempt at typing the Order API request object might look like this:

type Sauce = 'relish' | 'ketchup' | 'mustard' | 'mayonnaise';
type Meat = 'hotdog' | 'hamburger';
type Topping = 'cheese' | 'lettuce' | 'tomato' | 'onion';

interface Order {
    type: 'hotdog' | 'hamburger' | 'fries';
    customer: string;
    amountCents: number;
    params: {
        meat?: Meat | Meat[];
        sauerkraut?: boolean;
        peppers?: boolean;
        toppings?: Topping[];
        isDoubleCheeseBurger?: boolean;
        sauce?: Sauce | Sauce[];
        extraFries?: boolean;
        extraFriesParams?: {
            sauerkraut?: boolean;
            peppers?: boolean;
            sauce?: Sauce | Sauce[];
        }
    }
}

Looking at it, this type is difficult to use, and it still allows a lot of invalid combinations of items:

  • The toppings are modeled differently for hot dogs and hamburgers.
  • meat can be a single item, an array, or undefined.
  • sauce is modeled differently for standalone fries vs. extra fries.
  • isDoubleCheeseBurger could be set even when the order isn’t a double patty burger and cheese isn’t a topping.

Let’s apply TypeScript best practices and model this as a discriminated union instead:

type BaseOrder = {
    customer: string;
    amountCents: number;
}

type FriesSauce = 'ketchup' | 'mustard' | 'mayonnaise';
type ExtraFries = {
    params: {
        extraFries?: boolean;
        extraFriesParams?: {
            sauerkraut?: boolean;
            peppers?: boolean;
            sauce?: FriesSauce[];
        }
    }
}

type HotdogSauce = 'relish' | 'ketchup' | 'mustard';
type HotdogOrder = BaseOrder & ExtraFries & {
    type: 'hotdog';
    params: {
        meat?: 'hotdog';
        sauerkraut?: boolean;
        peppers?: boolean;
        sauce?: HotdogSauce | HotdogSauce[];
    }
}

type HamburgerSauce = 'ketchup' | 'mayonnaise';
type HamburgerTopping = 'cheese' | 'lettuce' | 'tomato' | 'onion';

type HamburgerOrder = BaseOrder & ExtraFries & {
    type: 'hamburger';
    params: {
        meat?: 'hamburger';
        toppings?: HamburgerTopping[];
        isDoubleCheeseBurger?: false;
        sauce?: HamburgerSauce | HamburgerSauce[];
    }
}

type DoubleHamburgerOrder = BaseOrder & ExtraFries & {
    type: 'hamburger';
    params: {
        meat?: ['hamburger', 'hamburger'];
        toppings?: HamburgerTopping[];
        isDoubleCheeseBurger: boolean;
        sauce?: HamburgerSauce | HamburgerSauce[];
    }
}


type FriesOrder = BaseOrder & {
    type: 'fries';
    params: {
        sauce?: FriesSauce | FriesSauce[];
    }
}

type Order = HotdogOrder | HamburgerOrder | DoubleHamburgerOrder | FriesOrder;

This is a lot better! We’re able to ensure the right combinations of meat, sauce, and toppings for each order, that isDoubleCheeseBurger is only set for double patty hamburgers, and we’ve prevented invalid orders like fries with extra fries.
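The payoff shows up at the call sites: checking the type discriminant narrows the whole order, so the compiler knows exactly which params are legal in each branch (a small hypothetical example):

function describeOrder(order: Order): string {
    switch (order.type) {
        case 'hotdog':
            // Narrowed to HotdogOrder: relish is a legal sauce here.
            return `Hot dog for ${order.customer}`;
        case 'hamburger':
            // Narrowed to HamburgerOrder | DoubleHamburgerOrder: toppings exist, relish does not.
            return `Hamburger with ${(order.params.toppings ?? []).join(', ')}`;
        case 'fries':
            // Narrowed to FriesOrder: there is no toppings property at all.
            return `Fries for ${order.customer}`;
    }
}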

However, there are still some major issues:

  • The inconsistent topping styles between hot dogs, hamburgers, and double cheeseburgers.
  • We were unable to unify the types for Fries and ExtraFries. Also, perhaps instead of ExtraFries, an order should contain multiple items.
  • All the Order params are optional, which means we annoyingly need to check and handle nulls on access.

At this point, we’ve gone about as far as we can within just the type system. There’s some tech debt to clean up in the business logic before we can further improve these definitions.

Also, note that even this innocuous set of types might surface undocumented behavior in our business logic! Perhaps the chef lets her friends order special off-menu items like hot dogs with cheese. We’ll need to track down these special cases and reconcile our types with the code.

You still need a data model

In the food truck example, we were actually fortunate in that we had a preexisting spec (the menu) which mostly described how the system should work. Sometimes, you aren’t even that lucky: you’ll be shown a MongoDB collection with records from different schema versions and plenty of mixed types, or a bunch of hand-coded backend routes without parameter validation.

TypeScript’s type checker is only useful insofar as the type accurately describes the data. If you’re dealing with untrusted input and can’t be sure about the object’s structure, you need to validate it first.

As an example, maybe in the past we used to serve gyros at our food truck. We’ve since removed them from the menu and deleted all the gyro-related code, but there are still old Order records in our DB with type: 'gyro'.

What happens when we write a quick script to analyze our order history?

const orders = await OrderModel.find({}) as Order[];

const sums: Record<Order["type"], number> = {
    hotdog: 0,
    hamburger: 0,
    fries: 0,
}

for (const order of orders) {
    sums[order.type] += 1;
}
console.log(sums); // Will see a 'gyro' property with invalid value 'NaN'!

TypeScript only runs at compile time, so the type system can’t catch this error. Unless we validate input at runtime, we can run into trouble.
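One lightweight defense is to check the discriminant at runtime before trusting it. A sketch, continuing the script above (the helper is my own):

const KNOWN_ORDER_TYPES = ['hotdog', 'hamburger', 'fries'] as const;

// A runtime check that doubles as a type guard. The static types already claim
// order.type is valid; this actually verifies that claim against the data.
function isKnownOrderType(type: unknown): type is Order['type'] {
    return typeof type === 'string' && (KNOWN_ORDER_TYPES as readonly string[]).includes(type);
}

for (const order of orders) {
    if (!isKnownOrderType(order.type)) {
        continue; // legacy records like 'gyro' are skipped instead of writing NaN into sums
    }
    sums[order.type] += 1;
}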

So, use database-side schema validation (supported by MySQL, PostgreSQL, MongoDB, etc), and use an RPC framework that helps with modeling and validation (OpenAPI, GRPC, etc). These techniques synergize wonderfully with TypeScript since you can often generate TS types from these schemas, resulting in a chain of strong typing from the client to the server to the DB and all the way back out again.
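As an illustration of that chain (my own sketch, not something from our codebase; zod is one of several libraries that work this way), you define the schema once and derive both the runtime validator and the static type from it:

import { z } from 'zod';

const FriesOrderSchema = z.object({
    type: z.literal('fries'),
    customer: z.string(),
    amountCents: z.number().int().nonnegative(),
    params: z.object({
        sauce: z.array(z.enum(['ketchup', 'mustard', 'mayonnaise'])).optional(),
    }),
});

// The static type is derived from the schema, so the two can never drift apart.
type FriesOrder = z.infer<typeof FriesOrderSchema>;

function parseFriesOrder(input: unknown): FriesOrder {
    // parse() throws on malformed input (say, a stray legacy record), so everything
    // downstream of this call can safely treat the result as a FriesOrder.
    return FriesOrderSchema.parse(input);
}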

Type assertions are any by another name

TypeScript lets you assert (or cast) the type of a variable using a type assertion. As a beginning TypeScript programmer, you will quickly encounter situations where you need to use type assertions, and they are discussed at length in the official docs and other online resources:

  • Since TypeScript types do not exist at runtime, use type guards to determine an object’s type based on its structure and assert that it is that type.
  • The ! non-null assertion operator narrows a union type by removing null|undefined.
  • When you know something the compiler doesn’t, e.g. you’ve enabled schema validation on your database and know all the records are well-formed.

The big mistake is using a type assertion when you aren’t 100% sure about the type. Type guards are fine, since they validate the object before asserting the type. I have a less favorable view of the ! operator: it performs no validation, appears in the same context as the very common and useful ? optional chaining operator, yet is really just a type assertion. If you’re sure the value is not null, ask yourself: could I fix this where the value was originally assigned? Or just do an explicit null check (using an assertion signature):

function assertDefined<T>(obj: T): asserts obj is NonNullable<T> {
    if (obj === undefined || obj === null) {
        throw new Error('Must not be a nullable value');
    }
}
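Used at a call site, the assertion satisfies the compiler and fails loudly if our assumption turns out to be wrong. A small sketch, reusing the food-truck Order type from earlier:

declare const orders: Order[]; // the food-truck orders from the earlier example

const aliceOrder = orders.find((o) => o.customer === 'Alice');
// aliceOrder is Order | undefined before the assertion...
assertDefined(aliceOrder);
// ...and Order afterwards. If Alice never ordered, we fail right here with a clear
// error instead of a mysterious undefined crash further downstream.
console.log(aliceOrder.amountCents);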

Raw type assertions need little further discussion: they’re a code smell, and should be minimized and isolated whenever possible.

As an aside, I also wish the TypeScript designers had just called it “cast” instead of “type assertion”, since in C-like languages (including JavaScript!) assert is a runtime error check, not a compile-time unchecked type conversion.

Accidentally widening inferred types

Type inference is another powerful language feature that requires some care. The lack of type inference was historically a major critique of Java, since you had to “tell it twice” when declaring and initializing a variable:

HashMap<String, HashMap<String, String>> mapping = new HashMap<String, HashMap<String, String>>();

Why do I need to say mapping is a HashMap<String, HashMap<String, String>> when it’s being assigned a value that’s a HashMap<String, HashMap<String, String>>? Java 7 gave us a half-measure with the diamond operator, but Java 10 finally gave us the var keyword which makes this a lot snappier:

var mapping = new HashMap<String, HashMap<String, String>>();

This works great! The type of this variable is unambiguous for both the compiler and the reader since it’s being assigned immediately.

TypeScript extends this even further and will automatically infer the types of function return values. This is nice since it means you can immediately get some type checking when starting from unannotated JavaScript:

function isEven(n) { // TypeScript infers that the return type is boolean
  return n % 2 === 0;
}

This is better than getting an implicit any return type! However, I think this practice should be discouraged when going full TypeScript. Imagine our function is later modified as follows:

function isEven(n) { // Return type is inferred as boolean|null
    if (typeof n !== 'number') {
        return null;
    }
    return n % 2 === 0;
}

Due to type inference, we accidentally widened the return type from boolean to boolean|null. This will start throwing type errors near all the isEven call sites when callers try to use the new, wider return value. We can still debug this, but it’d be easier if the compiler flagged the incorrect null return value instead:

function isEven(n): boolean {
    if (typeof n !== 'number') {
        return null; // Type 'null' is not assignable to type 'boolean'.
    }
    return n % 2 === 0;
}

Function signatures are a contract between the caller and the callee. By being explicit about the function’s intended behavior, it makes it clear whether a change in behavior is accidental or deliberate.

Roundup of other pitfalls

There are a few other miscellaneous things I’ve run into and want to briefly cover:

  • Objects can have extra properties beyond what’s specified in the type. This can bite you during serialization, since simply calling JSON.stringify will serialize these extra properties too. Pick out only the properties you want with lodash first (see the sketch after this list).
  • Although Record<string, string> looks very similar to a Map<String, String> from Java, you almost always want to use a Partial<Record<string, string>>. This makes it clear that undefined will be returned for keys that aren’t present (which is what will happen, whether TypeScript thinks so or not). Even better is using a union of string literals as keys, like in our order analysis script.
  • Beware the subtle differences when iterating the keys of string and numeric enums. Numeric enums contain both a forward and reverse mapping, while string enums just have the forward mapping.
  • I could go on about the numerous warts carried over from JavaScript, but that’s deserving of a separate post.
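Here’s a quick sketch of the serialization point from the first bullet (the response type is hypothetical; pick is lodash’s):

import { pick } from 'lodash';

interface PublicOrderSummary {
    customer: string;
    amountCents: number;
}

// Structural typing means `order` may carry extra fields (internal notes, database
// ids, ...) that JSON.stringify would happily serialize along with everything else.
function serializeSummary(order: PublicOrderSummary): string {
    const summary = pick(order, ['customer', 'amountCents']);
    return JSON.stringify(summary); // only the whitelisted properties survive
}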

Conclusion

This was a peek into my experience converting a decently large codebase from JavaScript to TypeScript. There’s a definite learning curve to TypeScript’s gradual, structural, algebraic type system, but it’s an incredibly powerful productivity improvement and I’m glad that Microsoft has invested so much in making it a success.

My thanks go out to great resources like the official TypeScript docs, the TypeScript Playground, and the Effective TypeScript book from O’Reilly, which goes into even more depth on advice like this.

I’m always interested to hear about others’ experiences with TypeScript. What do you like and not like about the language? Are there any stories you’d share in a post like this one? Let me know in the comments!

Thanks to my editors Calvin Huang and Steven Salka, who went on this TypeScript journey with me and taught me a lot along the way!

Posted by andrew in Software, 0 comments

On the importance of software testing

As the famous programmer Jean-Paul Sartre once put it, hell is other people’s code. This is what echoes through your head when you’re jolted awake at 2AM by PagerDuty, blaring about a Sev0 production outage. You trawl through the changelog to find the offending commit: a missing null check that results in an exception. You start rolling back the bad deploy, but as you sit there, illuminated by the glow of your laptop screen, you curse to yourself: how did a simple error like this make it all the way to production?

We’ve all been in escalation situations like this, and perhaps just as often, we’ve been the author of the offending change that caused the outage. During my time working on Hadoop, I’ve both written and fixed bugs like:

  • A new file format deserializer that would produce an empty result when reading a file written by the old serializer.
  • A rate limiter which would limit too aggressively by a factor of over 1000x.
  • A function that calculated how much data to flush to disk would, in almost every situation, not flush enough data.

These are obvious bugs that barely outrank the typical null pointer exception in sophistication, and they should have been caught by even the most basic degree of testing. Fortunately, most of these examples were caught during our test cycle, but they could otherwise easily have become Sev0 issues.

The case for testing is clear, but I’ve seen bug authors who never learn this lesson and (implicitly) refuse to write tests. Yes, there are times when skipping or deferring testing is acceptable. Yes, there are many nuanced arguments about the downsides of writing too many unit tests, the issues with mocking, and the uselessness of code coverage as a metric. But what really gets my goat is when a bug author’s simple apathy or lack of interest in testing results in a continuation of late-night pages, busted SLAs, and burned-out on-call engineers.

In this post, I present two case studies that illustrate our responsibility as software developers to deliver high-quality, production-ready artifacts for the consumers of our systems. In both of these studies, a catastrophic failure in a critical software system can be directly attributed to a lack of testing and poor quality assurance processes.

Therac-25

The Therac-25 was a medical radiation device used to treat cancer patients. It operated in two different treatment modes:

  • An electron mode which used an electron beam (beta radiation) to treat surface-level cancers.
  • An X-ray mode which turned that same electron beam into X-rays by increasing the current and pointing it at an X-ray target. This could be used to treat deeper tumors.

The Therac-25 was the latest machine in a series of radiotherapy machines. Previous models had hardware interlocks to prevent dangerous situations from happening, namely, operating the beam in high-current X-ray mode without the X-ray target in place.

However, the Therac-25 was the first to be entirely computer controlled. The manufacturer decided to depend entirely upon the control system to ensure that this situation would not occur, and removed the hardware interlocks.

This was a fatal mistake. Due to a race condition, it was possible for the operator to accidentally configure the machine in X-ray mode without the X-ray target in place, delivering 100X the intended amount of radiation. Patients suffered horrible burns and radiation sickness, with three ultimately dying as a result of their injuries.

AECL, the manufacturer of the Therac-25, initially did not believe the complaints and delayed investigating the issue. Even after AECL admitted the problem was real, the bug had to be independently reproduced by a hospital technician before the company was able to develop a software patch. This patch should have been the end of it, but it turned out that the Therac-25 had yet another bug that manifested in the same fatal error. Another patient was killed before the machine was ultimately recalled.

The root cause was directly pinned on poor software engineering practices. AECL did not have a formal software specification, test plan, or risk analysis for the Therac-25. Most of the coding was done by a single developer who simply carried forward the same code from the earlier Therac model with hardware interlocks. Furthermore, there was no independent or end-to-end testing at all; most testing happened internally on a hardware simulator.

There are a lot of resources to read more about the Therac-25. The original report on the Therac-25 by Nancy Leveson is great, as is her retrospective on the topic 30 years later.

Mars Climate Orbiter

The spaceflight business is a risky one. These projects are huge engineering efforts that involve hundreds of millions or billions of dollars invested over a timespan of multiple years, with many agencies, contractors, and subcontractors involved. Even after all that, there’s also a surprisingly high chance that the rocket carrying your payload blows up on the launchpad.

NASA awarded the $125 million Mars Climate Orbiter contract to Lockheed Martin. After four years (and 286 days in space), the Orbiter reached Mars and began a series of maneuvers for orbital insertion. However, the spacecraft entered Mars’ atmosphere much lower than expected and was destroyed.

The primary cause of failure was eventually found to be a software component that emitted calculations in Imperial units (pound-force seconds) while the invoker expected SI units (Newton-seconds), a 4.45x difference. Although it’s tempting to attribute the issue to this seemingly simple bug, NASA ultimately placed the blame on multiple concurrent failures within their own testing and systems engineering processes.

A choice quote from the IEEE Spectrum article on this topic, which is highly recommended:

Thomas Gavin, deputy director for space and earth science at NASA’s Jet Propulsion Laboratory, added: “A single error should not bring down a $125 million mission.”

Because of the rush to get the small forces model operational, the testing program had been abbreviated, Stephenson admitted. “Had we done end-to-end testing,” he stated at the press conference, “we believe this error would have been caught.” But the rushed and inadequate preparations left no time to do it right.

Other complaints about JPL go more directly to its existing style. One of Spectrum’s chief sources for this story blamed that style on “JPL’s process of ‘cowboy’ programming, and their insistence on using 30-year-old trajectory code that can neither be run, seen, or verified by anyone or anything external to JPL.” He went on: “Sure, someone at Lockheed made a small error. If JPL did real software configuration and control, the error never would have gotten by the door.” Other sources commented that this problem was particularly severe within the JPL navigation team, rather than being a JPL-wide complaint.

So, should I test my software?

The lesson here is not that we need to apply the same software development processes as NASA or medical equipment manufacturers. Waterfall-style software development went out of style for a good reason, and it’s probably not that big a deal if your REST microservice goes down occasionally.

What is notable is that both of these failures were directly attributed to a lack of testing. Testing is both necessary and important when working on a large software project. Without good tests and QA processes in place, it’s nigh impossible to reason about the correctness of the system as a whole. Forgoing testing leads to fragile products where even the simplest of bugs can result in catastrophic failure.

In a future post, I’ll dive more into the mechanics of software testing: the different types of tests, and how and when to apply them.

 

Special thanks to my wonderful editors Tiffany Chen, John Sherwood, and Michael Tao, who gave feedback on earlier drafts of this post.

Posted by andrew in Software, 0 comments

Bicycle touring post-mortem

San Francisco to San Diego was my first multi-day tour, and overall I’m very happy with how it went. I’ve done plenty of overnight bike tours to Half Moon Bay or Samuel P. Taylor, and I carried basically the same kit on the multi-day tour.

Here’s a breakdown of what went well and what I might do differently on my next tour. I’m really eager to do the northern section of this route (Seattle or Portland to San Francisco), perhaps next year.

Continue reading →

Posted by andrew in Travel, 1 comment

Riding the 101: Bicycle Touring Mega-Update

I spent about two weeks of my sabbatical riding from San Francisco to San Diego (Jul 31 – Aug 15). I got in the habit of posting end-of-day recaps to Facebook and Strava as I went, which really helped me reflect on what happened. I’m reposting all of them here as a mega-post.

Total distance: 622.9 miles

Total climbing: 24436 feet

Total riding time: 55.85 hours

Sunsets on the beach: whenever possible

Continue reading →

Posted by andrew in Travel, 0 comments

Blog refresh: WordPress

I’ve come full-circle. My very first websites, circa 2005, were built with a CMS (Joomla or WordPress). I started messing with custom themes and plugins (which is how I really learned to code), then drank deeply of the semantic web koolaid and started hand-coding everything in XHTML, CSS, and PHP. I migrated to a static site generator seeking simplicity and reduced hosting costs, and now umbrant.com is again powered by a full-fledged CMS: WordPress.

Continue reading →

Posted by andrew, 1 comment

The Next Generation of Apache Hadoop

Apache Hadoop turned ten this year. To celebrate, Karthik and I gave a talk at USENIX ATC ’16 about open problems to solve in Hadoop’s second decade. This was an opportunity to revisit our academic roots and get a new crop of graduate students interested in the real distributed systems problems we’re trying to solve in industry.

This is a huge topic and we only had a 25 minute talk slot, so we were pitching problems rather than solutions. However, we did have some ideas in our back pocket, and the hallway track and birds-of-a-feather we hosted afterwards led to a lot of good discussion.

Karthik and I split up the content thematically, which worked really well. I covered scalability, meaning sharded filesystems and federated resource management. Karthik addressed scheduling (unifying batch jobs and long-running services) and utilization (overprovisioning, preemption, isolation).

I’m hoping to give this talk again in longer form, since I’m proud of the content.

Slides: pptx

USENIX site with PDF slides and audio

Posted by andrew in Talks, 0 comments

Distributed testing

I gave a presentation titled Happier Developers and Happier Software through Distributed Testing at Apache Big Data 2016, which detailed how our distributed unit testing framework has decreased the runtime of Apache Hadoop’s unit test suite by 60x from 8.5 hours to about 8 minutes, and the substantial productivity improvements that are possible when developers can easily run and interact with the test suite.

The infrastructure is general enough to accommodate any software project. We wrote frontends for both C++/gtest and Java/Maven.

This effort started as a Cloudera hackathon project that Todd Lipcon and I worked on two years ago, and I’m very glad we got it across the line. Furthermore, it’s also open-source, and we’d love to see it rolled out to more projects.

Slides: pptx

Source-code: cloudera/dist_test

Posted by andrew in Talks, 0 comments

Windows Azure Storage

What makes this paper special is that it is one of the only published papers about a production cloud blobstore. The 800-pound gorilla in this space is Amazon S3, but I find Windows Azure Storage (WAS) the more interesting system, since it provides strong consistency and additional features like append, and serves as the backend for not just WAS Blobs but also WAS Tables (structured data access) and WAS Queues (message delivery). It also occupies a different design point than hash-partitioned blobstores like Swift and Rados.

This paper, “Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency” by Calder et al., was published at SOSP ’11.

Continue reading →

Posted by andrew in Reviews, 0 comments

Transparent encryption in HDFS

I went on a little European roadshow last month, presenting my recent work on transparent encryption in HDFS at Hadoop Summit Brussels and Strata Hadoop World London. I’ll also be giving the same talk this fall at Strata Hadoop World NYC, which will possibly be the biggest audience I’ve ever spoken in front of.

Slides: pptx

Video: Hadoop Summit Brussels (youtube)

If you have access to O’Reilly, there should be a higher quality video available there.

Posted by andrew in Talks, 0 comments

Mesos, Omega, Borg: A Survey

Google recently unveiled one of their crown jewels of system infrastructure: Borg, their cluster scheduler. This prompted me to re-read the Mesos and Omega papers, which deal with the same topic. I thought it’d be interesting to do a compare-and-contrast of these systems. Mesos gets credit for the groundbreaking idea of two-level scheduling, Omega improved upon this with an analogy from databases, and Borg can sort of be seen as the culmination of all these ideas.

Continue reading →

Posted by andrew in Reviews, 0 comments