Mope experiment

2020-11-27

A while back I was working with Flow and TS and wondered whether there wasn't an easier way of doing all of this. The typing offered by Flow and TS looks simple on the surface and looks nice in example projects, but before you know it you're entangled in large complex typing structures, unions, generics, and various levels of complexity.

tldr; I worked on a zero-config typing system for JS and I ended up abandoning it. You can find the failed experiment at: https://pvdz.github.io/mope/web/. The repl will load a random test case (from /tests) on page load. Scroll all the way down in this post if you just want more details on that repl.

Mono


Rule 1: all bindings are monomorphic.

I don't quite remember how I ended up on it but I started out with an approach where everything that could have a type was considered to be monomorphic. Surely, if you create a variable and assign it a value, the binding should keep that value's type throughout its lifetime?

While that's often true, and is certainly a pattern you can enforce, I quickly wrote some tests that clearly showed there was no way this model would hold in JS.

The core problem with enforcing monomorphism throughout was functions. In particular builtin functions, but userspace functions too.

Builtin functions are often either overloaded or accept any kind (or a set of kinds) of parameters. Additionally, their return value can be affected by the input. This problem is heavily exacerbated by objects.

Simple examples are String.prototype.split or Object.assign.
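
Both are polymorphic in ways a strictly monomorphic model can't express (my examples):

Code:
// String.prototype.split accepts a string or a regex as its separator
'a,b;c'.split(',');    // ['a', 'b;c']
'a,b;c'.split(/[,;]/); // ['a', 'b', 'c']

// Object.assign takes any number of objects and returns the (mutated) target,
// so its return type depends on all of its arguments
Object.assign({a: 1}, {b: 2}); // {a: 1, b: 2}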

But even userspace functions often have valid polymorphic use cases. The most prominent example I kept running into was the AST walker handler: a statement can be one of a couple of kinds, and all of these kinds have a different shape, so they are different types.
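
A hypothetical walker handler to illustrate:

Code:
// `node` has a different shape for every node type, so no single
// monomorphic type fits the `node` parameter
function visit(node) {
  switch (node.type) {
    case 'Identifier': return node.name;
    case 'Literal': return node.value;
    case 'BinaryExpression': return [visit(node.left), visit(node.right)];
  }
}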

It didn't take long before I realized a purely monomorphic system was a fool's errand. It may be possible to enforce it but it would be nigh useless.

Poly functions


Rule 2: Function input and output values can be polymorphic.

By allowing functions to be called and to return different types the whole approach felt a whole lot more viable. It certainly opened the door to a new range of problems but it also seemed to be the proper approach here.

So now something like this could pass:

Code:
function f(x) { return x; }
const a = f(1);
const b = f('foo');

Because a and b are still monomorphic; it's only the inputs and outputs of the function that differ. Inside any single call to f, the binding x would also be monomorphic. The system works!

This allowed me to move forward and implement a bunch of things that kept validating my approach.

Strict mode


Rule 3: operators should be monomorphic / strict

This felt like an obvious one to me. One way that typing leaks and subtle bugs get introduced is through coercion. Operators like == and >= allow various types on either side and will try to make them equal under the hood. This is less of a problem when the coercion doesn't make sense, but it's a lot more dangerous when the coercion might make sense, like 2 == '2'.

To this end Mope would require all operators to have the same type on either side.

Obviously this quickly introduced a bunch of problems.

The + operator allowed only two types of operand (number or string), and both sides had to be the same kind, but you wouldn't really know which one until the actual call. In an early model I would track the type of bindings and have some kind of "Plussable" pseudo-type for this purpose. In later models I took a meta runtime approach where types needed to be known by the time operators got meta-evalled, so it was less of a problem.
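
For example (my illustration of the rule):

Code:
function f(a, b) { return a + b; }
f(1, 2);     // ok: number + number
f('a', 'b'); // ok: string + string, in a separate meta call
f(1, 'b');   // violation: both sides of + must resolve to the same type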

The logical operators (&& and ||) return one of their operands, not necessarily a boolean. So the operands can't just be required to be booleans, but they did have to be the same type.
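
In other words (my example):

Code:
const a = 0 || 10;    // ok: both operands are numbers, so a is a number
const b = 0 || 'ten'; // violation: the operands must resolve to the same type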

Oh, and talking about strict mode: we will assume the module parsing goal, so all code is assumed to be "strict mode". For the most part this actually makes our life easier because implicit contexts are undefined and we don't need to worry about with (even though, ironically, the model could support that).

Rule 4: JS is assumed to be strict mode code

Conditions


Working on conditions, like those of if and while, it soon became obvious that while it may sound reasonable to require conditions to be booleans, in real code that is more often not the case than it is. Especially when it came to nullables. Not that my models knew what nullables were, but that's another topic.

So I quickly moved to a model where the rules were allowed to be broken in certain cases. The condition of an if would not need to be a strict boolean. The type would not propagate anywhere else, so it was more of a linting problem than an actual show stopper. If you want to pass objects to an if, I mean, okay?

Rule 5: rules are meant to be broken. Have a warning.

Screw the rules.

In the last model I moved away from throwing hard errors and instead allowed the model to be broken, emitting warnings for anything that might be a problem.

Some problems are bigger than others, and while I intended to differentiate by the severity of a violation, I never really got around to implementing that system properly. So everything is a warning now. But at least nothing ought to throw an error.

Models


At some point I started to call my approaches "models". My code, and especially my test cases, often have comments with stuff like "this did not work in model v3". I don't even really remember the early models or when I bailed on them.

I rewrote Mope from scratch at least twice. The rewrites were quite demotivating because they were triggered by the realization that the previous iteration had a fundamental flaw, and rewriting the whole thing was a lot of boring work.

In the last full rewrite I switched from static AST analysis to a more meta eval approach. Phase 1 does a simple AST init. Phase 2 generates pseudo code from the AST, almost like a byte code interpreter would. Phase 3 is a stack based meta eval engine that interprets the generated "code" from phase 2, which basically represents the original code. (Note: at no point is the input code actually evaluated.)
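
In hypothetical code, the pipeline has this shape (the names are mine, not the actual Mope API):

Code:
const ast = parse(source);      // phase 1: AST init (parsing itself is done by Tenko)
const ops = generate(ast);      // phase 2: pseudo "byte code" derived from the AST
const warnings = metaEval(ops); // phase 3: stack based meta eval of the generated ops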

Considering how much time I had spent on the project at that point, the last rewrite was finished quite quickly (like, within a week?), passing pretty much all the tests that were passing before. Amazed myself there.

Terminology


Each type is called a Tee. I know, how original. Each tee has a unique tid. This is a string that is either an incremental id or a hardcoded name based on the JS builtin model. I found this a very easy way to reason about this part of the model.

The beauty is that tids are immutable and global. This made a lot of sense and made it possible to globally reference builtins, knowing that a tid would always reference the same thing.

For example, the tid 'Number#toString' would always mean the tee that represents the builtin function Number.prototype.toString. These names are unique and there was no danger of collisions.
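
A minimal sketch of that allocation scheme (my interpretation):

Code:
let tidCounter = 0;
function freshTid() { return `T${++tidCounter}`; } // incremental ids for userland tees
const builtinTid = 'Number#toString';              // hardcoded, globally unique, immutable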

Rule 6: all builtins are immutable

Tee


Initially a Tee was one of a few possibilities. There were the primitives, objects, arrays, functions, plussables, and unknowns. By rule 1, a tee would immediately seal its type once it resolved to anything.

Having types be immutable and sealed once they resolve makes it very easy to build up a model. Each node in the AST is assigned a tid, representing an anonymous tee, and any node where it made sense got its own tee. Then a second phase would go through the code more semantically and merge tees together.

For example:

Code:
const a = 1 + 2

This AST would be assigned a bunch of tids (like 20, even if we'd only use 10). Mope would then walk the AST and merge certain static things together. This is the meta evaluation I'm talking about.
Code:
const
a // T2
= // T3
1 // T4
+ // T5
2 // T6

I mean, this happens in tree form (the AST) but the principle is the same. It would take T4, T5, and T6, restrict them to number or string, and merge them together. The result depended on what T4 and T6 resolved to. In this case they are plain numbers so the resulting tee is a number too.

The resulting type is then assigned to T5 (the + sign, which is an AST node).

Then it goes to the = assignment node and merges the left and right tees. For variable declarations this was an initialization so there was no real merging. But for regular assignments, the left would be merged with the right and the result assigned to the left.

Code:
let a = 10;
a = 1 + 2; // ok, a = number before and after

let b = 10;
b = 'foo'; // not okay, b was a number before and the assignment wants to merge it to a string

Initially, the above was really all it was doing. Pretty simple basic stuff, all things considered. Even transitivity is not that hard.

Code:
let a = 10;
let b = 20;
let c = a + b; // The system has assigned numbers to a and b so the merge results in a number
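
For reference, here is a minimal sketch of such a merge step, assuming tees are plain {tid, type} records (my reconstruction, not Mope's actual code):

Code:
function mergeTee(a, b) {
  if (!a.type) { a.type = b.type; return a; } // an unresolved tee adopts the other side
  if (!b.type) { b.type = a.type; return b; }
  if (a.type !== b.type) {
    console.warn(`cannot merge ${a.type} with ${b.type}`); // rule violation -> warning
  }
  return a; // both sides resolved to the same type; the tee stays sealed
}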

On the surface, that's what I wanted. But things are never easy.

Unknowns


What if a tee was unknown? In my early models I was under the assumption that not all bindings would be known ahead of time. And in fact, for functions that still wasn't the case in the last model (but the approach was different so it didn't matter).

Let's say you have a function with unknown params; you might still determine a bunch of things with simple static analysis.

Code:
function f(a, b) { // f=T1 a=T2 b=T3
const c = a * 2; // c=T4 T5 T6 T7 2=T8
const d = b + c; // d=T9 T10 T11 T12 T4
return d; // T13 T9
}
let n = f(1, 2); // n=T14, f(1,2)=T15

In this example, a and b are unknown but we do have a couple of hints inside the function that allow us to deduce their expected types.

A primitive literal can only ever be one type, so its tee immediately gets sealed. Above, T8 is always a number. Additionally, multiplication must operate on two numbers and returns a number, so all the tees in the first constant declaration become numbers. This is how c is determined to have to be a number (note: that is the actual tid, 'number').

In the second constant declaration the plus makes it harder to reason in absolute terms. But by then the c variable will already have been reasoned about. Since it's a number, the other operand must also be a number. The result of the operator is a number. And ultimately d becomes a number.

The function must therefore receive two tees that resolve to number and it returns a tee that also resolves to number.

Functions


Ultimately, functions turned out to cause the biggest problems. On the one hand they're fairly easy to reason about. On the other hand it's not as easy as it seems.

The inputs to a function are not just its parameters. It also receives a context. And if you think it ends there: the types in a function are also influenced by the closure it creates. Observe this (contrived) example:

Code:
function f(x) {
  let y = x;
  function g() {
    return y;
  }
  return g;
}
let a = f(1)();
let b = f('a')();

In this case, g gets called with the same parameters (none) and the same context (undefined) and still returns a different type because it depends on the closure. (You'll see lots of these kinds of test cases in the repo ;) so if you're interested in them have a look).

So when tracking a function call, the model had to know the actual call stack, since that governs the closures. To be more precise, it needed to know the "meta call stack": which types were passed along when calling each function.

In the example above, it's not relevant that the function was called with a 10 or a 'monkey'. Just the tids used for the params and context matter.

Digest


I needed to create a digest for function tees to solve this problem. But how do you properly construct this digest?

For primitives and builtins this is a no-brainer. But what about objects? Contexts are always objects (if not undefined) so there's no getting around it.

This raises the question: what is the minimal set of things to record in order to capture whether you've already computed this call on a meta level?

Not sure if this is clear so I'll try to illustrate it.

Code:
function f(a) {
  return a;
}
f(1);
f(2, 'foo');

The above has two calls to f, but our model could capture (and dedupe!) them under a single digest, something like T1{[number],undefined,global}. The digest records the following (there's a sketch after the list):

- the tee that is being called (the tee that represents f)
- the tees being passed as arguments
  - ignore any excess args, those that cannot map to a parameter, because they can't influence the execution of the function
  - pad missing args with their default or undefined, like JS would
  - have fun with "rest" parameters and spread args
- the tee of the context
  - we assume "strict mode", because modules, so implicit contexts are undefined
- the digests of all upper scopes
  - on a typing level, the "seed" for every scope is governed by the inputs of the function that creates it
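
A hypothetical helper shows how such a digest could be keyed purely on tids (my sketch; the actual format in Mope may differ):

Code:
function callDigest(funcTee, argTees, contextTee, scopeDigests) {
  const args = argTees.map(tee => tee.tid).join(','); // tids only, never concrete values
  return `${funcTee.tid}{[${args}],${contextTee.tid},${scopeDigests.join('|')}}`;
}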

The interesting property for the model is that we can cache the digest of a call. And if another call has a known digest then we would not need to meta eval the entire call but rather replay (meta meta eval?) the mutations that we've computed when meta evalling it the first time.

This isn't necessarily easier but ought to be faster. If we didn't do this then the number of digests could be infinite and the program might never end. However, by limiting the digest to types, the number of types within a program ought to be finite and so the set of function digests ought to be finite.

This does require recording all mutations to params, context, and scope, and replaying those mutations on the current inputs. This is kind of hard, kind of tricky (you also need to return the same type), and expensive, because, well, it's a lot of work to track everything.

Return values


The return type is one such problem. In my earlier models objects were immutable, but later I changed them to be forward updating in terms of their properties. This, however, meant that I had to start tracking instantiations, and I don't think I ever really recovered from that change.

Code:
function f(a) {
  return a;
}
f({}).foo = 1;
f({}).foo = 'a';

In this case f should take care not to return the same tee since in JS the two objects are distinct and receive different properties later on.

But then consider this example:

Code:
function f() { return {}; }
const a = f();
a.foo = 1;
const b = f();
b.foo = "str";

This would fail if the call to f returned the same tid. So some cloning is involved. And where do you stop? Because some things need to be cloned, but not everything.

I introduced some instantiation fences for this. I would track when, in meta eval order, an object was created. To determine whether an object had to be cloned when returned, I would check whether its "instantiation id" came before or after the start of the current call. This is a little harder than it sounds due to nesting and scoping and whatnot.
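
In sketch form (my reconstruction of the idea):

Code:
let instantiationCounter = 0;
function createObjectTee(tid) {
  // every object tee records the meta eval "time" at which it was created
  return { tid, type: 'object', instantiatedAt: ++instantiationCounter };
}
function mustCloneOnReturn(tee, callStartedAt) {
  // created during this call -> returning it would leak a fresh instance, so clone;
  // created before the call started -> the caller already owns it, no clone needed
  return tee.instantiatedAt >= callStartedAt;
}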

Another problem with the instantiation fence is that you'd still have to traverse the entire tee structure to decide whether a certain part of the tree has to be cloned or not, because it may contain properties whose values need to be cloned regardless. Contrived example:

Code:
const a = {};
const b = {};
function f(x) {
  x.foo = {b};
  b.xyz = {};
  return x;
}
f(a);
f(a);

Every call to f will set the properties to the same type. This by itself is not a problem for the model since all the tee shapes match. But what happens when the code continues like this?

Code:
const first = {};
const second = {};
f(first);
f(second);
first.foo.val = 1;
second.foo.val = 'x';

Very contrived. Where do you draw the line of monomorphic typing? Is this a violation because the entire shape of first is {foo: {b: <b>, val: 1}} versus {foo: {b: <b>, val: 'x'}} for second? Probably. But you're going to run into these cases faster than you might think. I did.

Inheritance


One area where I got pretty far was properly tracking prototypes and the ES6 class inheritance cases. These are harder than they appear at first glance, especially the super and constructor cases. I learned a thing or two about their semantics.

Especially the difference between an ES5 constructor and an ES6 constructor proved quite relevant. They might look the same at first glance but under the hood there are observable side effects that are relevant to track. And of course I got completely lost in trying to cover these edge cases to perfection.

The other case is super and the way it implicitly binds to constructors and class methods. I never knew, but it was a big problem for me to solve.
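
Two small examples of the kind of observable difference I mean (my own, not from the test suite):

Code:
function Es5() {} // callable with or without `new`
class Es6 {}      // throws a TypeError when called without `new`
Es5();            // ok (in strict mode `this` is simply undefined)
// Es6();         // TypeError: Class constructor Es6 cannot be invoked without 'new'

class Base { constructor() { this.x = 1; } }
class Derived extends Base {
  constructor() {
    // in a derived class, `this` does not exist until super() has run
    super();
    this.y = 2;
  }
}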

Meta eval


I've mentioned this term a few times but what do I mean when I say "meta eval"?

It didn't start this way, but at this point Mope kind of pseudo-evaluates the code in real execution order, like Prepack. Mope doesn't actually evaluate anything, mind you. But the trick of Prepack was to actually run through the code and use eval to figure out the runtime value of certain things, in order to eliminate that overhead at build time. In a way, Mope goes through the AST and follows the path of function calls, applying the typing to one another, while ignoring actual branching logic. That's a mistake, btw.

In the end, the decision to ignore branching logic AND the choice to defer support for nullables led to problems with real world code. Mope doesn't track actual values so it can't really deal with if (typeof foo !== 'undefined') {} kinds of cases. It does not do refinement. The only things that mutate a tee are new properties and aliasing one tee to another.

And since Mope doesn't track nullables (whole can of worms in that one), it assumes that a value is either one type or the other, but not possibly both. This means conditional typing checks won't work and that Mope assumes a binding is always the same type anywhere inside a function. Branching or not.

While that works fine in theory, in the real world it does not. Too much real world code depends on nullable checks. Whether it be a property or an argument, it happens all the time.
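
To show what the lack of refinement means in practice (my example):

Code:
function f(x) {
  if (typeof x === 'string') {
    return x.length; // a refining checker would narrow x to string here...
  }
  return x; // ...but Mope assumes x is one and the same tee in both branches
}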

Nullables


I never started supporting nullables. Initially they violated the first rule of the model: you can't be nullable if you can only be one type. Supporting the concept of nullables adds a lot of complexity to the model. For starters, JS distinguishes between null and undefined. And you can't really pick your poison here because the builtins actively use both.

For example, there's a potential difference between f(), f(undefined), and f(null). Consider this:

Code:
function f(a = 10) { return a; }
f(); // 10
f(undefined); // 10
f(null); // null

The builtins will return both values as well. You get undefined from non-existing properties and unpopulated function parameters. You get null from a failed regex match and from Object.prototype.__proto__. Alright, fine: Object.getPrototypeOf(Object.prototype).

If every value is possibly nullable then Mope would have to do a looot more checks. Digests become even harder. Certain assumptions no longer hold. There are so many dragons there that I never dared to fight them.

Ultimately, I think this is one of the reasons why I abandoned the project though. I realized that Mope would not be feasible in the real world without supporting nullables.

Recursion


Let's talk about another subtle, hard problem to solve: recursion.

This one plagued me for a long time. I believe I had to abandon the second or third model because of recursion.

Code:
function f() {
  return f();
}

Mope can't solve this. Arguably this is a bug anyways so who cares, right? Okay, I hear you.

Code:
function f(x) {
  if (x) return x;
  return f(x);
}

Very contrived, but consider that recursion always has a conditional base case. This problem was a very real challenge for Mope and it needed to support recursion to have any chance at all. But it's risky.

I think this is the reason I switched to use digests in the first place.

First you create a digest (see above) of the call. Then you store this digest in some store and assign it a placeholder tee: a tee that is not yet resolved and could be any type. Then you meta eval the call and, once it returns, you merge the placeholder tee with the return value (which ought to be resolved at that point). This way, true recursive calls hit the cache and return the placeholder. In the case of the first infinite loop above, that results in a placeholder tee that never resolves. But at least it doesn't throw Mope into an infinite loop. In the second case the recursive call resolves to the tee of x.
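
A minimal sketch of that flow, reusing the freshTid and mergeTee helpers sketched earlier (again, my reconstruction rather than Mope's actual code):

Code:
const digestCache = new Map();
function metaEvalCall(digest, evalBody) {
  // a recursive call with the same digest hits the cache and gets the
  // placeholder back instead of sending the meta eval into an infinite loop
  if (digestCache.has(digest)) return digestCache.get(digest);
  const placeholder = { tid: freshTid(), type: undefined };
  digestCache.set(digest, placeholder);
  const returned = evalBody();     // may recurse back into metaEvalCall
  mergeTee(placeholder, returned); // resolve the placeholder with the return tee
  return placeholder;
}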

At some point I had to update the digest algorithm and completely screwed this up. I probably spent two weeks debugging the way Mope handled recursion, desperate to find a solution. I think it still doesn't handle certain cases of recursion properly; you can find those in the tests.

Arrays etc


One easy to grasp source of unsoundness in TS and Flow is arrays (and all data structures like maps and sets).

The problem at hand is that you can't tell at compile time whether an array has a value or not. So you don't know whether arr.pop() returns undefined or the actual kind of the array. That's a real problem because you don't really want to force your users to do an existence check every time. You also don't want to do this at runtime because there are plenty of cases where it is clear in the code that the array cannot be empty. It's just nigh impossible to verify this generically with static analysis.

Initially, the way I set out to solve this was to require the use of for-of loops. That way you never run into the problem above since the loop only iterates over existing elements. While that's still true, I think it would only cover a very small set of use cases in real life. Not to mention that generators are their own little hell. But I treat them as regular arrays, like that's never coming back to haunt me. (Hah, joke's on you! I abandoned Mope before it could.)

Looping through an entire array, map, or set is unacceptable for large structures when you only want one element. So the best way around this is either issuing warnings (like TS allows you to write map.get('foo')! to tell it that you know the value won't be undefined) or adding some kind of framework (or DSL with code transformations) that does the pop for you while also doing existence checks. I didn't do this but that's probably where I would have gone.

For Mope, arrays, maps, and sets are an explicit type of tee with a "kind" for their elements. Like anything else, the kind of a structure must be the same and this is checked when adding, removing, and reading values from/to the structure. This is not that much different from Flow/TS, I guess.

One problem that is not immediately obvious is that structures that start without elements are harder to type. This is why you type the binding declaration for an empty structure in TS (const arr: Array<Abc> = [];) even if the checker might be able to deduce the type later. Since Mope does not do annotations, it injects a placeholder tee and issues a warning.

Placeholders


This is relevant because it is one of the very few ways placeholder tees trickle into the system. An uninitialized array, map, or set. An uninitialized var/let binding. Recursion. Those sorts of things may cause placeholders to leak into the general system and stay unresolved forever.

Unresolved tees are annoying because in the latest model it is assumed that any type is known by the time it is meta-evaluated. But since it is possible for unresolved tees to leak into the system, we cannot assert that each tee must be resolved at "runtime".

Code:
const arr = [];
function f(a) {
  if (true) a.push([]);
  return a.pop();
}
const foo = f(arr);

I dunno, there are probably better examples. But either way, foo would now be an unresolved tee, were it not for some safeguards around this.

Mope will realize that the kind is unresolved which, as far as it knows, means this array never had elements added to it. So it must be an empty array and the pop should return undefined. But what if an element was added to the array and that element was an unresolved tee? Woops.

Sealing undefined


There are a few places where JS defaults to evaluating to undefined for things that are still unknown. Non-existing properties, uninitialized arrays, uninitialized bindings. Just some examples.

When Mope does any sort of read where JS would generate undefined because it doesn't exist yet, Mope will seal the type to the undefined tee.

Code:
const obj = {};
const a = obj.foo;
obj.foo = 1; // rejected. obj.foo was read as undefined once so it should not suddenly return something else

It will do the same for arrays (a=[]; a.pop(); a.push(1);) etc.

The problem with this is that it goes back to not supporting nullables in the first place. Just because a property was read as undefined doesn't mean it actually ought to always be undefined. One common example is options objects, where properties may or may not exist. Inferring their shape is hopeless for Mope. Sadly.

Classes


Mope supports ES5 and ES6 classes pretty deeply. This was a lot of work (turns out the super keyword has some interesting runtime oddities) as there are quite a few differences between the two kinds of classes that are not very obvious.

One thing I struggled with was whether or not to consider a class its own type, or to "duck type" classes and treat two classes with the same shape equal.

I believe I landed on considering classes their own tee. Apart from the conceptual question (is a class Person the same type as a class Dog when they both only have a name property?), there are too many subtle differences to just unify class instances like that.

Modules


One of the last things I worked on was to support "ES6 modules". This was actually quite easy. I recall there were very few problems with supporting ESM in my final model.

Mope will follow all imports and shares all tees across files. This is not a big deal: builtins are immutable so they are safe to share, and anything else gets a unique tid so tees should not collide.

Anything that is exported still looks up its tees in the same store. It's not a problem for Mope if one file refers to, or even mutates, a tee that was generated in another file.

The repl also supports modules, albeit rather unpolished. I just made this work for my own experimenting and didn't work on Mope for long after building in this support, so it didn't quite get fleshed out as much as it could have. It is what it is :)

Dynamic property access


A big elephant in the room is dynamic property access. While there are certainly many more problems, this one is pretty core to the language, and it even surfaces in places where the intent isn't really to dynamically access properties on an object: arrays.

There are multiple places where this shows up:

Code:
const obj = {};
obj[foo] = bar;           // computed member assignment

const ding = {[dong]: 1}; // computed property in an object literal

const arr = [];
arr[1] = 10;              // numeric index
arr[here] = 20;           // computed index

obj["nomin"] = "xyz";     // string literal key

One of the last things I did to work around this was to tentatively support array access by checking whether the key's tee was a number. In that case Mope will kind of do the same as for .pop(): return the kind tee and issue a warning.
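
Something like this (my example of the behavior just described):

Code:
const arr = [1, 2, 3];
function pick(i) {
  // if the tee of `i` resolves to number, the read is treated like .pop():
  // it yields the array's kind tee (number here) and issues a warning
  return arr[i];
}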

The last case is a common pattern for telling a minifier not to mangle this property name. The minifier will still compile the access to a regular static property access, but it should not try to rename the property. Annoying but trivial to support, and I believe Mope does.

Anything more than that defaults to undefined and a warning. What can you do :)

Unsupported / unexplored


While I've gone through many parts of the JS spec at this point, Mope definitely doesn't support the full breadth of the language, both syntactically and semantically.

I've added a lot of typing logic for builtins. A lot. https://github.com/pvdz/mope/blob/main/src/builtins.mjs

And even then, not all of them support the edge cases surrounding context, inheritance, rest parameters, etc. For many builtins these don't matter, mind you. But fleshing this out entirely would still be a lot of work.

I haven't explored the generator space, nor the async/await space. Not sure what kind of problems those language features would bring. They might actually not be too bad, considering what's already there.

There is no support for exception tracing and I'm not entirely sure how I would do this with something like Mope. The problem here is that concrete values are not tracked and exceptions could be triggered almost anywhere. Hence you can't really predict where implicit exceptions are triggered. Though you could definitely trace explicit throws since you have full control over the call stack. Mope might even be able to regenerate the async stack, since it tracks the scope digests recursively.

To the previous point, here's an example that is troubling to debug in JS when using wrapper closures:

Code:
function f(callback) {
  try {
    const veryImportant = () => callback();
    veryImportant();
  } catch (e) {
    // You can get the stack of `f` here but not how `callback` came to exist
  }
}

In the above, if you wanted to know what part of the code was responsible for creating the callback function instance, you'd need to manually trace where it came from. A new Error().stack will only show you the path that led to calling f, not (necessarily) how its value was created. Mope could in theory give you that information.

Anyways. Mope has no exception support at all so forget I ever mentioned it.

The surface of JS is very large. There are many things that aren't supported once you start looking for them.

Tests


One of the things I ultimately regretted was not writing some form of snapshot testing. That was tremendously helpful for Tenko.

I didn't do this for Mope because, honestly, it's a lot of tedious chore-level work. At the same time, manually updating tests over and over again because you had a change of heart is also very time consuming work.

I recall there are about 1400 test cases right now, ranging from simple operator checks to in-depth prototype and inheritance edge casing. If you enjoy this sort of thing I invite you to have a look at https://github.com/pvdz/mope/tree/main/tests/cases. The repl will automatically load a random test case every time you load the page :)

State of the code


I abandoned the code base when I realized the last setback was a big one and I couldn't see a way in which Mope would be any kind of useful in a real world project.

I think it's fun to poke around in the test cases and in the repl (it will load one of the test cases randomly on every load). Considering the challenging nature of JS I'm quite impressed I managed to get this far before tripping up. Some others are probably not as surprised that I tripped up eventually. But that's fine.

REPL


You can find the online repl / demo at https://pvdz.github.io/mope/web/. It will load a random test case (from /tests) on page load.

The UI is relatively basic but here are some pointers:

Code:
- The top-left box is an editor
- The whole thing is driven by Tenko, meaning only standard 2020 JS is supported (no TS/Flow, no JSX)
- This box probably warrants its own blog post :)
- Hovering over the source code will give you feedback in the bottom right corner
- Problems will show up as red tabs in the gutter. Hovering over them will show you a tooltip with a list of warning codes emitted for that line.
- There's a liiiitle debounce between typing and running Tenko but the delay will quickly become annoying with larger inputs. Sorry.
- Mope supports modules and the file drop-down in the top-left box allows you to switch between actively editing files.
- The + / - buttons will add/remove files
- You can import these files with the same name as you added them, verbatim. The repl is set up to let Mope resolve those names properly.
- Note that modules were one of the last additions to Mope and especially the UI. It may not be an entirely smooth experience, but it did work.
- The "nodejs" button will load Mope itself (and all its dependencies, which is only Tenko). Good real world example :p
- Note that the file list has been pre-computed but the computations by Mope are done in the browser.
- The box in the top-right corner shows you the warnings for all files, if there are any.
- You can click on the line:col bit to jump to the code directly. A blue bubble should show up roughly where the problem occurs.
- You can click on the "x" to filter away any occurrences of the same warning.
- The dropdown in this box also allows you to toggle filtering certain types of warnings.
- These filter settings are not saved so a refresh resets everything. Sorry.
- The bottom-right box shows you details when you hover over the source code.
- For many tokens it will only tell you the token information. Some things will have more relevant information, like variable names, assignments, etc. Provided Mope actually meta-evaluated that part of the code at all.
- The bottom left box shows you the AST but I disabled this for now. I'll try to enable it again and only disable it for larger sources.


(Putting this in a code block because list items don't work properly :p)

Wrap up


I think I've rambled on for long enough. Kudos if you managed to read it all. Hope you enjoyed it. Hope it made sense.

I really enjoyed working on Mope. It posed some super interesting challenges. And while I'm quite disappointed not to be able to present you a sound and complete typing system, I think it was still very valuable for me to have worked on this.

For the name I considered going for MonoPoly. Kind of the perfect name, tbh. In the end I didn't want it to share a name with a well known board game, so I went with Mope instead.