2 points by shader 1739 days ago | link | parent

In the testable interfaces section, you mention parameterizing system calls so that fakes can be passed in for testing purposes.

1. How does Mu handle dependency injection?

You mentioned that Mu is supposed to be directly translatable to SubX, so I'm curious how that works. Otherwise, it sounds like a fragile pattern for testability, since someone is likely to hard-code their preferred output.

2. The system-object parameters remind me a lot of object-capabilities.

This is something I've been thinking about a lot for the design of my own system, in which I plan to restrict all side effects (and thus I/O and system calls) to message passing. I was mostly interested in it for the security value: a program must be explicitly introduced to a service before it can send it messages.

Your use case is obviously much lower-level, and trying to match the machine execution more closely, but it seems you've arrived at almost the same design for very different reasons. By decoupling a program from its environment and requiring that resources be passed as parameters, you gain testability.

There's no reason you couldn't go further and implement a full object-capability system for resources at runtime, so that the program itself must be given those resources when launched, instead of implicitly getting them from the environment. Based on your application to testing, it could also make external black-box testing of applications easier: just provide fakes when running the program in a sandbox, and see how it behaves.
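The capability idea above can be sketched in a few lines. This is a hypothetical Python analogue, not Mu code; `RealClock`, `FakeClock`, and `greeting` are invented names for illustration. The point is that the program can only use resources it was handed at launch, so a sandbox can substitute fakes and observe behavior from the outside.

```python
import time

class RealClock:
    """The genuine resource, granted to the program at launch."""
    def now(self):
        return time.time()

class FakeClock:
    """A fake capability a test harness can hand in instead."""
    def __init__(self, t):
        self.t = t
    def now(self):
        return self.t

def greeting(clock):
    # The program never reaches into its environment; it can only
    # use the capabilities it was given as parameters.
    seconds_into_day = clock.now() % 86400
    return "morning" if seconds_into_day < 43200 else "evening"

# Production: greeting(RealClock())
# Black-box test: behavior is now deterministic.
print(greeting(FakeClock(0)))        # start of the epoch, i.e. midnight
```

Nothing about `greeting` changes between production and test; only the capability it receives does.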

Just some thoughts for when you start implementing the OS kernel you mentioned...



2 points by akkartik 1738 days ago | link

Oh yes, we're very much thinking in similar ways here. I'd love to build a capability-based OS at some point. For now my ambitions are more limited, to just support testing syscalls with a pretty similar shape to their current one. But it's very much something I hope will happen.

This is the method of science as I understand it. I'm trying to change as few variables as possible (still very many), in hopes that clarity with them will open up opportunities to explore other variables.

> How does Mu handle dependency injection?

Fakes are just code like any other. You can see my code for a fake screen in an earlier prototype: https://github.com/akkartik/mu1/blob/master/081print.mu#L95. Or colorized: https://akkartik.github.io/mu1/html/081print.mu.html#L95 ('@' means 'array'). Does that answer the question? I think maybe I'm not following what you mean.

> it sounds like a fragile pattern for testability, since someone is likely to hard-code their preferred output.

The way I see it, it's a losing battle to treat code vs test as an adversarial relationship. All the tutorials of TDD that start out hard-coding a value in the first test are extremely counter-productive.

There is indeed a lot of fragility in Mu's tests. When I write them I'm often testing the results in ways that aren't obvious. A test may validate that the code a compiler generates is such-and-such, but is that really what we want? To be sure, I have to actually run the generated code. Occasionally I encounter a situation where the trace claimed something the code didn't actually do. Mu's emulator also requires validating against a real processor, and I've encountered a couple of situations where my program ran in emulation but crashed natively. In each case I had to laboriously read the x86 manual and revisit my assumptions. I haven't had to do this in over a year now, so it feels stable. But still, you're right: there's work here that isn't obvious.

The use case for Mu is someone who wants to make an hour's worth of changes to some aspect of their system. As they get deeper into it I'm sure they'll uncover use cases I haven't paved over for them, and will have to strike out and invent their own experimental methodologies like all of us have to do today. My goal is just to pave everything within a few hours of Hamming distance from my codebase. That seems like a big improvement in quality of life. I'll leave harder problems to those who come after me :)

-----

2 points by shader 1737 days ago | link

> I'd love to build a capability-based OS at some point.

I'm designing a capability-based lisp at the moment that will at some point need an efficient VM implementation and byte-compiler, and I was planning on turning it into a microkernel OS (where the interpreter is the kernel); maybe if I actually build enough of it in a reasonable timeframe we can meet somewhere in the middle and avoid duplicating effort.

> Fakes are just code like any other...

I wasn't really asking how fakes were handled, but rather how you inject them during a test.

> My goal is just to pave everything within a few hours of Hamming distance from my codebase. That seems like a big improvement in quality of life. I'll leave harder problems to those who come after me :)

Now you're starting to sound like my earlier comment about not needing to save the whole world. :P

-----

2 points by akkartik 1737 days ago | link

That would be excellent!

Any function that needs to print to screen must take a screen argument, all the way up the call chain. Then tests simply pass in a fake. I end up doing this for every syscall: memory allocation requires an "allocation descriptor"; exit requires an exit descriptor that can be faked with a continuation; and so on. Does that help?
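The pattern might be easier to see in a higher-level sketch. This is a Python analogue, not actual Mu (the earlier prototype linked above implements the real fake screen as an array); `RealScreen`, `FakeScreen`, and the render functions are made-up names for illustration.

```python
class RealScreen:
    """Production: writes through to the actual display/stdout."""
    def write(self, s):
        print(s, end="")

class FakeScreen:
    """Test double: records writes so a test can assert on them."""
    def __init__(self):
        self.contents = []
    def write(self, s):
        self.contents.append(s)

def render_banner(screen, title):
    # Every function in the call chain takes `screen` explicitly.
    screen.write("== " + title + " ==\n")

def render_page(screen, title, body):
    render_banner(screen, title)    # the capability is passed down
    screen.write(body + "\n")

# Production: render_page(RealScreen(), "Mu", "hello")
# Test: pass a fake and inspect what was written.
fake = FakeScreen()
render_page(fake, "Mu", "hello")
assert "".join(fake.contents) == "== Mu ==\nhello\n"
```

The fake is ordinary code with the same interface as the real screen, so "injection" is nothing more than choosing which argument to pass.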

-----

2 points by shader 1736 days ago | link

Yep, that answers my question perfectly.

You had said "dependency injection" somewhere, so I thought there might be more to it.

-----

2 points by akkartik 1736 days ago | link

Maybe I should clarify that I mean dependency injection but not a dependency injection framework. Automating the injection decisions defeats most of the purpose of DI, I think.

-----

2 points by shader 1736 days ago | link

Yes, I suppose "dependency injection" as a concept doesn't actually require anything sophisticated like a framework or IoC containers etc. But the term "dependency injection" sounds to me like it's doing more than just passing in a parameter, and I normally wouldn't expect it to be used unless it meant something more.

I think that's because "injection" is active. It sounds like intrusively making some code use a different dependency without its explicit cooperation. Passing parameters to a function isn't really "injecting"; it's just normal operation.

> Automating the injection decisions defeats most of the purpose of DI, I think.

I don't know about "automated" decisions, but the value of something like "injection" to me seems that you could avoid explicitly mentioning the dependencies at intermediate layers of code, and only reference them at the entry points and where they are used. The way your code works, every function that depends on anything that could call 'print has to have a screen in its parameter line. For Mu, that may be a reasonable design decision; you want to keep things transparent and obvious every step of the way, and don't want to add any more complexity to the language than necessary. However, I think there is a case to be made for something like dynamic variables to improve composability in a higher-level context. That's a discussion for a different language (like the one I'm designing, which gets around the various problems of scoping by making almost everything static anyway).
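The dynamic-variable alternative described above can be sketched with Python's `contextvars` module (standing in for Lisp dynamic scope; the names `screen`, `leaf`, `middle`, and `entry` are hypothetical). The intermediate layer never mentions the dependency; only the entry point binds it and the leaf reads it.

```python
import contextvars

# A dynamically-scoped binding for the screen resource.
screen = contextvars.ContextVar("screen")

def leaf():
    # Only the code that actually prints touches the screen.
    screen.get().append("hello\n")

def middle():
    # No screen parameter threaded through this layer.
    leaf()

def entry(fake_screen):
    # Bind the dependency once, at the entry point.
    screen.set(fake_screen)
    middle()

buf = []      # a list doubles as a trivial fake screen here
entry(buf)
assert buf == ["hello\n"]
```

The trade-off is exactly the one discussed above: the intermediate layers get lighter, but it's no longer visible from a function's signature which resources it can reach.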

This is probably talking past your point, but I'm trying to argue that there might be value in some degree of DI beyond parameterizing functions. I have not necessarily justified "automation", and since I don't have a good definition for that anyway, I don't think I'll try.

-----

2 points by akkartik 1736 days ago | link

That makes sense. I think my criticism is really for Guice, which has the effect of making Java a dynamically typed language that can throw type errors at run-time. But if you start out with a dynamic language like Lisp, dynamically-scoped variables are much more acceptable.

Regarding the term "injection", I'm following https://en.wikipedia.org/wiki/Dependency_injection. It feels like a marketing term, but now we're stuck with it.

-----

2 points by shader 1733 days ago | link

Why are runtime type errors acceptable in Lisp but not in Java? Or was there some other reason for dynamically-scoped variables to be acceptable in Lisp?

-----

2 points by akkartik 1733 days ago | link

My position here is definitely not absolutist. In my opinion, Java started out encouraging people to rely on the compiler for type errors. We got used to refactoring in the IDE, and if nothing was red we expected no type errors.

With a dynamic language you never start out relying on the compiler as a crutch.

This argument isn't about anything directly technical in the respective compilers, just the pragmatic thought patterns I've observed.

-----