Thanks for reporting this! I bet this happened because of the HN redesign. There was some other stuff that broke on the site when that happened: http://arclanguage.org/item?id=18744 . I'll email the people at YC that can fix it.
Until they get it fixed, you can try Anarki instead. Anarki is a fork of Arc that's been changed somewhat, both to fix bugs and to add features. https://github.com/arclanguage/anarki .
[1] The CSS file for the Arc Forum is oddly named, at ycombinator.com/news.css -- but that doesn't seem to be new with the YC site redesign: http://web.archive.org/web/20140114040405/http://arclanguage... uses the same CSS file. I guessed there was a separate CSS file for AF vs. HN, but that appears not to be the case -- the differences in style (e.g. the color of the top bar being #ffbb33 here vs. #ff6600 at HN) are in the HTML source itself.
EDIT: It's now fixed, less than an hour after I posted this. Yay!
In Tcl the syntax is built from strings, and in tclisp the syntax is built from symbols and cons cells.
tclisp could have better syntax checking and real macros.
Questions like this are hard to answer. You're asking for a whole lot of help because you're not providing many details.
Make it easy for someone to help you. Where are you? Do you need help making an account on OpenShift? Do you not know where to download the HN source from? Are you not sure why some HTML is wonky?
Asking "How do I set this up" could mean any of these questions, and more. What have you done so far, and what do you think you need to do next? What errors are you encountering?
I think you're telling fibs. :-p I double-checked srv.arc (which defines 'defop), and the code there opens a thread for every request. This is true in Anarki, in official Arc 3.1, and even way back in Arc0.
Even without parallelism, this would come in handy to prevent I/O operations from pausing the whole server.
Arc does have threads, yes, but it also has a style of mutating in-memory globals willy-nilly. As a result, all its mutator primitives run in atomic sections (http://arclanguage.github.io/ref/atomic.html#atomic) at a deep level. The net effect is as if the HN server has only one thread, since most threads will be blocked most of the time.
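As a rough analogy (a Python sketch with names of my own choosing, not Arc's actual implementation): it's as if every mutation primitive grabbed one shared global lock, so writes from different threads serialize on that lock even though reads and I/O can still overlap.

```python
import threading

# One global lock, analogous to Arc's 'atomic sections: every "mutator"
# takes the same lock, so mutations from all threads serialize.
_atomic = threading.RLock()

def atomic_assign(table, key, value):
    # Mutate a shared table only while holding the global lock.
    with _atomic:
        table[key] = value

def worker(table, n):
    for i in range(n):
        atomic_assign(table, i, i * i)

table = {}
threads = [threading.Thread(target=worker, args=(table, 100)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the analogy: the threads are real, but any workload dominated by mutation behaves much like a single thread, because only one of them can hold the lock at a time.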
I can't find links at the moment, but pg has repeatedly said that HN runs on a single extremely beefy server with lots of caching for performance.
Edit: Racket's docs say that "Threads run concurrently in the sense that one thread can preempt another without its cooperation, but threads do not run in parallel in the sense of using multiple hardware processors." (http://docs.racket-lang.org/guide/concurrency.html) So arc's use of atomic wouldn't matter in this case. It does prevent HN from using multiple load-balanced servers.
Looking back, I see that I did indeed inaccurately answer zck's question about "running single-threaded". I'd like to amend my answer to "No, it runs multi-threaded, but the threads use a single core." Rocketnia is right that arc has concurrency but not parallelism.
"The net effect is as if the HN server has only one thread, since most threads will be blocked most of the time."
Well, in JavaScript, I do concurrency by manually using continuation-passing style and building my own arbiter/trampoline... and using it for all my code. If I ever do something an easier way, I have to rewrite it eventually. Whenever I want to try out a different arbiter/trampoline technique, I have to rewrite all my code.
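That CPS-plus-trampoline pattern can be sketched in a few lines (a Python sketch with hypothetical helper names): each step returns a zero-argument thunk instead of calling the next step directly, and a driver loop bounces the thunks so the call stack never grows.

```python
def trampoline(step):
    # Keep bouncing while we get a thunk back; a non-callable value is
    # treated as the final result. (A real arbiter would use a sentinel
    # wrapper so that callable final results are possible too.)
    while callable(step):
        step = step()
    return step

def countdown(n, k):
    # Continuation-passing style: instead of recursing, return a thunk
    # that performs the next step when the trampoline invokes it.
    if n == 0:
        return lambda: k("done")
    return lambda: countdown(n - 1, k)

# 100,000 nested "calls" without overflowing the stack:
result = trampoline(countdown(100_000, lambda v: v))
```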
Arc's threading semantics are at least more automatic than that. Naive Arc code is pretty much always usable as a thread, and it just so happens it's especially useful if it doesn't use expensive 'atomic blocks (or the mutators that use them).
"Automatic" doesn't necessarily mean automatic in a good way for all applications. Even if I were working in Arc, I still might resort to building my own arbiters, trampolines, and such, because concurrency experiments are part of what I'm doing.
All in all, what I mean to say is, Arc's threads are sometimes helpful. :-p
Absolutely. I was speaking only in the context of making use of multiple cores.
I see now that I overstated how bad things are when I said it's as if there's only one thread. Since I/O can be done in parallel and accessing in-memory data is super fast, atomic isn't as bad as I thought for the past 5 years.
"What was the reason you decided to do it this way? It seems more complicated to work with."
In my designs, I don't just want to make things that are easy for casual consumers. I want to make things people can consume, understand, talk about, implement, upgrade, deprecate, and so on. These are things users do, even if they're not all formal uses of the interface.
I hardly need number types most of the time. If I add them to my language(s), they might just be dead weight that makes the language design more exhausting to talk about and more work to implement.
Still, sometimes I want bigints, or at least numbers big enough to measure time intervals and pixels. I don't think any one notion of "number" will satisfy everyone, so I like the idea of putting numbers in a separate module of their own, where their complexity will have a limited effect on the rest of the language design.
---
"Huh, this is similar to Church numerals"
I'm influenced by dependently typed languages (e.g. Agda, Twelf, Idris) which tend to use a unary encoding of natural numbers in most of their toy examples:
data Nat = Zero | Succ Nat
To be fair, I think they only do this when efficiency isn't as important as the simplicity of the implementation. In toy examples, implementation simplicity is pretty important. :)
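The same unary encoding is easy to mimic in a dynamically typed language; here's a Python sketch using tagged tuples (the names are mine, not from any of those languages):

```python
# Unary naturals as tagged tuples: Zero | Succ Nat
ZERO = ("Zero",)

def succ(n):
    return ("Succ", n)

def nat_to_int(n):
    # Walk the chain of Succ layers, counting each one.
    total = 0
    while n[0] == "Succ":
        total += 1
        n = n[1]
    return total

three = succ(succ(succ(ZERO)))
```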
A binary encoding might use a technique like this:
data OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore
data Int = Negative OneOrMore | Zero | Positive OneOrMore
Or like this:
data Bool = False | True
data List a = Nil | Cons a (List a)
data Int = Negative (List Bool) | Zero | Positive (List Bool)
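Either version decodes back to an ordinary integer in the obvious way. Here's a Python sketch of the first (One/Double/DoublePlusOne) representation, again with names of my own choosing:

```python
# OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore,
# as tagged tuples. "Double" shifts in a 0 bit, "DoublePlusOne" a 1 bit.
ONE = ("One",)

def double(m):
    return ("Double", m)

def double_plus_one(m):
    return ("DoublePlusOne", m)

def one_or_more_value(m):
    if m[0] == "One":
        return 1
    if m[0] == "Double":
        return 2 * one_or_more_value(m[1])
    return 2 * one_or_more_value(m[1]) + 1  # DoublePlusOne

def int_value(i):
    # Int = Negative OneOrMore | Zero | Positive OneOrMore
    tag = i[0]
    if tag == "Zero":
        return 0
    if tag == "Positive":
        return one_or_more_value(i[1])
    return -one_or_more_value(i[1])  # Negative

six = ("Positive", double(double_plus_one(ONE)))       # binary 110
minus_six = ("Negative", double(double_plus_one(ONE)))
```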
I get the impression these languages go to the trouble to represent these user-defined binary types as efficient bit strings, at least some of the time. I could be making that up, though.
For what I'm talking about, I don't have the excuse of an optimization-friendly type system. :) I'm just talking about dynamically typed cons cells, but I still think it could be a nifty simplification.
I don't think this itself would be called Church numerals, but it's related. The Church encoding takes an ADT definition like this one and looks at it as a polymorphic type. Originally we have two constructors for Nat whose types are as follows:
Zero : Nat
Succ : (Nat -> Nat)
These two constructors are all you need to build whatever natural number you want, so a function for building one can take the constructors themselves as arguments:
buildMyNat : Nat -> (Nat -> Nat) -> Nat
We could make this function more general by abstracting it over any type, not just Nat:
buildMyNat : a -> (a -> a) -> a
This type (a -> (a -> a) -> a) is the type of a Church numeral.
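In a dynamically typed setting a Church numeral is just a two-argument function; a quick Python sketch:

```python
# A Church numeral takes a "zero" value and a "successor" function and
# applies the successor that many times: the type a -> (a -> a) -> a.
zero = lambda z, s: z

def succ(n):
    return lambda z, s: s(n(z, s))

def church_to_int(n):
    # Instantiate with ordinary arithmetic to read the number back out.
    return n(0, lambda x: x + 1)

three = succ(succ(succ(zero)))
```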
While it's more general in this way, I think sometimes it's a bit less powerful. Dependently typed languages often provide induction and recursion support for ADT definitions, but I think they can't generally do that for Church-encoded types. (I could be wrong.)
For something more interesting, we can go through the same process to build a Church encoding for my binary integer example:
data OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore
data Int = Negative OneOrMore | Zero | Positive OneOrMore
buildMyInt :
  OneOrMore ->                 -- One
  (OneOrMore -> OneOrMore) ->  -- Double
  (OneOrMore -> OneOrMore) ->  -- DoublePlusOne
  (OneOrMore -> Int) ->        -- Negative
  Int ->                       -- Zero
  (OneOrMore -> Int) ->        -- Positive
  Int
Abstracting over the types, as before, gives the Church encoding:
buildMyInt :
  a ->        -- One
  (a -> a) -> -- Double
  (a -> a) -> -- DoublePlusOne
  (a -> b) -> -- Negative
  b ->        -- Zero
  (a -> b) -> -- Positive
  b
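A Python sketch of this Church-encoded Int: each value is a function of the six constructor arguments, and decoding just instantiates those arguments with ordinary arithmetic (all names are mine):

```python
# Church OneOrMore: a function of (one, double, double_plus_one).
one = lambda o, d, d1: o

def double(m):
    return lambda o, d, d1: d(m(o, d, d1))

def double_plus_one(m):
    return lambda o, d, d1: d1(m(o, d, d1))

# Church Int: a function of all six constructor arguments.
zero_int = lambda o, d, d1, neg, z, pos: z

def negative(m):
    return lambda o, d, d1, neg, z, pos: neg(m(o, d, d1))

def positive(m):
    return lambda o, d, d1, neg, z, pos: pos(m(o, d, d1))

def to_int(i):
    # Instantiate One as 1, Double as (*2), DoublePlusOne as (*2 + 1),
    # and the sign constructors as negation/identity.
    return i(1, lambda x: 2 * x, lambda x: 2 * x + 1,
             lambda x: -x, 0, lambda x: x)

minus_six = negative(double(double_plus_one(one)))
```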
How did you figure that out? I found I couldn't macex1 it, because it was already erroring. I guess it's related to when ssexpansion happens in relation to macroexpansion?
(mac somemacro (someobj)
  `(if ,someobj!field t nil))
When we enter any code at the REPL, it causes three phases to occur: reading, compilation, and execution.[1]
The read phase processes the ( ) ` , symbols and gets this data structure of cons lists and symbols:
(mac somemacro (someobj)
  (quasiquote (if (unquote someobj!field) t nil)))
Note that someobj!field is a single symbol whose name contains an exclamation point character.
At this point you can probably see the problem already. What you may have expected was ((unquote someobj) (quote field)), but what we got was (unquote someobj!field). This is technically because Arc doesn't implement ssyntax at the reader level; instead it uses Racket's reader without modification, and then it processes the ssyntax in the next phase.
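To illustrate the ordering, here's a toy Python sketch (handling only the a!b pattern, not Arc's full ssyntax, with hypothetical function names): the reader leaves the symbol intact, and only a later compiler pass expands it.

```python
def read_unquote(symbol_text):
    # The reader sees ",someobj!field" and just wraps the whole symbol,
    # exclamation point and all, in an unquote form.
    return ["unquote", symbol_text]

def expand_ssyntax(sym):
    # A later pass (Arc's compiler, not the reader) rewrites a!b
    # into the call (a 'b).
    if isinstance(sym, str) and "!" in sym:
        head, field = sym.split("!", 1)
        return [head, ["quote", field]]
    return sym

form = read_unquote("someobj!field")
```

So by the time ssyntax expansion runs, the unquote has already attached itself to the entire symbol.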
Even though the issue should already be clear, I'm going to go through the rest of the process to illustrate macroexpansion.
At this point the compilation phase starts. It expands the (mac ...) macro call, and then it re-processes whatever that macro expands to. As it goes along, at some point it also processes the (quasiquote ...) special form, and it expands the someobj!field syntax. The result of expanding someobj!field is (someobj 'field), and since this isn't a special form or macro, it's compiled as a function call.
The overall result is Racket code. In case it helps show what's going on, I just went to http://tryarc.org/ and ran the following command:
($ (ac '(mac somemacro (someobj) `(if ,someobj!field t nil)) (list)))
This produced a bunch of Racket code. Personally, I never think about the raw Racket code. Instead I pretend the result of compilation is Arc code again, just without any macro calls or ssyntax -- something like this:
(assign somemacro
  (annotate (quote mac)
    (fn (someobj)
      (quasiquote (if (unquote (someobj (quote field))) t nil)))))
Either way, you can see the original (if ... t nil) is still in there somewhere. :)
Finally, this Racket code is executed. It modifies the global environment via Racket's namespace-set-variable-value! and puts a macro there. The macro is simply stored as a tagged value where the tag is 'mac and the content is a function. Then the result of execution is printed to the REPL as "#(tagged mac #<procedure: somemacro>)", and the REPL prompt appears again.
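The tagged-value representation is simple to picture. Here's a Python analogy (a sketch only, not Arc's actual internals, which use a Racket vector):

```python
def annotate(tag, rep):
    # Wrap a value with a type tag, like Arc's annotate.
    return ("tagged", tag, rep)

def is_tagged(x):
    return isinstance(x, tuple) and len(x) == 3 and x[0] == "tagged"

def type_of(x):
    # Like Arc's type: tagged values report their tag.
    return x[1] if is_tagged(x) else type(x).__name__

def rep(x):
    # Like Arc's rep: unwrap the underlying value.
    return x[2] if is_tagged(x) else x

# A stand-in macro value: tag 'mac wrapping a function.
somemacro = annotate("mac", lambda args: ["if", "...", "t", "nil"])
```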
Later on, we execute the following command:
(somemacro oo)
The reader parses this to make this s-expression:
(somemacro oo)
Then the compilation phase starts. It starts to expand the (somemacro ...) macro call. To do this, it invokes the macro implementation we defined earlier. It passes in this list of s-expressions:
(oo)
The macro's implementation is the function that resulted from that Racket code -- or alternately, in my imagination, the result of this macroless Arc code:
(fn (someobj)
  (quasiquote
    (if (unquote (someobj (quote field)))
        t
        nil)))
When this function is called, someobj's value is the symbol named "oo". When we try to call the symbol, we get an error.
The compilation phase terminates prematurely, and it displays the error on the console. The execution phase is skipped, since there's nothing to execute. Then the REPL prompt appears again.
I hope this gives people a good picture of the semantics of macroexpansion and ssyntax.
[1] Technically we might add printing and looping phases to get a full Read-Eval-Print-Loop. The eval step of the REPL does compilation followed by execution.
---
As I said above, the confusing point is probably that the reader doesn't give the result you might expect when it sees ",someobj!field". On the one hand, this is a technical consequence of the fact that Arc uses Racket's reader, which doesn't process ssyntax. On the other hand, I think it's debatable whether this interpretation of the syntax is better or worse than the alternative.
The ssyntax is a red herring here, since the behavior is the same even if you don't use ssyntax, as in my previous comment. I'd also bet it's the same in Common Lisp. It's just a consequence of macros operating in name space rather than value space. Though this is a kinda subtle consequence; I had to run that type of example I mentioned before to fully grok what was going on.
The reader parses the comma to make (unquote someobj!field) before ssyntax processing even occurs. I just wrote a long comment to describe the whole process, but this is the main point.