Sorry, I fixed a namespace problem, and now it works in Gambit Scheme and in the R5RS language of Racket.
About normal mode in vim: I meant that it can be used as a real-time scripting language. Suppose you write your own program: it would be easy for the user to press ESC, write a dynamic Forth expression, and press Enter. If he did the same with Arc, he would have to worry about parentheses. He would also have trouble using the result of the last calculation.
Could you talk about your decision to use it for Readwarp then? If Arc's not really ready for production use, might it still be a good choice for a certain minority of developers?
Yeah, I'm not trying to say you shouldn't use it for production use :)
They're opposing perspectives. As a user of arc I'd throw it into production[1]. At the same time, from PG's perspective I'd want to be conservative about calling it production ready.
I suspect arc will never go out of 'alpha' no matter how mature it gets, just because PG and RTM will not enjoy having to provide support, or having to maintain compatibility.
[1] With some caveats: treat it as a white box, be prepared to hack on its innards, be prepared to dive into scheme and the FFI. And if you're saving state in flat files, be prepared for pain when going from 1 server to 2.
We can combine the following macro with the ":" shortcut:
(mac c ((obj (func . args)))
'(call ,obj ,func ,@args)))
(= document!body!innerHTML
((c:document:getElementById "foo") value))
It will translate to:
(= ((document 'body) 'innerHTML)
((call document getElementById "foo") value))
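Step by step, assuming Arc's ':' ssyntax (which turns (a:b x) into (a (b x))):

  ((c:document:getElementById "foo") value)
  ; ssyntax:       ((c (document (getElementById "foo"))) value)
  ; destructuring: obj = document, func = getElementById, args = ("foo")
  ; expansion:     ((call document getElementById "foo") value)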
Also a nitpick: your c macro needs quasiquote and has too many parens:
(mac c (obj (func . args))
`(call ,obj ,func ,@args))
I'm not quite sure how the args in that macro are supposed to work. Is it somehow taking advantage of destructuring bind, or why is it (obj (func . args)) instead of just (obj func . args)?
Thanks for the suggestion. Looking forward to hearing you elaborate.
Much of Lisp's power stems from the fact that virtually everything can be represented as a list. If this is true, then writing 'each for lists is almost as good as (or better than) writing a generic 'each for every kind of type. A language design like Arc's that capitalizes on this idea indirectly provides incentive for making as many things into lists as possible. This is why pg has flirted with representing both strings [1] and numbers [2] as lists, and also why he promotes using assoc lists over tables when possible [3].
One disadvantage of this approach is that it can sometimes seem unnatural to represent x as a list, but it has the benefit of providing a very minimal cloud of abstractions with maximum flexibility.
A powerful object system seems like a different way of going about the same thing. That's probably why Lisp users are often unenthusiastic about objects: there's a feeling of redundancy and of pulling their language in two different directions. (Arc users can be especially unenthusiastic because they're so anal about minimizing the abstraction cloud.) That's why I'm not particularly enthusiastic about objects, anyway. They're not worse - just different, and largely unnecessary if you have lists.
I could be missing the boat here. I don't have enough experience with object systems to understand all their potential benefits over using lists for everything. (And, of course, the popularity of CLOS demonstrates that a lot of people like to have both!)
[1] arc.arc has this comment toward the top:
; compromises in this implementation:
...
; separate string type
; (= (cdr (cdr str)) "foo") couldn't work because no way to get str tail
; not sure this is a mistake; strings may be subtly different from
; lists of chars
The Lisp that McCarthy described in 1960, for example, didn't have numbers. Logically, you don't need to have a separate notion of numbers, because you can represent them as lists: the integer n could be represented as a list of n elements. You can do math this way. It's just unbearably inefficient.
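A toy sketch of that representation in Arc (the names are made up, just to make the idea concrete):

  ; the integer n is represented as a list of n elements
  (def unary (n) (n-of n t))            ; 3 -> (t t t)
  (def u+ (a b) (join a b))             ; addition is concatenation
  (def u* (a b) (mappend (fn (_) b) a)) ; multiplication repeats b, once per element of a
  (def u->int (u) (len u))              ; read the answer back as a number

  (u->int (u* (unary 2) (unary 3))) ; -> 6, at the cost of building a 6-element list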
I once thought alists were just a hack, but there are many things you can do with them that you can't do with hash tables, including sort them, build them up incrementally in recursive functions, have several that share the same tail, and preserve old values.
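A quick illustration of the sharing and preservation points:

  (= base     '((x 1) (y 2)))
  (= shadowed (cons '(x 99) base))  ; builds on base, sharing its tail
  (alref shadowed 'x)               ; -> 99
  (alref base 'x)                   ; -> 1; the old value is preserved
  (sort (fn (a b) (< car.a car.b)) shadowed) ; and alists can be sorted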
I think you're right about it being frustrating to be pulled in multiple directions when choosing how to represent a data structure.
In Groovy, I'm pulled in one direction:
class Coord { int x, y }
...
new Coord( x: 10, y: 20 )
+ okay instantiation syntax
+ brief and readable access syntax: foo.x
As the project evolves, I can change the class definition to allow for a better toString() appearance, custom equals() behavior, more convenient instantiation, immutability, etc.
In Arc, I'm pulled in about six directions, which are difficult to refactor into each other:
'(coord 10 20)
+ brief instantiation syntax
+ brief write appearance: (coord 10 20)
+ allows (let (x y) cdr.foo ...)
- no way for different types' x fields to be accessed using the same
code without doing something like standardizing the field order
(obj type 'coord x 10 y 20)
+ brief and readable access syntax: do.foo!x (map !x foos)
+ easy to supply defaults via 'copy or 'deftem/'inst
[case _ type 'coord x 10 y 20]
+ immutability when you want it
+ brief and readable access syntax: do.foo!x (map !x foos)
- mutability much more verbose to specify and to perform
(annotate 'coord '(10 20))
+ easy to use alongside other Arc types in (case type.foo ...)
+ semantically clear write appearance: #(tagged coord (10 20))
+ allows (let (x y) rep.foo ...)
- no way for different types' x fields to be accessed using the same
code without doing something like standardizing the field order
(annotate 'coord (obj x 10 y 20))
+ easy to use alongside other Arc types in (case type.foo ...)
+ okay access syntax: rep.foo!x (map !x:rep foos)
(annotate 'coord [case _ x 10 y 20])
+ immutability when you want it
+ easy to use alongside other Arc types in (case type.foo ...)
+ okay access syntax: rep.foo!x (map !x:rep foos)
- mutability much more verbose to specify and to perform
(This doesn't take into account the '(10 20) and (obj x 10 y 20) forms, which for many of my purposes have the clear disadvantage of carrying no type information. For what it's worth, Groovy allows forms like those, too--[ 10, 20 ] and [ x: 10, y: 20 ]--so there's no contrast here.)
As the project goes on, I can write more Arc functions to achieve a certain base level of convenience for instantiation and field access, but they won't have names quite as convenient as "x". I can also define completely new writers, equality predicates, and conditional syntaxes, but I can't trust that the new utilities will be convenient to use with other programmers' datatypes.
In practice, I don't need immutability, and for some unknown reason I can't stand to use 'annotate and 'rep, so there are only two directions I really take among these. Having two to choose from is a little frustrating, but that's not quite as frustrating as the fact that both options lack utilities.
Hmm, that gives me an idea. Maybe what I miss most of all is the ability to tag a new datatype so that an existing utility can understand it. Maybe all I want after all is a simple inheritance system like the one at http://arclanguage.org/item?id=11981 and enough utilities like 'each and 'iso that are aware of it....
I rewrote the type system for arc a while ago, so that it would support inheritance and generally not get in the way, but unfortunately I haven't had the time to push it yet. If you're interested, I could try to get that up some time soon.
Well, I took a break from wondering what I wanted, and I did something about it instead, by cobbling together several snippets I'd already posted. So I'm going to push soon myself, and realistically I think I'll be more pleased with what I have than with what you have. For instance, mine is already well integrated with my multival system, and it doesn't change any Arc internals, which would complicate Lathe's compatibility claims.
On the other hand, at this moment it's really clear to me that implementing generic doppelgangers of arc.arc functions is a bummer when it comes to naming, and that modifying the Arc internals to be more generic, like you've done (right?), could really make a difference. Maybe in places like that, your approach and my approach could form an especially potent combination.
I finally pushed this to Lathe. It's in the new arc/orc/ folder as two files, orc.arc and oiter.arc. The core is orc.arc, and oiter.arc is just a set of standard iteration utilities like 'oeach and 'opos which can be extended to support new datatypes.
The main feature of orc.arc is the 'ontype definition form, which makes it easy to define rules that dispatch on the type of the first argument. These rules are just like any other rules (as demonstrated in Lathe's arc/examples/multirule-demo.arc), but orc.arc also installs a preference rule that automatically prioritizes 'ontype rules based on an inheritance table.
It was easy to define 'ontype, so I think it should be easy enough to define variants of 'ontype that handle multiple dispatch or dispatching on things other than type (like value [0! = 1], dimension [max { 2 } = 2], or number of arguments [atan( 3, 4 ) = atan( 3/4 )]). If they all boil down to the same kinds of rules, it should also be possible to use multiple styles of dispatch for the same method, resolving any ambiguities with explicit preference rules. So even though 'ontype itself may be limited to single dispatch and dispatching on type, it's part of a system that isn't.
Still, I'm not particularly sure orc.arc is that helpful, 'cause I don't even know what I'd use it for. I think I'll only discover its shortcomings and its best applications once I try using it to help port some of my Groovy code to Arc.
And yes, I did modify arc's internals to be more generic. Basically, I replaced the vectors pg used for typing with lists and added the ability to have multiple types in the list at once. Since 'type returns the whole list, and 'coerce looks for conversion based on each element in order, we get a simple form of inheritance and polymorphism, and objects can be typed without losing the option of being treated like their parents.
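In vanilla Arc you can sketch the same idea without touching the internals, since 'annotate will happily tag a value with a list of types. (All the names below are made up for illustration; this isn't the actual Lathe or modified-internals code.)

  (= my-methods* (table))

  (mac defbehavior (name typ args . body)
    `(do (unless (my-methods* ',name)
           (= (my-methods* ',name) (table)))
         (= ((my-methods* ',name) ',typ)
            (fn ,args ,@body))))

  (def dispatch (name x . args)
    (withs (tbl (or (my-methods* name) (table))
            tys (if (acons type.x) type.x (list type.x)))
      (iflet f (some [tbl _] tys)  ; first type in the list with a method wins
        (apply f x args)
        (err "no method"))))

  (defbehavior area circle (c) (* 3.14 (expt rep.c 2)))
  (defbehavior area shape  (s) 'dunno)

  (dispatch 'area (annotate '(circle shape) 5))  ; -> 78.5
  (dispatch 'area (annotate '(square shape) 5))  ; falls back to shape: dunno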
arc> (macex-all '(fn (do re mi) (+ do re mi)))
(fn ((fn () re mi)) (+ do re mi))
arc> (macex-all ''(do not expand this -- it is quoted))
(quote ((fn () not expand this -- it is quoted)))
I still don't follow. A) Why do we need "ignores" here? B) Why would we need to use the declaration outside of the definition? Expressions can be sequenced. I.e., wouldn't you do something like this?
(def f (x y)
(let i 3
(declare integer i) ; this is evaluated and sets metadata...
(something-with x y i))) ; ...but THIS is the value that gets returned
Similarly,
(def foo (bar)
(let baz 10
(prn "hello") ; evaluates and returns the string "hello"
(+ bar baz))) ; evaluates and returns bar + 10
arc> (foo 5) ; prints hello and returns 15
hello
15
I think fallintothis declared i as an integer because ey was trying to translate your code, which declares int_value as an integer.
So the 'temporary call serves to tell the 'do in 'declare not to return the value from 'undeclare, but instead to return the value from the (= temporary* (do ,@body)) line?
This seems a lot more complicated than simply saving the value in a 'let or using do1.
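For instance, a do1 version might look something like this sketch:

  (mac declare (name prop . body)
    (= (metadata* name) prop)     ; happens at expansion time
    `(do1 (do ,@body)
          (undeclare ,name nil)))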
Even fixing that, the metadata-setting happens at macroexpansion time, so you get
arc> (def f (x) (declare x integer (prn "metadata*: " metadata*) (+ x 5)))
#<procedure: f>
arc> metadata*
#hash()
arc> (f 5)
metadata*: #hash()
10
arc> metadata*
#hash()
At no point before, after, or in the body is the metadata actually in the hash table. It was just there for a brief pause between the macroexpansions of declare and undeclare.
But the last line shows we just wipe any declaration we made, so a global metadata table gets messy, unless we make the declarations themselves global (i.e., get rid of body).
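That is, a fully global declaration would presumably reduce to something like:

  (mac declare (name prop)
    (= (metadata* name) prop) ; in effect for everything expanded afterward
    nil)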
It was just there for a brief pause between the macroexpansions of declare and undeclare.
If we want to change the behavior of other macros for a certain region of code, then that pattern might be useful. Since we seem to be talking about static type declarations, which I presume would be taken into account at macro-expansion time, I think the "between the macroexpansions" behavior is the whole point.
Thank you for the insight; that's probably the most lucid this thread has been. The pattern didn't seem deliberate to me, but it could feasibly have been written that way to control other macros' expansions. It also pushes computation to expansion time, which might clarify ylando's objections about "wasting run time". Those objections still confuse me, though: macro expansion happens once, whether inside a function's body or outside of it.
arc> (mac m (expr)
(prn "macro m has expanded")
expr)
#(tagged mac #<procedure: m>)
arc> (def f (x)
(m (+ x 1)))
macro m has expanded
#<procedure: f>
arc> (f 1)
2
But the original point seems lost because declare's story keeps changing. So, ylando: why do we need "ignores"?
Try building a macro that changes a global value, expands code (with macros), and then changes the value back. I think such a macro must use another macro to change the value back, like the undeclare macro above. The second macro expands into unnecessary code, so if you put it inside a function, this unnecessary code will waste run time. If we had an "ignore" macro, we could write macros that do not produce unnecessary code.
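For reference, the after-based declare under discussion was presumably something like this sketch (the original snippet isn't shown here):

  (mac declare (name prop . body)
    (= (metadata* name) prop)
    `(after (do ,@body)
            (undeclare ,name nil)))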
This introduces a redundant nil in the after block, and using after is a bit slower than just a do1. But we can't use do1 because this "do all the work at macro-expansion" approach is so touchy that it breaks:
arc> (load "macdebug.arc") ; see http://arclanguage.org/item?id=11806
nil
arc> (macwalk '(declare name prop a b c))
Expression --> (declare name prop a b c)
macwalk> :s
Macro Expansion ==>
(do1 (do a b c)
(undeclare name nil))
macwalk> :s
Macro Expansion ==>
(let gs2418 (do a b c)
(undeclare name nil)
gs2418)
macwalk> :s
Macro Expansion ==>
(with (gs2418 (do a b c))
(undeclare name nil)
gs2418)
macwalk> :s
Macro Expansion ==>
((fn (gs2418)
(undeclare name nil)
gs2418)
(do a b c))
macwalk> :s
Subexpression -->
(fn (gs2418)
(undeclare name nil)
gs2418)
macwalk> :s
Subexpression --> (undeclare name nil)
macwalk> :s
Value ==> nil
Value ==> gs2418
Value ==> (fn (gs2418) nil gs2418)
Subexpression --> (do a b c)
macwalk> :a
Value ==> (do a b c)
Value ==>
((fn (gs2418) nil gs2418) (do a b c))
((fn (gs2418) nil gs2418) (do a b c))
Note that we reach undeclare before the actual body is expanded!
We can hack it without after or do1 (or mutation, but I avoid that anyway).
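Roughly like this (a sketch of the idea; the details may have differed):

  (mac undeclare (name val)
    (wipe (metadata* name)) ; expansion-time effect
    val)                    ; the call itself expands into the literal, here nil

  (mac declare (name prop . body)
    (= (metadata* name) prop)
    `(do ,@body             ; body expands first, with the metadata in place
         (undeclare ,name nil)))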
This way, declare expands in the right order and we only undeclare once, since it'll expand into nil. The nil is "unnecessary", which seems to be why you want ignore, but it's a terribly pedantic point: ignore is already accomplished by dead code elimination (http://en.wikipedia.org/wiki/Dead_code_elimination). This isn't even a case of "sufficiently smart compilers" for vanilla Arc: mzscheme already implements the standard optimizations (function inlining, dead code elimination, constant propagation/folding, etc.; see http://download.plt-scheme.org/doc/html/guide/performance.ht...), which should be able to clean up whatever ac.scm generates. E.g.,
(mac foo ()
`(prn ',metadata*!name))
(declare name bar (foo))
Final idea: if expansion-time computation can't be avoided, you can expand the macros manually, if only for the sake of your readers. As a bonus, it does away with the dead code.
So your real question is why dots don't work in quasiquote, and what should be done to fix them. That's a question we can relate to a lot better than the one we originally thought you were asking, namely "I don't like dots".
Would someone mind being more explicit about how it doesn't work? After dotting the outer definition's rest parameter as well so that the first line reads,
"I still think that the bug in akkartik code is a result of too complicated one liner."
I'll make 2 objections to that:
a) That particular case was not a bug, but a performance issue.
b) The response to bugs isn't a more verbose formulation. Verbosity has its own costs: patterns that once fit on a single screen can no longer be seen side by side, which can breed new bugs.
If one-liners are to be avoided, you could just replace the call to reduce in your example with an explicit loop. But that's a bad idea, right?
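For instance (a made-up example; not the actual code in question):

  (reduce + '(1 2 3 4 5))  ; the one-liner

  (let acc 0               ; the explicit-loop version
    (each x '(1 2 3 4 5)
      (++ acc x))
    acc)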
Perhaps you're finding right-to-left hard to read. Stick with it; you'll find that it becomes easier to read with practice. Many of us started out with similar limitations. It's like learning to ride a bicycle; I can't explain why it was hard before and isn't anymore, but vast numbers of people had the same experience and you will very probably have it too. As you read more code you'll be able to read dense one-liners more easily. There is indeed a bound on how dense a line should be, but this example is nowhere near it.