Arc Forum
8 points by applepie 6003 days ago

I'm not necessarily thinking in the current Arc, but in the hypothetical hundred-year Arc.

- Could we make pattern matching through unification a part of the core language?

- Could we make some tasks easier (I mean terser) using Icon- or Prolog-style goal-directed programming?

- Could Haskell typeclasses be adapted to a dynamic language? (I know that is not possible in general, but maybe in the most common cases.) Or is CLOS-like OOP the way to go?

- An easy defstruct, as in Haskell, would also be a win.

- Which would be the best module system? Python's first-class modules are cool, but Haskell's module system also seems better suited to ADTs.

- Could we have generators with a lazy, list-like interface?

- To what point could the system be reflective? In the past, "flambda" (user-defined, first-class special forms) and redefinition of eval with "reflective towers" were considered bad ideas because of efficiency. Should we spend more cycles to allow such things?

- Should the read s-expressions also have associated meta-information?

- Could Arc live in an image?

- Could Arc be just the specification of its own compiler, making the bytecode an official part of the language (as pg once said)?

- Could Arc be blazingly fast?

- Could Arc be prepared for multiprocessor architectures, like the guys (pun intended ;)) behind Fortress want it to be? Or for the kind of intensive scalability Erlang seems to have?

Of course, all of these are implementable. There is probably a library for each of them in Cliki.

I am just asking which of them we should take up and encourage to be seen as part of "Arc's core". The commonality and practice of an idiom, more than its mere possibility, is what actually defines a language.

Currently, Arc's core is not much more than sugar for Scheme (modulo defcall, which is mostly unseen and great). This is not meant to be insulting; I am happy that people find Arc useful.

Maybe it is just that I am looking for a revolution, and Arc is Lisp, which is good but not revolutionary.



2 points by almkglor 6003 days ago

> - Should the read s-expressions also have associated meta-information?

IMO yes. It might even be possible to attach the meta-information to the nearest cons cell instead, so a cons cell might look like, say, (cons a d (o line-number) (o file-name)). That way symbols always equal other symbols, without having to worry about attached meta-information on each "symbol".

Might be useful to add in my VM then ^^.
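
Without VM support, a rough plain-Arc approximation (just a sketch; src-cons and the other names here are made up) would be for the reader to hand back tagged cells that carry their source position:

  ; sketch: a tagged cell carrying car, cdr, and source info
  (def src-cons (a d line file)
    (annotate 'src-cons (list a d line file)))

  (def src-car  (c) ((rep c) 0))
  (def src-cdr  (c) ((rep c) 1))
  (def src-line (c) ((rep c) 2))
  (def src-file (c) ((rep c) 3))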

> - Could Arc be just the specification of its own compiler, making the bytecode an official part of the language (as pg once said)?

Interesting. Might be useful to define Arc as a set of macros which just expand to a bunch of bytecode.

Anyway, I'm thinking my VM should be "bytecode" based. The bytecode won't have a numeric representation, but it will have a symbolic one, like ((localvar 0) (globalvar foo) (plus)).
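
For instance (a toy sketch assuming only those three example instructions; run-bytecode and its calling convention are made up, and the real instruction set is still open), symbolic bytecode like that is trivial to interpret:

  ; walk a symbolic bytecode list with a stack, a local frame, and a global table
  (def run-bytecode (code locals globals)
    (let stack nil
      (each insn code
        (case (car insn)
          localvar  (push (locals (insn 1)) stack)
          globalvar (push (globals (insn 1)) stack)
          plus      (let (b a) (list (pop stack) (pop stack))
                      (push (+ a b) stack))))
      (car stack)))

  (run-bytecode '((localvar 0) (globalvar foo) (plus))
                (list 39)
                (obj foo 3))
  ; => 42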

> - Could Arc be prepared for multiprocessor architectures, like the guys (pun intended ;)) behind Fortress want it to be? Or for the kind of intensive scalability Erlang seems to have?

Well, this is what I want to do ^^.

> - Could we have generators with a lazy, list-like interface?

Scanners? I've actually used scanners for something like this in Arki, the wiki in Anarki.
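
The basic trick behind a lazy, list-like interface is just a cons of thunks. A toy sketch in plain Arc (not how the Anarki scanners are actually implemented):

  ; lcons delays both halves; lcar/lcdr force them on demand
  (mac lcons (a d)
    `(cons (fn () ,a) (fn () ,d)))

  (def lcar (s) ((car s)))
  (def lcdr (s) ((cdr s)))

  ; an infinite stream of integers counting up from n
  (def nums-from (n)
    (lcons n (nums-from (+ n 1))))

  (lcar (lcdr (nums-from 0)))  ; => 1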

-----

1 point by rincewind 6003 days ago

The problem with unification and logical variables is that they would have to be dynamically scoped and work with a call-by-name evaluation strategy, at least in Prolog.

Anyway, you could build a Prolog-like DSL with pat-m and amb:

  http://arclanguage.org/item?id=2556
  http://arclanguage.org/item?id=6669
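
Just to give the flavor of amb, here is the classic ccc trick (a toy version with made-up names, not the implementation from the links above):

  ; choose picks the first alternative and records how to retry the rest
  (= fail* (fn () (err "no more choices")))

  (def choose (choices)
    (if (no choices)
        (fail*)
        (ccc (fn (k)
               (let old-fail fail*
                 (= fail* (fn ()
                            (= fail* old-fail)
                            (k (choose (cdr choices)))))
                 (car choices))))))

  ; picks 1 first, then backtracks to 2 when the caller calls (fail*)
  (let x (choose '(1 2 3))
    (if (is x 2) x (fail*)))
  ; => 2
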
While logic programming in Arc can make problem-solving easier, I think it introduces problems for libraries:

- How should you call functional functions from logical functions?

- How should you call logical functions from functional functions without violating referential transparency?

- Do closures over logical variables make any sense?

- Should goals be limited in scope? Lexically or dynamically? If I call two logical functions from different modules, I may or may not want the first to be retried when the second fails, depending on the context.

-----

1 point by tung 6003 days ago

How do Haskell's modules differ from Python's?

-----

2 points by applepie 6003 days ago

Python's modules are first class. For example:

  import module

  x = module          # the module object is an ordinary value
  x.function(...)     # it can be bound to another name and called through it

In Haskell, names are resolved at compile time...

Also, Haskell modules can define which names to export; in Python, everything is visible.

-----