Arc Forum | rocketnia's comments
2 points by rocketnia 4234 days ago | link | parent | on: Pluralize no longer supports lists

Not to reward you with more work to do, but how about updating the docs? XD

https://github.com/arclanguage/anarki/blob/master/help/strin...

    pluralize
    " Returns `str' pluralized. If `n' is 1 or a list of length 1, `str' is
      returned unchanged; otherwise an `s' is appended.
      See also [[plural]] "
  
    plural
    " Returns a string \"<n> <x>\" representing `n' of `x' in english, pluralizing
      `x' if necessary.
      See also [[pluralize]] "
https://github.com/arclanguage/anarki/blob/master/extras/arc...

  (def pluralize "n str" "Returns str pluralized; if n is 1 or a list of length 1, str is returned unchanged; otherwise an 's' is appended. Renamed from plural in arc3."
  (tests (pluralize 2 "fox") (pluralize '() "fish")))
  (def plural "n str" "Returns n and str pluralized. New in arc3."
  (tests (plural 2 "fox") (plural '() "fish")))
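The documented behavior is simple enough to sketch in Python for anyone without an Arc handy (this is a hypothetical translation for illustration, not Anarki's code):

```python
def pluralize(n, s):
    """Return s pluralized: unchanged if n is 1 or a list of
    length 1, otherwise with an 's' appended (so "fox" -> "foxs";
    it's a naive append, not English pluralization)."""
    if n == 1 or (isinstance(n, list) and len(n) == 1):
        return s
    return s + "s"

def plural(n, s):
    """Return '<n> <s>', pluralizing s if necessary."""
    return f"{n} {pluralize(n, s)}"
```

So `pluralize(1, "fox")` gives `"fox"`, while `plural(2, "fox")` gives `"2 foxs"`.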

-----

1 point by akkartik 4234 days ago | link

Yes, the docs are on my list :) Need to reacquaint myself with how to update them..

Edit: oh, you mean those help files! I always forget those exist.. I was thinking about kens's arc reference (http://arclanguage.github.io/ref) which shader prodded us earlier about: http://arclanguage.org/item?id=18536.

Do we still care about the help dir?

-----

2 points by rocketnia 4234 days ago | link

"oh, you mean those help files!"

Well, I linked to both. :)

--

"Do we still care about the help dir?"

It would be nice if both of those help resources I linked to could share the same codebase. They seem to share a lot of content already.

Er, actually I think kens's files might be generated using a script that scrapes those help files. ...Nope, I can't find that script, if it exists.

-----

1 point by akkartik 4234 days ago | link

OMG, I just realized how that help dir is used:

  arc> (help do)
  [mac] (do . args)
   Evaluates each expression in sequence and returns the result of the
      last expression.
      See also [[do1]] [[after]] 
This is awesome. Yes, worth updating. Let me think about how to sync it with the reference. (I'd also forgotten that the reference is generated from anarki. Is that still true, or is this copy redundant? Need to check..)

-----

2 points by rocketnia 4234 days ago | link

Sorry, I think I was mistaken. If the hypertext reference is generated from docstrings at all, I don't see how. Even if there were a tool, it would require some manual work afterward to place the definitions into appropriate categories.

-----

2 points by akkartik 4223 days ago | link

I've updated the docs. I've also taken out the help/ dir entirely and inlined all the docstrings into arc.arc and elsewhere.

I'm still unsure how to organize the arcfn reference guide at http://arclanguage.github.io/ref so that we remember to update it when we make changes, and so it's convenient to update the website. Another complication is that the arclanguage account contains multiple dialects of arc with subtle differences, and the current organization of documentation is misleadingly monolithic. Any suggestions to fix this most appreciated. (We discussed this previously a year ago: http://www.arclanguage.org/item?id=17774)

-----

4 points by akkartik 4221 days ago | link

Ok, I've taken a stab at a minimal-effort reorg of http://arclanguage.github.io. Before: http://i.imgur.com/hCpkFyj.png. After: http://i.imgur.com/KvwrEg9.png. It's more clear now that tryarc and /ref/ are for arc 3.1. Anarki is currently just more capable at the commandline.

-----

2 points by evanrmurphy 4221 days ago | link

Makes the Arc 3.1 vs. Anarki distinction much clearer to a new visitor. Nice job, akkartik.

-----

2 points by thaddeus 4222 days ago | link

Something like marginalia[1] might prove to be better than the arcfn docs. Not only because the docs would be fully integrated with the source code, but because it would also solve the multiple dialects problem. i.e. If some given code can be tagged with a dialect name then automation could also apply a dialect filter.

Of course this would probably be quite a bit of an undertaking.

[1] http://gdeer81.github.io/marginalia/ & https://github.com/gdeer81/marginalia

-----

2 points by akkartik 4222 days ago | link

The crux is colocating the rendered docs online with the repo. Would marginalia help us use github's hosting with github pages, managing branches, etc? If it does I think I'd be willing to go on a significant undertaking.

-----

2 points by thaddeus 4222 days ago | link

Marginalia is clojure specific so I expect it will not help other than to provide ideas.

To create an arc equivalent you probably need to build an arc library that provides some code inspection/dissection capabilities and, ideally, can also attach metadata to any given function or macro. With such a library you could then build a script to auto-generate the docs.

As for GitHub syncing; well no, I'm guessing users would need to trigger the script and then check in the updated docs.

This is still better for a few reasons: 1. Developers can generate docs locally that are in sync with the code base they are actually using (checked out or branched). 2. Even if the online docs get out of sync for a while, you're still only a script trigger away from updating all outstanding changes.

The alternative is what you just went through; having someone remind you to do the work manually as an after-thought, which I've only seen happen once.

-----

3 points by rocketnia 4234 days ago | link | parent | on: How do you use a hash field in a macro?

The reader parses the comma to make (unquote someobj!field) before ssyntax processing even occurs. I just wrote a long comment to describe the whole process, but this is the main point.

-----

5 points by rocketnia 4234 days ago | link | parent | on: How do you use a hash field in a macro?

For a step-by-step look at things...

  (mac somemacro (someobj)
    `(if ,someobj!field t nil))
When we enter any code at the REPL, it causes three phases to occur: reading, compilation, and execution.[1]

The read phase processes the ( ) ` , symbols and gets this data structure of cons lists and symbols:

  (mac somemacro (someobj)
    (quasiquote (if (unquote someobj!field) t nil)))
Note that someobj!field is a single symbol whose name contains an exclamation point character.

At this point you can probably see the problem already. What you may have expected was ((unquote someobj) (quote field)), but what we got was (unquote someobj!field). This is technically because Arc doesn't implement ssyntax at the reader level; instead it uses Racket's reader without modification, and then it processes the ssyntax in the next phase.

Even though the issue should already be clear, I'm going to go through the rest of the process to illustrate macroexpansion.

At this point the compilation phase starts. It expands the (mac ...) macro call, and then it re-processes whatever that macro expands to. As it goes along, at some point it also processes the (quasiquote ...) special form, and it expands the someobj!field syntax. The result of expanding someobj!field is (someobj 'field), and since this isn't a special form or macro, it's compiled as a function call.
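That ordering, reader first, ssyntax expansion only later in the compiler, is the whole crux, so here's a toy model of it in Python. The function names are made up for illustration; this is not Arc's actual internals:

```python
def expand_ssyntax(sym):
    """Toy model of the compiler's ! ssyntax pass: a symbol like
    'someobj!field' expands into the call (someobj (quote field)).
    S-expressions are modeled as nested Python lists of strings."""
    if "!" in sym:
        head, _, tail = sym.partition("!")
        return [head, ["quote", tail]]
    return sym

# The reader has already parsed ,someobj!field into
# (unquote someobj!field) -- one symbol, because the reader
# knows nothing about ssyntax:
read_result = ["unquote", "someobj!field"]

# Only when the compiler later walks the tree does the symbol expand:
compiled = [read_result[0], expand_ssyntax(read_result[1])]
print(compiled)  # ['unquote', ['someobj', ['quote', 'field']]]
```

Note the unquote wraps the *whole* expansion, which is exactly why you get `(unquote someobj!field)` instead of the `((unquote someobj) (quote field))` you might have wanted.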

The overall result is Racket code. In case it helps show what's going on, I just went to http://tryarc.org/ and ran the following command:

  ($ (ac '(macro somemacro (someobj) `(if ,someobj!field t nil)) (list)))
This produced a bunch of Racket code, which looks like this if I format it nicely:

  ((lambda ()
     (ar-funcall3 _sref _sig (quote (someobj)) (quote somemacro))
     ((lambda ()
        (if (not (ar-false? (ar-funcall1 _bound (quote somemacro))))
          ((lambda ()
             (ar-funcall2 _disp "*** redefining " (ar-funcall0 _stderr))
             (ar-funcall2 _disp (quote somemacro) (ar-funcall0 _stderr))
             (ar-funcall2 _disp #\newline (ar-funcall0 _stderr))))
          (quote nil))
        (begin
          (let ((zz
                  (ar-funcall2 _annotate (quote mac)
                    (let ((| somemacro|
                            (lambda (someobj)
                              (quasiquote
                                (if (unquote (ar-funcall1 someobj (quote field)))
                                  t
                                  nil)))))
                      | somemacro|))))
            (namespace-set-variable-value! (quote _somemacro) zz)
            zz))))))
Personally, I never think about the raw Racket code. Instead I pretend the result of compilation is Arc code again, just without any macro calls or ssyntax:

  ((fn ()
     (sref sig (quote (someobj)) (quote somemacro))
     ((fn ()
        (if (bound (quote somemacro))
          ((fn ()
             (disp "*** redefining " (stderr))
             (disp (quote somemacro) (stderr))
             (disp #\newline (stderr))))
          (quote nil))
        (assign somemacro
          (annotate (quote mac)
            (fn (someobj)
              (quasiquote
                (if (unquote (someobj (quote field)))
                  t
                  nil)))))))))
Either way, you can see the original (if ... t nil) is still in there somewhere. :)

Finally, this Racket code is executed. It modifies the global environment via Racket's namespace-set-variable-value! and puts a macro there. The macro is simply stored as a tagged value where the tag is 'mac and the content is a function. Then the result of execution is printed to the REPL as "#(tagged mac #<procedure: somemacro>)", and the REPL prompt appears again.

Later on, we execute the following command:

  (somemacro oo)
The reader parses this to make this s-expression:

  (somemacro oo)
Then the compilation phase starts. It starts to expand the (somemacro ...) macro call. To do this, it invokes the macro implementation we defined earlier. It passes in this list of s-expressions:

  (oo)
The macro's implementation is the function that resulted from this Racket code:

  (lambda (someobj)
    (quasiquote
      (if (unquote (ar-funcall1 someobj (quote field)))
        t
        nil)))
Or alternately, in my imagination, it's the result of this macroless Arc code:

  (fn (someobj)
    (quasiquote
      (if (unquote (someobj (quote field)))
        t
        nil)))
When this function is called, someobj's value is the symbol named "oo". When we try to call the symbol, we get an error.

The compilation phase terminates prematurely, and it displays the error on the console. The execution phase is skipped, since there's nothing to execute. Then the REPL prompt appears again.

I hope this gives people a good picture of the semantics of macroexpansion and ssyntax.

[1] Technically we might add printing and looping phases to get a full Read-Eval-Print-Loop. The eval step of the REPL does compilation followed by execution.

---

As I said above, the confusing point is probably that the reader doesn't give the result that you might expect when it sees ",someobj!field". On the one hand, this is a technical limitation of the fact that Arc uses Racket's reader which doesn't process ssyntax. On the other hand, I think it's debatable if this interpretation of the syntax is better or worse than the alternative.

-----

1 point by rocketnia 4243 days ago | link | parent | on: Did arc language project alive?

Nice one. :-p

-----

2 points by rocketnia 4247 days ago | link | parent | on: Streams

Now may be a good time to mention almkglor's lazy list library for Arc 2, which did extend 'car and 'cdr like you're talking about:

https://github.com/arclanguage/anarki/blob/arc2.master/lib/s...

---

Meanwhile, here's a generator library for Arc 2 by rkts: https://github.com/arclanguage/anarki/blob/arc2.master/lib/i...

And here are three relevant libraries I made as part of Lathe:

- A lazy list library, which happens to use a multimethod framework I made: https://github.com/rocketnia/lathe/blob/master/arc/orc/oiter...

- A generator library: https://github.com/rocketnia/lathe/blob/master/arc/iter.arc

- An amb operator library: https://github.com/rocketnia/lathe/blob/master/arc/amb.arc

I've never actually found much use for these, heh.

-----

1 point by akkartik 4247 days ago | link

Thanks for the links! Too bad none of them show example usage.. :) But no matter, I'll add some to this version. It's looking promising so far, I'll push it later today.

One thing I learned from malisper's code was the existence of afnwith in anarki (https://github.com/arclanguage/anarki/blob/87d986446b/lib/ut...), which is a neat alternative solution to my http://arclanguage.org/item?id=18036.

-----

1 point by akkartik 4246 days ago | link

Ok, I've turned streams into a tagged type. I had to just support them in car and cdr and carif to get common list operations to work. However, existing operations still return regular (eager) lists when you pass them lazy streams, so I followed malisper's idea of creating lazy variants that preserve laziness in the result.

I've also added unit tests for them in lib/streams.arc.t, which is my first serious attempt at using zck's nice unit-test harness with suites.

malisper, I took some liberties with your code, such as renaming the 's' prefix to 'lazy-'. I'm not attached to these things, so feel free to revert any of my changes you don't like. Thanks for a fun exercise!

I'm not sure how you're measuring overhead, but let me know if it seems slower than before.

https://github.com/arclanguage/anarki/commit/6180f0e65
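For anyone following along without Anarki checked out, the core trick is a cons cell whose cdr is a delayed thunk, forced and cached on first access. Here's a sketch of that idea in Python; the names echo scons/scar/scdr but this is an illustration, not the committed code:

```python
def scons(car, cdr_thunk):
    """A lazy cons: the cdr is a zero-argument function,
    forced on demand. Cell layout: [value, thunk-or-tail, forced?]."""
    return [car, cdr_thunk, False]

def scar(s):
    return s[0]

def scdr(s):
    if not s[2]:        # force the tail once and cache the result
        s[1] = s[1]()
        s[2] = True
    return s[1]

def integers_from(n):
    """An infinite lazy stream of integers, safe because the
    recursion is hidden behind the thunk."""
    return scons(n, lambda: integers_from(n + 1))

def lazy_firstn(n, s):
    """Eager prefix of a lazy stream, like firstn on a list;
    None plays the role of nil."""
    out = []
    while n > 0 and s is not None:
        out.append(scar(s))
        s = scdr(s)
        n -= 1
    return out

print(lazy_firstn(5, integers_from(0)))  # [0, 1, 2, 3, 4]
```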

-----

2 points by malisper 4246 days ago | link

Why append 'lazy-' to the beginning when we can just extend all of them to use scons when given a stream?

-----

1 point by akkartik 4246 days ago | link

Well, you need it for cons and the generator lazy-gen and other functions that don't take a stream. Maybe the others don't, but it seemed to me that sometimes it makes sense to return a list and sometimes not. So especially since functions like firstn do something useful without changing anything, maybe we should keep that behavior around. But it's just an idea. What do you think? Since you're using them for project Euler, it'll be good to hear your experiences.

-----

2 points by rocketnia 4246 days ago | link

I'm a little surprised to see 'afnwith. Anarki also has aw's 'xloop, which is the exact same thing as 'afnwith but with the "self" anaphoric variable renamed to "next".

-----

1 point by akkartik 4246 days ago | link

Ah, that's a much nicer name. I (too) found the presence of fn to be misleading since the afn is called immediately after it's created.

I've deleted afnwith from anarki: https://github.com/arclanguage/anarki/commit/a0052f031

In wart the name can be nicer still -- just loop since for does what arc's loop does, in the footsteps of classical C.

Edit 20 minutes later: I ended up going with recur instead of next, to form a loop.. recur pair. I also considered with loop.. recur to strengthen the connection with with for the alternating var/val syntax. (The other pair of keywords in wart is collect.. yield in place of accum acc.) https://github.com/akkartik/wart/commit/b7b822f4fb has an example use, which had me hankering for absz's w/afn (though with a better name; http://arclanguage.org/item?id=10125)

Edit 25 minutes later: It turns out xloop nests surprisingly cleanly. This does what you expect:

  loop (a 0)
    loop (b 0)
      prn a " " b
      if (b < 5) (recur ++b)
    if (a < 5) (recur ++a)
Edit 29 minutes later: Oh, loop.. recur is precisely what clojure uses!
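The pattern translates outside Lisp too. Here's one hypothetical way to mimic xloop/afnwith in Python, where recur is just the body's own handle on itself; a sketch, not wart's or Anarki's implementation:

```python
def loop(**bindings):
    """Run a body immediately with the given initial bindings,
    passing the body a `recur` callable that restarts it with
    new bindings. Used as a decorator, so the decorated name is
    bound to the loop's final result."""
    def run(body):
        def recur(**kw):
            return body(recur, **kw)
        return recur(**bindings)
    return run

@loop(a=0, acc=None)
def result(recur, a, acc):
    acc = acc if acc is not None else []
    acc.append(a)
    if a < 5:
        return recur(a=a + 1, acc=acc)
    return acc

print(result)  # [0, 1, 2, 3, 4, 5]
```

Like the nested wart example, inner uses of `loop` shadow the outer `recur` cleanly, since each body only sees the `recur` it was handed.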

-----

3 points by shader 4238 days ago | link

I feel like we need to extend the documentation on arclanguage.github.io to include the extra libraries on anarki, to improve discoverability.

-----


I think your second link's missing a hyphen: https://github.com/arclanguage/arc-nu/blob/c9642f4be0aad8839...

I think that's a very old version of Arc2js. There was at least one more version Pauan made after that, from scratch. Nowadays, Pauan's lisp-that-runs-on-JavaScript efforts seem to be focused on Nulan[1], which... seems to use the escodegen library[2] for JS generation.

And here I spent the weekend optimizing my own lisp-to-JS compiler, when I could've used escodegen. :-p Ah well, it'll probably be worth it because I'm very picky about stack overflows and such.

Well, that old version of Arc2js seems to be trapped in an orphaned set of commits that GitHub might garbage-collect at any time. I tried making a local mirror of the whole repo, but those commits don't come with it. Pauan probably wouldn't consider it to be code that's worth rescuing, but now's the time to figure out how to rescue it. :)

[1] https://github.com/Pauan/nulan

[2] https://github.com/Constellation/escodegen

-----

2 points by akkartik 4250 days ago | link

Crap. It's a real problem that branches aren't version controlled.

-----

2 points by akkartik 4249 days ago | link

I'm curious how you found that link, malisper.

-----

3 points by malisper 4249 days ago | link

I was just looking at some of the older posts here and came across the discussion of arc2js[0]. When that link didn't work I used Google and came across another post[1] where he mentioned an updated page and that one had a link that worked. I must have accidentally copied the broken link instead of the one to the actual page.

[0] http://arclanguage.org/item?id=14795

[1] http://arclanguage.org/item?id=15086

-----

2 points by ema 4249 days ago | link

I think escodegen is interesting when one wants to have source maps.

-----


I've been looking for Olin Shivers's "100% and 80% solutions" for a while, and this finally linked to it again!

Here's the link on its own: http://scsh.net/docu/post/sre.html

I keep thinking back to this... and begrudging all extant languages for being 80% solutions. :-p

-----

1 point by akkartik 4250 days ago | link

I'm not sure I'd ever seen that before, thanks.

"I am not saying that these three designs of mine represent the last word on the issues -- "100%" is really a bit of a misnomer, since no design is ever truly 100%. I would prefer to think of them as sufficiently good that they at least present low-water marks -- future systems, I'd hope, can at least build upon these designs."

But it's not clear that an 85% solution or 95% solution helps to solve the problem he identified earlier in the post:

"[The 80%] socket interface isn't general. It just covers the bits this particular hacker needed for his applications. So the next guy that comes along and needs a socket interface can't use this one. Not only does it lack coverage, but the deep structure wasn't thought out well enough to allow for quality extension. So he does his own 80% implementation. Five hackers later, five different, incompatible, ungeneral implementations had been built. No one can use each others code."

At best the number of different, incompatible systems will be lower over time. But there's no reason to believe that dissatisfaction with prior solutions will be more likely to build on them.

I'm curious if Conrad Barski was aware of Olin Shivers's regular expression library when he built http://www.lisperati.com/arc/regex.html and if so, if he built on the design. That would be a strong counter-example to my hypothesis.

-----

2 points by rocketnia 4249 days ago | link

"At best the number of different, incompatible systems will be lower over time. But there's no reason to believe that dissatisfaction with prior solutions will be more likely to build on them."

Could you clarify this? I think there might be a typo in here, but I don't know where.

-----

1 point by akkartik 4249 days ago | link

Say a 95% solution leaves 1 in 20 hackers dissatisfied, where an 80% solution leaves 1 in 5 hackers dissatisfied. The number of dissatisfied hackers goes down. But won't they still continue to react to their dissatisfaction by creating new libraries that don't build on prior attempts?

-----

2 points by rocketnia 4248 days ago | link

"The number of dissatisfied hackers goes down. But won't they still continue to react to their dissatisfaction by creating new libraries that don't build on prior attempts?"

Who's saying they won't? If you're willing to believe that 100% solutions will lead to fewer dissatisfied hackers and less duplication of effort, I can't tell what other claims you're trying to challenge here.

---

To bring in my personal goals, I'm interested in using programming to improve the expressiveness of communication, so that we have less severity in our petty misunderstandings, our terminology barriers, etc.

While I'm interested in reducing duplicated effort or saving people from dissatisfaction, I'm mainly interested in these things because they go hand-in-hand with establishing better communication. If hackers are more productive, they can communicate more; and if they communicate more, they can find satisfactory tools and avoid duplicated effort.

---

I think it'll be more interesting to look at the "80% solution" idea with a scenario that combines more than one solution, so that it's not a simple feedback loop anymore.

Suppose we have two completely unrelated projects A and B, with A being a 95% solution and B being an 80% solution.

Now let's say some hackers have goals in the A+B domain, so that people might suggest for them to use project A and/or B as part of the solution.

       +A    -A
  +B   76%    4%  = 80%
  -B   19%    1%  = 20%
      ----  ----
       95%    5%
Only 76% are actually satisfied with A and satisfied with B at the same time. About 4/5 of the remaining hackers can still build on top of A, but they have to reinvent the functionality they hoped to get from B.[1]

Unfortunately, sometimes it's very difficult to combine the two dependencies A and B in a single program. If A and B involve different operating systems, different programming languages, etc., then the task of combining them may be even more difficult than reinventing the wheel. In these cases, the 76% quadrant of happy hackers must redistribute itself among the other quadrants.

How does it get distributed? Well, the user community itself is a feature of the system, so I'm assuming a 95% solution tends to have a more helpful community than the 80% solution by definition. We also haven't considered any reason that a helpful hacker would make a different technology choice than an unhelpful one, so I'll make the simplifying assumption that all communities have the same population-to-helpfulness ratio.[2] So I'd guess the 76% "+A+B" quadrant is necessarily redistributed to the 19% "+A-B" and 4% "-A+B" quadrants while roughly preserving that 19:4 ratio.

       +A    -A
  +B    0%   17%
  -B   82%    1%
This fuzzy reasoning suggests that if an 80% solution B is incompatible with a 95% solution A, approximately 83% of hackers won't use B to achieve their A+B goals. So we see, an 80% solution can leave the vast majority of hackers dissatisfied.
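For the skeptical, the arithmetic behind both tables is easy to check. This little script just restates the assumptions above (independent satisfaction, and redistribution of the +A+B quadrant in the existing 19:4 ratio):

```python
# Quadrant probabilities, assuming satisfaction with the 95%
# solution A and the 80% solution B are independent.
p_a, p_b = 0.95, 0.80

both    = p_a * p_b              # +A +B -> 76%
only_a  = p_a * (1 - p_b)        # +A -B -> 19%
only_b  = (1 - p_a) * p_b        # -A +B ->  4%
neither = (1 - p_a) * (1 - p_b)  # -A -B ->  1%

# If A and B can't be combined, the +A+B quadrant redistributes
# between +A-B and -A+B, preserving the 19:4 ratio.
ratio = only_a / (only_a + only_b)
new_only_a = only_a + both * ratio        # ~82%
new_only_b = only_b + both * (1 - ratio)  # ~17%

print(f"+A+B {both:.0%}, +A-B {only_a:.0%}, -A+B {only_b:.0%}, -A-B {neither:.0%}")
print(f"after redistribution: +A-B {new_only_a:.0%}, -A+B {new_only_b:.0%}")
```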

But 80% solutions must be acceptable in our software, or else all our software will be monolithic projects that take lots of resources to develop and maintain, and we won't be communicating very effectively.

Personally, one thing I take away from this is that the world could really use a near-100% approach to modularity, so that almost any two well-designed projects A and B can be used together. I've been working toward this.

(However, I also think this could backfire to some degree. Reusable software that doesn't bit-rot is in a way immortal, and certain immortal software may get in the way of or add noise to our communication.)

---

[1] While this 4/5 may look like it corresponds to the 80% solution, that's just a coincidence. This 4/5 roughly comes from the remaining 20% and 5% that each solution doesn't cover. We'd get a similar 4/5 result if we compared a 96% solution with a 99% solution.

[2] Some reasons this might be wrong: A big community invites people to use it as a symbol for inclusion-exclusion politics, even if that's non-sequitur with the original purpose of the community. Sometimes technology has lots of users not because its community is helpful or political, but because few people have the kind of expertise it would take to reinvent this wheel at all (not to mention how many people don't even know they have a choice).

-----

1 point by akkartik 4248 days ago | link

Hmm, my interpretation of the socket complaint (that I quoted above) was that it was a qualitative rather than a quantitative argument.

Ah, I think I've found a way to reframe what Olin Shivers is saying that helps me make sense of it. The whole 80% thing is confusing. The real problem is solutions that don't work on the very first problem you try them on. That invariably sucks. Whereas a solution that is almost certain to support the initial use case of pretty much anyone is going to sucker in users, so that later when they run into its limitations it makes more sense to fix or enhance it rather than to shop around and then NIH ("not invented here", or reinvent) a new solution for themselves. In this way it hopefully continues to receive enhancements, thereby guaranteeing that still more users are inclined to switch to it.

:) I'm using some critical-sounding language there ("sucker") just to help highlight the core dynamic. No value judgement implied.

---

I think both of us are starting with the same goal of helping hackers communicate better, share code better. But we're choosing different approaches. Your approach of fixing modularity is compatible with the Olin Shivers model, but I think both of you assume that receiving enhancements is always a good thing, and things monotonically improve. I don't buy that more and more hackers adding code and enhancements always improves things in the current world. There's a sweet spot beyond which it starts to hurt and turns away users who go off trying to NIH their own thing all over again.

My (more speculative) idea instead is to ensure more people understand the internals so that the escape hatch isn't NIH'ing something from scratch but forking from a version that is reasonable to them and also more likely to be understandable to others. By 'softening' the interface I'm hoping to make it more sticky over really long periods, more than a couple of generations. I don't think modularity will work at such long timescales.

-----

2 points by akkartik 4243 days ago | link

I just came up with an example of my version of an "80% solution": http://www.reddit.com/r/vim/comments/22ixkq/navigate_around_...

It's not packaged up, all the internals are hanging out, some assembly is required. But it enumerates the scenarios I considered so that others can understand precisely what I was trying to achieve and hopefully build on it if their needs are concordant.

-----


"A straight arc to javaScript compiler would have to be really smart to produce fast code, and interop wouldn't be seamless either. So I decided to try writing my own language."

"Current status: It is already sort of usable, but I wouldn't recommend anyone else to use it for anything beyond throw away tinkering, because I still might decide to make big changes to the semantics."

You could be telling the story of my recent language projects too. :-p

---

"Self hosted compilers are a pain in the behind, one has to mentally switch between the language version the compiler is written in and the language version the compiler is compiling and when you break your compiler you can't use it to compile the fixed version."

That's a really good warning to hear, because I was headed toward having a self-hosted compiler myself.

Nevertheless, I think I'm still headed in that direction. Once you or I have a self-hosted compiler, if it can compile to a platform, it can also run on that platform. That's a way to escape from the original platform you built it on: Write a compiler backend, and then just move to that platform for developing in the future. This could be a pretty nice superpower when we have the silos of C#-based Unity, Java-based Android, etc.

-----

4 points by ema 4250 days ago | link

"Once you or I have a self-hosted compiler, if it can compile to a platform, it can also run on that platform."

In theory yes, but it wouldn't be a good idea for jasper because it is a relatively thin layer on top of JavaScript. So a backend for a different platform would have to emulate a lot of JS quirks, which would be complicated and produce slow code.

So if you want to make a cross platform language, it should be a thicker layer on the first platform from the beginning.

"Nevertheless, I think I'm still headed in that direction."

Always make backups of your compiler binaries, or don't overwrite old ones, so that in case of a bug you don't end up with a single, broken compiler binary.

-----


"Arc's attitude, championed by aw, is to avoid generalities and look for concrete code examples where an alternative makes for greater concision."

Yeah, this can be a pretty nice metric sometimes. :) This is probably one of the leading reasons to use weakly typed (or otherwise sloppy) versions of utilities.

-----


"Incredibly, arc has never had a substring function, so I can't be sure what pg would do."

Here you go:

  arc> (cut "example" 2 7)
  "ample"
  arc> (cut "example" 2 10)
  Error: "string-ref: index 7 out of range [0, 6] for string: "example""
(Transcript from http://tryarc.org)
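One detail worth noticing in that transcript: the out-of-range end errors rather than clamping, unlike Python slices. A hypothetical Python analogue of cut that matches that stricter behavior (illustrative helper, not from arc.arc):

```python
def cut(s, start, end=None):
    """Substring like Arc's cut, but raise on an out-of-range end
    index instead of silently clamping the way s[start:end] would."""
    if end is None:
        end = len(s)
    if end > len(s):
        raise IndexError(f"index {end - 1} out of range for {s!r}")
    return s[start:end]

print(cut("example", 2, 7))  # ample
```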

-----