It shouldn't be too hard if we provide some real reasons for people to use the language. A 'killer app' if you will.
That and production-readiness, so that they don't have to stop using their favorite 'toy' when they want to build something real. Some of that involves just making more libraries available.
Maybe starting with the package-management / CI system that pulls directly from GitHub, which akkartik and I were discussing earlier. It would be cool, not too hard to implement, and would make sharing libraries a lot easier.
Whoa! Pretty cool find. I keep hoping the community has some sort of comeback, but I'm not really sure how. Not to be too much of a downer -- I do really like so much about the design of Arc.
I'm having a hard time finding any documentation on ssyntax either. Must have been in some early forum posts... But your understanding is the same as mine.
For a little more background, the "." symbol is traditional Lisp syntax for a single cons cell. So (1 . 2) is a cons cell with car containing 1 and cdr containing 2. This is in contrast to the list '(1 2), which expands to two cons cells: (1 . (2 . nil)).
Cons cell notation is often used for pairs, such as k/v pairs in a hash table, since you don't need the ability to append additional content.
pg added "a.b" as "symbol syntax" to arc to provide shorthand for (a b), which is a very common pattern - calling a single argument function, looking something up in a table, etc. Furthermore, it chains, so "a.b.c" expands to ((a b) c) - the pattern you would want if you were going to look something up in a nested set of objects.
And as zck points out, symbols (quoted names) are very common keys for objects, since they are perfect for literal names. In fact, that's precisely how templates are used to make something analogous to classes in arc.
I'm not sure where I read this, but the explanation is this:
Using this obj:
arc> (= h (obj 1 2 three 4))
#hash((1 . 2) (three . 4))
We can access the key 1 with dot ssyntax:
arc> h.1
2
If we have a variable bound to 'three, we can use it the same way:
arc> (let val 'three h.val)
4
But if we don't, and want to use the quoted literal 'three with ssyntax, we can't, because it errors:
arc> h.'three
Error: "Bad ssyntax h."
arc> three
So the ssyntax for this takes the two characters that don't compose cleanly (the .' in the middle of h.'three) and collapses them into a single character: .' becomes !
arc> h!three
4
Looking around, I can't find where I read this either. I'm 90% sure this came from pg, and not from my brain, but that is distinctly not 100%.
Looks interesting. But is anyone actually able to run this?
Parrot is apparently a VM that Perl6 runs on, but the install script throws a syntax error when run with Perl6. With Perl5 it complains about a missing `parrot_config`.
As a result of my research trying to answer this question, I now think it would be interesting to use PyPy to implement arc, if nobody has done so yet.
I've been thinking more about the "renaming" vs "numbered versions" for packages, and I'm now leaning more strongly towards a distinct version number—or at least a separate version field, number or otherwise.
It's true that the major version number doesn't convey much, except the vague idea that the authors believe it to be better, or they wouldn't have written it. However, as long as they are unique and you have some easy way to find the "latest" version, it doesn't really matter if they're numbers or not. Commit hashes or code names should work just as well.
However, I think there's a simple security argument in favor of making the version a subfield, to prevent spoofing by malicious or mischievous third parties. Otherwise anyone could claim that their fork was "python-4".
An alternate solution might be to add a "successor" field, so that a package could identify another package as the rightful heir, even if it wasn't developed by the same team. That should make the open-source fork-based community development a little easier. You'd still have to know what the root package was to follow the chain though.
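To make the idea concrete, a manifest with a distinct version subfield plus an optional successor pointer might look something like this (all field names are hypothetical, purely for illustration):

```
{
  "name": "python",
  "version": "4.0.0",
  "successor": "python-ng"
}
```

Because the version lives under a name the registry has already bound to the original authors, a third party can't publish their fork as "python-4"; the most they can do is publish under their own name and hope the original package points to them as successor.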
It's not really the same as others being able to add tests to your automated suite. Rather, they add tests to their own package, and then the CI tool collects all tests indirectly dependent on your library into a virtual suite. Those tests are written to test their code, and only indirectly test yours. If a version of their package passes all of their tests with a previous version of your code, but the atomic change to the latest version of your code causes their test to fail, the failure was presumably caused by that change. The tests will probably have to be run multiple times to eliminate non-determinism.
It's still possible that someone writes code that depends on "features" that you consider to be bugs, or a pathologically sensitive test, so there may need to be some ability as the maintainer to flag tests as poor or unhelpful so they can be ignored in the future. Hopefully the requirement that the test pass the previous version to be considered is sufficient to cover most faulty tests though.
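A minimal sketch of that selection rule (hypothetical names, not any real CI tool's API): a dependent's test failure only counts against your change if that same test passed with your previous version, which filters out flaky and already-broken tests automatically.

```python
# Hypothetical sketch: decide which downstream test failures count as
# regressions caused by an upstream change. Test results are maps of
# test-id -> True (pass) / False (fail), from running a dependent
# package's suite against two versions of the upstream library.

def regressions(results_prev, results_cand):
    """Return tests that passed against the previous upstream version
    but fail against the candidate version."""
    return sorted(
        test for test, passed in results_cand.items()
        if not passed and results_prev.get(test, False)
    )

# A test that was already failing (flaky, or testing something else
# entirely) is ignored; only pass -> fail transitions get blamed.
prev = {"t_parse": True, "t_flaky": False, "t_roundtrip": True}
cand = {"t_parse": True, "t_flaky": False, "t_roundtrip": False}
print(regressions(prev, cand))  # -> ['t_roundtrip']
```

Re-running the suites a few times and requiring a consistent pass -> fail transition, as suggested above, would tighten this further against non-determinism.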
Ooh, that's another novel idea I hadn't considered. I don't know how I feel about others being able to add to my automated test suite, though. Would one of my users be able to delete tests that another wrote? If they only have create privileges, not update, how would they modify tests? Who has the modification privileges?
These are whole new vistas, and it seems fun to think through various scenarios.
I wasn't really considering that level of integration testing. It would certainly be cool to get detailed error reports from the CI tool. I don't see why you couldn't, since the code is published on the public package system, and you would be getting error listings anyway.
I don't think it would be creepy as long as it's not extracting runtime information from actual users. If it's just CI test results, it shouldn't matter.
Live user data would be a huge security risk. Your CI tests could send you passwords, etc., if they happen to pass through your code during an error case.
By "reduced need for tests" I didn't mean that the absolute number of tests would decline, but rather the need and incentives for the development team to write the tests themselves. Since they have the ecosystem providing tests for them, they don't need to make as many themselves. At least, that's how I understood the discussion.
So yes, if the package manager only enforced the tests you include in your package it would incorrectly discourage including tests. But if it enforces tests that _other_ people provide, you have no way around it. The only problem is how to handle bad tests submitted by other people. Maybe only enforce tests that passed on a previous version but fail on the current candidate?
Oh, I just realized what you meant. Yes, I've often wanted an open-source app store of some sort where you had confidence you had found every user of a library. That would be the perfect place for CI if we could get it.
Perhaps you're also suggesting a package manager that is able to phone home and send back some sort of analytics about function calls and arguments they were called with? That's compelling as well, though there's the political question of whether people would be creeped out by it. I wonder if there's some way to give confidence by showing exactly what it's tracking, making all the data available to the world, etc. If we could convince users to enter such a bargain it would be a definite improvement.
I wasn't saying there'd be a reduced need for tests. It's hard for me to see how adding CI would reduce the need for tests. I'm worried that people will say this stupid CI keeps failing, so "best practice" is to not write tests. (The way that package managers don't enforce major versions, so "best practice" has evolved to be always pinning.)
Unnecessary tests are an absolutely valid problem, but independent :)
We don't write tests just to have tests, but to verify that certain important functionality is preserved across versions. The trouble with many tests is that they are written to fill coverage quotas, and don't actually check important use cases. By using actual dependents and _their_ tests as a test for upstream libraries, we might actually get a better idea of what functionality is considered important or necessary.
Anything that nobody's using can change; anything that they rely on should not, even if it's counterintuitive.
The problem remains that most user code will still not exist in the package manager. It might be more effective if integrated with the source control services (github, gitlab, etc.), which already provide CI and host software even if it's not intended to be used as a dependency. The "smart package" system could then use the latest head that meets a specified testing requirement, instead of an arbitrary checkpoint chosen by the developers.
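That selection policy could be sketched roughly like this (data shapes are invented for illustration, not a real package-manager API): walk the commits newest-first and take the first one whose recorded CI results meet the requirement.

```python
# Hypothetical sketch: pick the newest commit that satisfies a testing
# requirement, instead of an arbitrary checkpoint tagged by developers.

def pick_head(commits, ci_results, required_pass_rate=1.0):
    """commits: commit ids, newest first.
    ci_results: commit id -> (passed, total) across dependent suites."""
    for sha in commits:
        passed, total = ci_results.get(sha, (0, 0))
        if total and passed / total >= required_pass_rate:
            return sha
    return None  # no commit meets the bar

commits = ["c3", "c2", "c1"]          # c3 is the latest head
ci = {"c3": (8, 10), "c2": (10, 10), "c1": (9, 10)}
print(pick_head(commits, ci))  # -> 'c2', the newest fully-passing commit
```

The same knob lets a cautious consumer demand a 100% pass rate while a bleeding-edge one accepts less, all against the same repository.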
I updated my original post based on conversations I had about it, and my updated recommendation is towards the bottom:
"Package managers should by default never upgrade dependencies past a major version."
I now understand the value of version pinning. But it seems strictly superior to minimize the places where you need pinning. Use it only when a library misbehaves, rather than always adding it just to protect against major version changes.
---
CI integrated with a package manager would indeed be interesting. Hmm, it may disincentivize people from writing tests though.
Interesting discussion. I do appreciate your observations that only the major version matters, and that could be equivalent to renaming the project, since it doesn't really communicate anything useful other than "differences exist".
However, I don't really agree that preventing version pinning will somehow encourage better coding standards. Hypothetically it makes sense; without version pinning, breaking changes would be more noticeable and painful, and users would have more incentives to switch to other libraries that don't do that. Unfortunately, software is not yet and possibly never will be the kind of competitive market required for that to be effective.
The problem is that the pain is entirely felt by the users; the developers may not even realize when one of their changes is "breaking", because they aren't running the users' code. Furthermore, since libraries are usually very different even from those with similar functionality, updating to accommodate the breaking changes will almost always be easier than switching to a different one with a better reputation (assuming it even exists). Since the user can't control the behavior of the developer, the best they can do to overcome breaking changes on their end is version pinning. It's not a good solution, but it's effectively the only solution.
Maybe in the long run people would read horror stories and avoid those libraries, but that still doesn't help much, because the developers usually don't have much incentive to actually meet the needs of their users--they are not really customers, merely beneficiaries. This would be different for commercial software, but then you won't be getting the resource through a package manager anyway.
One interesting solution would be a package manager with integrated CI functionality, so any minor update to a package is automatically tested against every other package that has it as a dependency. This wouldn't catch all of the errors, since most code wouldn't be published, but it would make the developers much more aware of and sensitive to breaking changes. If they still want to make the change anyway, they can change the name.
First, congrats on getting off Facebook and LinkedIn!
Yes, most of the web doesn't care about non-free JS.
However, for me, it's higher priority to do what I think is right, rather than what is popular. If I didn't care about free software, I'd just put stuff on Ebay instead.
That's also why I coded this new event calendar in Arc -- you can check it out in the Anarki repository -- because I'm part of an art collective where everyone has been publishing events exclusively on Facebook, which is super annoying when you don't want to be used by Facebook.
I wanted to make something that was just as easy to use but free, because many artists can't be bothered to use FTP to edit plain text files, and all the PHP calendars I looked at were overengineered, overcomplex piles of Drupal.
Anyway, I might look into PayPal's Python SDK, because Hy makes Python acceptable.
Hy doesn't have linked lists or tail call optimisation, but it has pretty cool bidirectional interop, i.e. you can write Python in Lisp, and also you can just import Hy modules into existing Python code, and even use the Python debugger for Lisp code.
I personally find the Python syntax super annoying; the colons after `if` and `else`, the `@` decorator syntax, its whitespace sensitivity, all the __underscores__, etc., so it's nice that someone is doing for Python what Clojure is doing for Java.
> this doesn't mean that most people don't care -- because plenty people I know get real pissed off...
Those people you know who get pissed are either A: not representative of 'the web' or B: not caring enough to stop doing what they are doing. So I will stand by "most of the web does not care" (and yes, I am implying that you have to care enough to act on it).
> I'd never recommend anyone to use a browser that runs non-free JS.
"most of the web does not care"
Unfortunately this is the world we live in and trust is currently a staple of the internet even as scary as that is to some people.
I have to trust that stripe.js is secure - that's what I'm paying them for, and if they get a bad reputation like PayPal has, then people, including myself, will stop using them and stop paying them. Frankly, for a credit-card payment script, I think their code should be audited by professionals who can see more than just keystroke loggers, and if there are any vulnerabilities, the auditors should have the power to shut them down.
If you think I'm not on your side, I'd suggest you're wrong, as:
* I deleted my facebook account 10 years ago.
* I deleted my minimal Linked-in account 2 years ago.
* I don't use an ad-blocker, but I:
  * make mental notes not to buy products whose ads popped up.
  * don't revisit websites that pop up ads.
  * avoid sites that have ads.
Using an ad-blocker is admitting defeat and I'm not there yet!
You do have a point there, as probably most people on the web run non-free JS.
I'd of course argue that this doesn't mean that most people don't care -- because plenty of people I know get real pissed off about video ads, anti-adblockers, pop-up forms, and all that jazz -- but they don't know that this is almost always non-free JS.
So, even if we were to assume that the non-free Stripe JS code is trustable, and ask people to run it, then I'd never recommend anyone to use a browser that runs non-free JS.
Yes, people can use adblockers, but there's plenty more nasty stuff non-free JS code can do, and does[1][2][3][4], so I wouldn't ask anyone to do that.
Yes, there could be free/libre JS malware, but like, who'd ever write
<script> /* Code to log users' keystrokes before they send their message */ </script>
BTW I'm thaddeus. I check in once in a blue moon, but decided to vote on this, and since I'd forgotten my password I created another account ;).
And it looks as though, because I created the account seconds before voting, I failed at least one test in 'legit-user':
new-karma-threshold* 2
It's possible I failed new-age-threshold* too, but I wasn't all that interested in investigating further.
I dunno; I understand the reasoning, but it still seems like a bad design choice. I'd much rather, circumstantially, be put through a better legit-user test on account creation than to see a forum introduction like that. Oh well, the odds are low for a new user to vote on a poll as a first action anyway. I just seem to always beat the odds :) haha.
We don't have access to the actual code running live on the site, but looking at policies from back in 2009 or so, it looks like you may have run afoul of the sockpuppet detector: https://github.com/arclanguage/anarki/blob/8e330f1242/lib/ne.... That's too bad; sockpuppet detection is overkill for a site this niche.