ZL: A C/C++ Compatible Language with Hygienic Macros


maintained by Kevin Atkinson

ZL is a C-compatible and C++-like programming language that focuses on extensibility and on giving the programmer control over how high-level constructs (such as classes) are implemented. ZL achieves the first goal by means of a customizable grammar and a powerful Scheme-like macro system. ZL achieves the second goal by using the macro system to define high-level constructs from a C-like core language, similar in spirit to Scheme.

Details of ZL and its applications are given in my dissertation. The version of ZL corresponding to the dissertation is 0.03. Since version 0.03 there have been a few API changes; the most notable is that match_args is now match_f and match is now match_parts_f.

There have also been numerous enhancements to ZL since the dissertation was published. I have partly addressed the improvements to error messages and debugging support outlined in Section 11.2; implemented the ideas of Section 11.5.1 (Always Reparsing); and added basic support for extending the parser without having to modify the grammar, as mentioned in Section 11.6. In addition, I have added a high-level syntax for procedural macros, which includes support for quasiquotes and antiquotes.

The more up-to-date ZL Manual includes many parts of my dissertation, and if you are interested in ZL itself it is best to consult the manual instead. What the manual leaves out is how ZL can be used to mitigate ABI compatibility issues, which is the topic of my dissertation.

Parts of ZL and its applications are also described in the GPCE’10 paper (which uses version 0.02), and details on parsing and the macro system are described in the Scheme’11 paper (which uses version 0.03). The examples for both papers can be found in the test/ directory.


Open Character-Based Deep Convolutional Models: From Sentiment to Ad-Blocking


Parse 2.0

A shockwave was felt across the internet on January 28th, when Parse announced the service would be turned off a year from now. It was trending on Twitter within hours, and continued to trend for the next two days. Many were sad because they loved the Parse service, many were upset at Facebook, and many were discussing the trade-offs of external dependencies for your business. There’s a wrinkle… In the announcement, and in a subsequent post, Parse announced the launch of an open-source replacement for the service. Since these things happened on the same day, some have mistakenly assumed that they were related.

I have been a Developer Advocate at Parse for three years this month, and I take that title literally. I have always argued on behalf of developers internally, and provided external developers with realistic and honest answers. I heard every complaint, wish-list item, and limitation, and experienced them too. I always wanted Parse to be better than it was, to have more features, and to offer more customization. Now it can.

In May of this past year during a Facebook Hackathon, I hacked together a prototype of what is now Parse Server. It supported only the simplest of Parse functionality, saving and retrieving an object using one of the open-source client SDKs. It was the logical next step to me, after all of the open-source work we were doing, and I really wanted it personally. It took a lot of time and effort to convince others on the team that we should build this, some even said it was crazy and/or impossible (hah!). In the end, it was Parse co-founder Kevin Lacker who jumped in with me over a period of several months to build it. It’s not perfect and it’s not done, but it is a great start.

January 28th was supposed to be the release of Parse Server. Blog posts were written and reviewed, and I anticipated seeing the news headline “Facebook Open-Sources Parse.” I arrived at work excited, before learning that the headlines that day would be quite different. Undeterred, I finished the launch procedures to open the code and publish the modules, and set out on social media to make sure developers knew about the open-source server and that it wasn’t some migration tool.

Just one week after release, Parse Server has accumulated 4,700 stars, 900 forks, 200 issues, and 90 pull requests on GitHub. A large community is forming, discussing feature implementations, adding new features, and helping each other diagnose and fix problems. Many infrastructure providers are getting involved, coming up with easier and easier ways to host the server for developers, and I expect we’ll see at least one service pop up to host and manage Parse Server instances for developers who don’t want to touch the back-end.

Parse is an amazing tool, accelerating mobile development with fantastic client frameworks. Now that the server is open-source, the Parse community is free to grow far beyond what was possible before, and you are invited to be a part of it.

If you’d like to discuss Parse Server, feel free to email me at [email protected].


JavaScript Fatigue? Or something more?


There has been a lot of talk recently about a phenomenon called “JavaScript Fatigue”, whereby developers are getting confused and worn out trying to keep up with the slew of new tools and methodologies when it comes to developing JavaScript.

The problem is that they are right, especially when it comes to the tooling. We’ve suddenly gone from a couple of JavaScript files included in our HTML to needing to do the following:

- Decide what type of new, cutting-edge version of JS/ES you are going to write in
- Figure out what combination of Babel + plugins will work properly with this version
- Build your package.json and figure out what dependencies you need
- Figure out the correct Babel config that will compile your code
- Once you’ve compiled your code, you need to compile it again so that the module system you are using works properly
- But wait, you might also have an intermediary compile step, depending on if you are using JSX and the tooling around that

For someone like me, who wrote jQuery code and was overly concerned with the issues that sprang from cross-browser compatibility before ES2015 or Node was even a thing, this current process is completely baffling. Suddenly, one day we are compiling our JavaScript multiple times just to get it to work.

I’m not saying these are bad things. Things like React and Node have done wonderful things for web development and allowed us to move forward in the web space in new and exciting ways. The problem is that we’ve created such a barrier around these great things that it is becoming increasingly hard to take advantage of them in a way that is reasonable.

Let me tell you about the times I have tried to use Babel. Yes, plural, because up until recently I could never get it to work. I wrote my ES2015 code and went to find Babel, only to be told that I needed babel-cli. Once I got babel-cli, I then figured out that it does nothing by itself and that you need to download plugins. At this point I was confused: isn’t the point of Babel to compile modern JS code down to code we can use now? Why do we need to install plugins for something to do what it was built to do?

So I go and find the plugin, install that (thus my node_modules folder gets ever larger), and try again. Only this time it doesn’t work, because I need to learn the arcane combination of compiler flags just to get it to work. When did it become easier to compile using GCC/Clang than it is to compile JavaScript?

OK, so now I have a collection of JavaScript files that have been transformed into something the browser can use, except it can’t, because now I have this whole module system littered throughout my code. Apparently, figuring out how to deal with modules is a challenge left to the coder. So now I have to figure out how to concatenate my compiled JavaScript code so that it will work with these modules. So I go and grab Browserify because, at the time, it seemed like the most commonly used software to achieve this.

After again figuring out the ancient incantations of compiler flags that would make even kernel developers blush, and again writing more config files, I have finally compiled my ES2015 code down to something I can include in my browser.

After all this, I load up my test.html file and bask in the ‘hello world’ application I have just written in the new hotness that is ES2015. Except now I have two actual source files (I was experimenting with ES2015 modules), two compiled files, one concatenated build.min.js file, and a ridiculous amount of configuration and build files.

This is what JavaScript fatigue is: a complicated maze of dependencies that don’t quite fit together, tools that require mountains of configuration and CLI flags, and multiple steps that should be folded down into one tool but for some reason are not.

If JavaScript is to continue to thrive as it has been, then we as a community need to have a serious discussion about how we do things. We need to build tools that are easy to use and require one step, or at most a small number of steps. Imagine being able to download just one build tool that by default allows you to write something like ‘jsc /some/js/dir -o /some/build/dir --enable-jsx’, which not only does all the compilation but also handles things such as modules for you, so that we can spend more time building great applications and less time downloading and configuring tools.


Convert curl commands to Go code


This tool turns a curl command into Go code. (To do the reverse, check out sethgrid/gencurl.) Currently, it knows the following options: -d/--data, -H/--header, -I/--head, -u/--user, and -X/--request. It also understands JSON content types. As always, be sure to go fmt your code and pay attention to imports. There are probably bugs; please contribute on GitHub! For a quick way to generate a Go struct from JSON, see JSON-to-Go.
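
For example, a command such as

    curl -X POST -H 'Content-Type: application/json' -d '{"name":"example"}' https://api.example.com/items

corresponds to Go code roughly like the sketch below. This is a hand-written illustration of the kind of output to expect rather than the tool's literal output; the URL and JSON payload are made up for the example.

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"strings"
)

func main() {
	// The -d/--data payload becomes the request body.
	body := strings.NewReader(`{"name":"example"}`)

	// -X/--request and the URL become the method and target of the request.
	req, err := http.NewRequest("POST", "https://api.example.com/items", body)
	if err != nil {
		log.Fatal(err)
	}

	// Each -H/--header flag becomes a header on the request.
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Read and print the response body.
	respBody, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(respBody))
}

Notice how each curl flag maps onto a piece of the net/http request: the data flag becomes the request body, each header flag becomes a Header.Set call, and the request flag selects the HTTP method.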


© 2016 Matt Holt


The Command Line Murders


[ASCII art banner: THE COMMAND LINE MURDERS]

There’s been a murder in Terminal City, and TCPD needs your help.

To figure out whodunit, you need access to a command line.

Once you’re ready, clone this repo, or download it as a zip file.

Open a Terminal, go to the location of the files, and start by reading the file ‘instructions’.

One way you can do this is with the command:

cat instructions

(cat is a command that will print the contents of the file called instructions for you to read.)

To get started on how to use the command line, open cheatsheet.md or cheatsheet.pdf (from the command line, you can type ‘nano cheatsheet.md’).

Don’t use a text editor to view any files except these instructions, the cheatsheet, and hints.

By Noah Veltman
Projects: noahveltman.com
GitHub: veltman
Twitter: @veltman
