Vue Router is elegant and type safe enough for most use cases. But I want to experiment with how far TypeScript can help us.
Type-safe routing isn't easy to do.
The easiest way is to declare multiple classes and instantiate their parameter types, as typed-url does. But this is too verbose.
Another approach is to use functional combinators. To put it simply, a combinator is a higher-order function that abstracts various operations. Routing combinators are usually a bunch of functions that accept strings as static URL segments or functions as dynamic URL parameters. Both PureScript and Swift have libraries in this style. But monads are too monad-y. My head just explodes.
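To get a concrete feel for the combinator style, here is a rough TypeScript sketch of my own (a toy, not any particular library): each piece is a small parser over path segments, and sequencing merges the parameter types with an intersection.

```typescript
// Toy combinator-style routing sketch; all names here are made up.
type Parse<T> = (segments: string[]) => [T, string[]] | null;

// A literal segment matches a static part of the URL and adds no parameters.
const lit = (s: string): Parse<{}> => segs =>
  segs[0] === s ? [{}, segs.slice(1)] : null;

// A dynamic segment parses one value and records it under the given name.
const int = <K extends string>(name: K): Parse<{ [P in K]: number }> => segs => {
  const n = parseInt(segs[0], 10);
  if (isNaN(n)) return null;
  const param = { [name]: n } as any as { [P in K]: number };
  return [param, segs.slice(1)];
};

// Sequencing two routes intersects their parameter types.
const seq = <A, B>(pa: Parse<A>, pb: Parse<B>): Parse<A & B> => segs => {
  const ra = pa(segs);
  if (!ra) return null;
  const rb = pb(ra[1]);
  return rb ? [Object.assign({}, ra[0], rb[0]), rb[1]] : null;
};

// /user/:id — a successful match is typed as {} & { id: number }.
const userRoute = seq(lit('user'), int('id'));
```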
One unique way to provide typed routing is using reified generics! A demo video has illustrated how to implement it. (Spoiler: for a function with type A => Response, one can access the class via A.type and cast values with guard let param: A = ... in Swift. Whoa, reification is powerful.) The GitHub repo is here: https://github.com/NougatFramework/Switchboard
Compile-time reflection is ideal for this kind of routing. Yesod uses Template Haskell to do this; see the example. Macro paradise!
Scala has yet another unique construct: pattern matching. Tumblr's colossus is a great example of using pattern matching for type-safe routing.
And of course, Haskell has many type-safe routing libraries. Check out the review for more info.
JavaScript does not have powerful constructs like macros or pattern matching. Combinators are the only way to achieve type safety, but for client-side, component-based routing, declaring extra functions solely for routing doesn't feel natural. And TypeScript specifically is still too feeble to describe routing. However, by combining tagged templates, function overloading (or fun-deps), and intersection types (or row polymorphism), we can still do some interesting things. If this were written in Flow, even more interesting things could happen.
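Here is a small sketch of what I mean, assuming a hypothetical `route` tag and `param` helper (illustrative only, not a finished design): overloads collect each dynamic segment's name and parser, and intersection types accumulate the shape of the parameter object.

```typescript
type Parser<T> = (raw: string) => T;

interface Param<K extends string, T> {
  name: K;
  parse: Parser<T>;
}

function param<K extends string, T>(name: K, parse: Parser<T>): Param<K, T> {
  return { name, parse };
}

const int: Parser<number> = s => parseInt(s, 10);
const str: Parser<string> = s => s;

interface Route<P> {
  segments: ReadonlyArray<string>;
  parsers: Param<string, any>[];
  _params?: P; // phantom field carrying the inferred parameter shape
}

// One overload per arity: each dynamic segment adds `{ name: T }` via intersection.
function route(strings: TemplateStringsArray): Route<{}>;
function route<K1 extends string, T1>(
  strings: TemplateStringsArray,
  p1: Param<K1, T1>
): Route<{ [P in K1]: T1 }>;
function route<K1 extends string, T1, K2 extends string, T2>(
  strings: TemplateStringsArray,
  p1: Param<K1, T1>,
  p2: Param<K2, T2>
): Route<{ [P in K1]: T1 } & { [P in K2]: T2 }>;
function route(strings: TemplateStringsArray, ...params: Param<string, any>[]): Route<any> {
  return { segments: strings.raw.slice(), parsers: params };
}

// `userTabRoute` is inferred as Route<{ id: number } & { tab: string }>.
const userTabRoute = route`/user/${param('id', int)}/${param('tab', str)}`;
```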
Frankly, type safety in a router does not grant you much: it cannot check template code; it can only help you double-check the shape of parameters in $route. It can help you type the router instance better, but that requires all routes to have a name field.
This is only a sketch of a type-safe routing design. Useful? No. Concise? Partly. Does safety outweigh ease of use? No. Maybe it's only suitable for the type-safety paranoid.
Static typing has become a hot word in frontend land: hundreds of tweets and blog posts appear on social networks and XXXX weekly, and rival type checkers compete with each other on features.
Correspondingly, new frameworks design their APIs with type safety in mind. Angular 2 has partial type safety in ViewModel code, though not in template code. (There have been some efforts to pursue more type safety, though.)
React has full, and strict when checked with Flow, type safety by embedding templates in JavaScript(X).
But stakeholders of Vue.js, the third whee… another popular MVVM framework, might be disappointed by Vue's type-checker hostility…
Type Checker Hostile API
Vue provides a set of simple and elegant APIs via heavy use of reflection, which extinguishes the compiler's type inference.
It’s not Vue’s fault. Up to now static type checkers in JavaScript land have several limitations:
They cannot understand modifications to objects' types or perform key-wise type inference. (More elaboration later.)
Some cannot annotate a function's this type. (Not in Flowtype 0.30; supported in TypeScript.)
Suppose we are going to provide a type definition file for Vue’s config option.
```typescript
interface VueConfig<D, P, PD, C, M, W> {
  data?: D
  props?: P
  propData?: PD
  computed?: C & {[k: string]: (this: D & P & C & M) => {}}
  methods?: M & {[k: string]: (this: D & P & C & M) => {}}
  watch?: W & {[k: string]: (this: D & P & C & M) => {}}
}
```
This is not very precise but does highlight the basic idea of Vue's API. computed is a field whose values are functions with this pointing to the object that has mixed in data, props and methods. this in Vue's options is an object made out of reflection.
Then we write a function like this.
```typescript
function getVue<D, P, PD, C, M, W>(opt: VueConfig<D, P, PD, C, M, W>): D & P & C & M {
  return null // placeholder
}

let a = getVue({
  data: {
    a: 123,
  },
  watch: {
    a: function() {
      console.log(this.a) // oops
    }
  }
})
```
The compiler will complain about this.a in the watch function. Why? this cannot be inferred. To infer this, the compiler first has to infer D, P, C and M respectively. To infer D & P & C & M, the compiler has to infer the whole expression to resolve all the type arguments. But to infer the whole expression it first has to infer watch, where it needs to infer this. So comes a recursion. The compiler cannot be too eager to infer types, otherwise it will jump into a recursion trap. Sloth is a virtue here; even Betelgeuse cannot blame it.
Alternative API
Vue's original API is doomed to be hard to infer. However, we can build a thin wrapper layer to leverage type checkers.
I have two alternatives to present here. One is a chaining DSL, a novel approach that introduces type checking and inference into Vue. The other is more established and Angular-like: class decorators.
Chaining DSL
We can work around the recursion problem by nudging the compiler to do more diligent work. Because every method/function call returns a new type symbol, we can use it to escape the recursion trap:
```typescript
VueTyped.new()
  .data({msg: 'hehehe'})  // returns a new type symbol with field `msg`
  .method({method() { return 'hello: ' + this.msg }})  // creates a new type symbol with `msg` and `method`
  .get()  // typed as {msg: string, method(): string}
```
A quick explanation. data has a signature like data<D>(d: D): VueTyped<D & T>. The intersection type in return position mimics mixin behavior. method<M extends {[k:string]: (this: T) => any}>(m: M): VueTyped<T & M> is more complicated. The parameter M is required for the compiler to gather the properties of the option passed to the method call. M is bounded by a constraint that every function in the option must have this typed as the object we defined previously, so only defined properties can be accessed via this. The final intersection type in the return position acts the same as in data.
Note: method does not work in current TypeScript. It is probably a bug.
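For reference, here is a minimal sketch of what the chaining DSL's declarations could look like (names are assumed; a real wrapper would also track props, computed, and so on):

```typescript
// Sketch only: each call widens the type parameter so inference proceeds step by step.
class VueTyped<T> {
  static new(): VueTyped<{}> {
    return new VueTyped<{}>()
  }

  // Mixing in `data`: the returned type parameter is the intersection T & D.
  data<D>(d: D): VueTyped<T & D> {
    return this as any
  }

  // `this` inside each passed method is typed as the object built so far (T),
  // so only previously declared fields are accessible.
  method<M extends { [k: string]: (this: T) => any }>(m: M): VueTyped<T & M> {
    return this as any
  }

  // Hand the accumulated shape back to the caller.
  get(): T {
    return null as any
  }
}
```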
But step-wise inference still cannot resolve the watch and computed properties. The $Keys magic type or a keysof type does not exist in TS yet! Meanwhile, Flowtype does not support this-typed functions.
The computed option is even harder to handle. There is no way to define the this type in a getter/setter method. And if we pass a plain function as the value instead of a getter/setter, we cannot merge the computed properties into the resulting object.
```typescript
var a = VueTyped.new()
  .data({ msg: 'hehe' })
  .computed({
    get computed(this: typeof a) { // forward reference
      return this.msg + ' from WET computed!'
    }
  })
  .get()
```
It’s not DRY.
Furthermore, this approach does not support language service features like looking up a definition or finding all usages, because the intersection types need casting in the implementation to work.
Irreparable! Irredeemable! Irremediable!
However, this approach has some benefits. First, it is easier to extend its functionality: if one would like to add a vuex field to the options, it just requires defining a new method. It also prevents cyclic dependencies, because you cannot use fields before declaring them. The API itself is akin to the original one, and thus the implementation is very thin.
Class Decorator
This approach is much more conventional, and is discussed broadly in Vue's issues.
The basic idea is to define as many methods as possible in a class and to decorate fields to add Vue-specific logic.
```typescript
@VueComponent
class MyComponent {
  // vue-typescript makes sure to deep clone default values for array and object types
  @Prop someObjProp: {some_default: string} = {some_default: 'value'};

  // functions decorated with @Prop are treated as the default value
  @Prop someFuncProp() {
    console.log('logged from default function!');
  }

  someVar: string = 'Hello!';

  doStuff() {
    console.log('I did stuff');
  }
}
```
This TS-ish approach enables more compiler tooling, such as usage finding and definition lookup. Class decorators also guarantee that every method's this correctly points to the class instance, which cannot be achieved with the chaining DSL approach.
With higher abstraction comes more confusion. Indeed, class decorators smooth out the discrepancy between Vue and the type checker. But syntactically this API is much further from Vue's original one. Adding new APIs is also harder, because every decorator is hard-coded in the VueComponent decorator's code. For example, adding @vuex is almost impossible without rolling out a new VuexComponent. It also cannot transform all of Vue's API, such as watch and computed: { cache: false }, into idiomatic TypeScript, leaving some holes in type safety.
I have an alternative API in the bike-shedding stage, but it is not ready to present. Maybe I will try it later.
Conclusion
This article presented the type-safety problem in Vue and two ways to mitigate it. Though rewritten in ES2015 and type-checked by one of the most advanced type checkers, Vue was designed in the ES5 era and, ironically, is still designed for ES5 code.
Vue doesn't come with type safety in mind. But this might mirror some part of the community, where some developers have almost a kind of Stockholm syndrome: they have encountered so many type-unsafe ordeals that they are happy and proud of their lavish use of reflection, which then backfires on them.
Yet one should always keep a leery eye on a Static Typist's maniacal malarkey. A static typing system works the same way as BDSM: the more constraints, the more pleasure. Once having tasted the relish of bondage, a bottom will avariciously demand more complex tricks and more powerful constraints from the typing system. That urge is so strong that the bottom loses the incentive to lumber out of the fifty shades of types.
Vim already has a lot of plugin managers. But our Dark Vim Master has released neobundle's successor, a brand-new plugin manager called dein.vim.
Dein.vim is a dark powered Vim/NeoVim plugin manager.
To put it in plain English, dein.vim is a plugin manager focusing on both installation performance and startup performance. It looks like a neobundle with vim-plug's speed, or a vim-plug with neobundle's features. The best of both worlds.
In this blog post I will first show a minimal configuration for dein.vim, and then try to explain the dark power of dein, in case you are interested in what makes dein.vim so fast.
Minimal Configuration
Though dark powered, dein.vim supports vim and neovim.
Sadly, there is no installation script for dein.vim for now, so let's install it manually.
If you are familiar with neobundle/vundle, you will find dein.vim's path quite different. That is because dein uses a new approach to managing plugin sources.
Optionally, you can back up your vimrc for profiling, as I will show later.
Fire up your neovim/vim and call dein.vim's installation function:
```vim
:call dein#install()
```
Wait and brew yourself a cup of (instant) coffee. Then you can confirm your installation, for example by calling :Unite dein.
More features
The most important feature of dein.vim is lazy loading. Here are some typical usages worth mentioning.
```vim
" lazy load on filetype
call dein#add('justmao945/vim-clang',
  \ {'on_ft': ['c', 'cpp']})

" lazy load on command executed
call dein#add('scrooloose/nerdtree',
  \ {'on_cmd': 'NERDTreeToggle'})

" lazy load on insert mode
call dein#add('Shougo/deoplete.nvim',
  \ {'on_i': 1})

" lazy load on function call
call dein#add('othree/eregex.vim',
  \ {'on_func': 'eregex#toggle'})
```
The last two lazy-loading conditions are not available in vim-plug, and lazy loading on mode change is very convenient.
Two Tales of Plugin Manager
From here on I will talk about dein's internal features. These are my personal observations; please pardon my mistakes and misunderstandings.
Vim plugin managers have to take two aspects into consideration: installation time and startup time. You can read junegunn's blog articles, this and this, for more details on plugin managers.
dein.vim is fast because it uses dark power
JKJK. Actually, dein.vim optimizes both, by the following measures:
Parallel installation: dein.vim uses either vimproc or NeoVim's async job to download plugins concurrently.
Precomputed runtimepath: dein.vim copies all plugins' subdirectories into a cache directory. This merges all runtime VimScript files into one directory, so dein doesn't need to compute the runtimepath on startup.
Dein.vim also ditches commands in favor of function calls, which may also contribute to performance (I'm not sure, though).
Troubleshooting
Because dein.vim uses parallel processing, when errors occur during installation it may be hard to figure out which plugin went wrong. Usually you can see the error messages with :messages after all plugins finish fetching (successfully or not). Or you can use the dein#check_install function.
Also, the precomputed cache makes modifying plugins harder. You will need to call dein#recache_runtimepath() after a modification. This also applies to disabling plugins.
Lastly, if you happen to live in a country whose stupid censorship blocks GitHub access, you will need a proxy and should set g:dein#install_process_timeout to a larger value.
DI is gaining popularity in JavaScript. Still, those solutions are far behind JSR-330 compatible libraries in terms of type safety and performance.
This snippet strives to dig more type safety out of TS's type system. However, it can hardly achieve type safety equivalent to its Java counterparts, e.g., Dagger or Guice.
Solution
This DI snippet can, ideally, ensure every binding is resolved at compile time, which is a hard task for other DI solutions. The main idea is that an Injector can statically know its current bindings and judge whether the dependencies of a newly added binding can already be resolved by itself. Since a dependency graph is a DAG, there exists a topological order in which every binding's dependencies can be resolved solely by the preceding bindings. So once the injector is created and bindings are attached to it, we can assert that the dependencies can be resolved.
Here is a minimal example to illustrate this:
```typescript
var inj = Injector.create()
  .bind(Clerk).toClass(Clerk)
  .bind(Shop).toClass(Shop) // compiles. Clerk resolved before Shop
```
To implement this, Injector has a shadow type Base that indicates the resolved bindings. When a new binding is added to the injector, the compiler verifies that the incoming constructor/function only depends on classes the injector has already resolved. Concretely, every argument of the newly added constructor must be a subtype of Base.
```typescript
type Ctor = new (...args: Base[]) => T

injector<Base>
  .bind(NewClass)
  .toClass(NewClass as Ctor)
/* Make sure `toClass` here is defined like
     type toClass = (Ctor) => Injector<Base | T>
   The union type indicates the resolved types, and `T extends Base | T` holds. */
```
Base is a large union type storing all binding types, so every resolved type is a subtype of Base. And bind returns a binder with toClass / toFactory methods, which in turn return an injector whose resolved bindings are the union of the previous binding types and the newly added one. Hence, after bind ... toClass, the injector has the new class appended to its list of resolved types.
The implementation and tests can be found in a GitHub Gist.
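Below is a simplified sketch of the typings involved (names and details differ from the gist, and the runtime bookkeeping is omitted):

```typescript
// Sketch only: Injector<Base> tracks resolved bindings in its type parameter.
type Newable<T, Base> = new (...args: Base[]) => T;

class Binder<T, Base> {
  constructor(private injector: Injector<Base>, private key: any) {}

  // The constructor may only depend on already-resolved types (subtypes of Base).
  // The returned injector records T as resolved via the union Base | T.
  toClass(ctor: Newable<T, Base>): Injector<Base | T> {
    // a real implementation would store the binding here
    return this.injector as any;
  }
}

class Injector<Base> {
  static create(): Injector<never> {
    return new Injector<never>();
  }

  bind<T>(key: Newable<T, any>): Binder<T, Base> {
    return new Binder<T, Base>(this, key);
  }
}
```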
Problem
But TS's type system does not allow full-fledged DI in this way.
First, runtime types are erased. One must manually annotate dependencies for functions passed to the toFactory method. toClass fares better because TS supports emitDecoratorMetadata (maybe resolved in TS 2.0). TS's specific metadata implementation is also problematic: for cyclically dependent classes, at least one class's annotation is undefined (ES3/5), or the script crashes before it can run (ES6). Because metadata is attached to the class declaration, in the cyclic case one class must be used before it is declared.
TypeScript has a double-edged structural type system. To fully exploit DI's type checks, the user has to add a private brand field to every injectable class. This is not a good UI, though.
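For example (hypothetical classes), the brand field is what keeps two structurally identical classes from being interchangeable to the checker:

```typescript
// Without the private brand fields, Clerk and Cashier below would be
// structurally identical, and the compile-time resolution check could
// silently accept one in place of the other.
class Clerk {
  private brand: 'Clerk' = 'Clerk';
}

class Cashier {
  private brand: 'Cashier' = 'Cashier';
}

class Shop {
  private brand: 'Shop' = 'Shop';
  constructor(public clerk: Clerk) {}
}
```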
But even metadata is not enough. Runtime type data is first-order (in the type system's view), that is, every type is represented by its constructor; no generic information is emitted. To work around this, tokens are introduced.
Tokens alleviate the runtime type system's weakness and enable binding multiple implementations to one single type. But they also reintroduce the very problem this DI wants to solve in the first place. To work around point 1, we attached runtime types to constructors. Binding a token makes the type system think a type has been resolved, but a following binding may not resolve it at runtime, because resolution depends on the constructor.
```typescript
injector
  .bind(clerkToken).toClass(Clerk)
  .bind(Shop).toClass(Shop) // compiles, but runtime error
// toClass will analyze Shop's signature and extract the Clerk constructor.
// It works at the type level because Token<Clerk> enables the injector to resolve Clerk,
// but at runtime the injector can only resolve clerkToken, not Clerk.
```
Also, tokens with the same type cannot avoid this.
The workaround is, well, abusing string literal types, so that every token is different at the type level. This requires users to write more types and to cast string literals from the string type to string literal types. (TS's generic inference does not have something like T extends StringLiteral so that T is inferred as a string literal type.)
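A hedged sketch of what such a token could look like (not the gist's exact shape):

```typescript
class Token<T, Name extends string> {
  // phantom field so the bound type T participates in the token's type
  _type?: T;
  constructor(public name: Name) {}
}

class Clerk {}

// Explicit literal type arguments (or a cast) keep 'clerk' from widening to
// plain string, so the two tokens below are different types even though both
// are tokens for Clerk.
const clerkToken = new Token<Clerk, 'clerk'>('clerk');
const backupClerkToken = new Token<Clerk, 'backupClerk'>('backupClerk');
```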
Also, the toClass and toFactory signatures should differentiate between what can be resolved by constructor and what by token. This is technically possible: just overload these signatures to distinguish tokens from constructors. But the number of resulting overloads is exponential in the number of arguments: 2 ^ n, where n is the number of arguments.
Conclusion
To fully support type-safe DI and higher performance, a compiler extension or a code generator is needed. Java’s DI relies on annotations and code generation.
Maybe Babel can do this right now. But TypeScript still has a long way to go toward a customizable emitter.
NeoVim is awesome. But after its 0.1 release, NeoVim is not that awesome after all, in an old vimmer's eyes.
To support XDG configuration, NeoVim changed its default config paths. After that patch, NeoVim searches for ~/.config/nvim/init.vim rather than our old friend ~/.nvimrc. So I changed my zshrc to alias v to nvim -u ~/.nvimrc, so I could use my old configuration without relocating files.
It works fine, except that Deoplete always complains that its remote plugin is not registered. When I execute UpdateRemotePlugins as Deoplete's doc says, a new .-rplugin~ file always spawns in my working directory. Without that weird file, Deoplete will never work.
I think this is a configuration problem. But how can I figure out what happened? I started resolving this by guessing.
.-rplugin~ is the critical file on which Deoplete depends, so NeoVim should search for it when booting. I searched NeoVim's repository for its usage. Yes, it does appear in NeoVim's source, in neovim/runtime/autoload/remote/host.vim.
It reads:
```vim
let s:remote_plugins_manifest = fnamemodify(expand($MYVIMRC, 1), ':h')
  \.'/.'.fnamemodify($MYVIMRC, ':t').'-rplugin~'
```
Hmmm, NeoVim will look for .-rplugin~ in the same directory as $MYVIMRC. But where does $MYVIMRC come from?
Searching neovim’s doc gives me the answer. In :h starting, it writes:
If Vim was started with “-u filename”, the file “filename” is used. All following initializations until 4. are skipped. $MYVIMRC is not set.
Oh, so I have to relocate my vimrc file. After that, everything works :).
Error tracking is one of the awful parts of JavaScript. Server-side error tracking requires some configuration, which is not hard because the server is under the developers' full control, after all. For Android/iOS applications, error tracking is integrated into the platform framework. Unfortunately, error tracking in the browser is like surviving in a wild jungle.
Here are some common pitfalls.
Incompatible API
JavaScript errors in different browsers have different field names, as usual.
One should not have to care about API incompatibility among browsers; a small snippet can normalize those quirks. Curse the variable ieEvent.
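Roughly, such a normalization shim looks like this (a rough sketch of the idea; the IE-only property names other than errorCharacter are from memory and should be treated as assumptions):

```typescript
// Sketch of normalizing window.onerror arguments across browsers.
window.onerror = function (message, source, lineno, colno, error) {
  // Old IE exposes error details on window.event instead of the arguments.
  const ieEvent: any = (window as any).event || {};
  const report = {
    message: typeof message === 'string' ? message : ieEvent.errorMessage || 'unknown error',
    source: source || ieEvent.errorUrl || '',
    line: lineno || ieEvent.errorLine || 0,
    column: colno || ieEvent.errorCharacter || 0,
    stack: error && error.stack ? error.stack : ''
  };
  console.log(report); // send to your logging endpoint instead
};
```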
You will see a lot of Script error messages from your CDN-hosted JavaScript files. Browsers report this useless error intentionally: revealing error details to a web page on a different domain is a security issue. It can leak a user's privacy and help phishing and social engineering. To quote the SO answer:
This behavior is intentional, to prevent scripts from leaking information to external domains. For an example of why this is necessary, imagine accidentally visiting evilsite.com, that serves up a page with <script src="yourbank.com/index.html">. (yes, we’re pointing that script tag at html, not JS). This will result in a script error, but the error is interesting because it can tell us if you’re logged in or not. If you’re logged in, the error might be 'Welcome Fred...' is undefined, whereas if you’re not it might be 'Please Login ...' is undefined. Something along those lines.
And in Chromium's source code, we can see the error is sanitized if the corsStatus does not satisfy some condition.
To make scripts SharableCrossOrigin, one can add a crossorigin attribute to the script tags and add an Access-Control-Allow-Origin header to the script server's response, just like cross-origin XHR.
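Concretely, the two pieces look roughly like this (the URLs are placeholders):

```typescript
// The markup and the CDN's response header:
//
//   <script src="https://cdn.example.com/app.js" crossorigin="anonymous"></script>
//   Access-Control-Allow-Origin: https://www.example.com
//
// The same attribute can be set when injecting scripts from code:
const script = document.createElement('script');
script.src = 'https://cdn.example.com/app.js'; // placeholder URL
script.crossOrigin = 'anonymous';              // fetch the script with CORS
document.head.appendChild(script);
```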
Modern browsers will protect users' privacy and respect developers' CORS settings. But IE may screw up both. In some unpatched Internet Explorers, all script errors are accessible in the onError handler, regardless of their origins. But some patched Internet Explorers just ignore the CORS header and swallow all cross-origin error messages.
To catch errors in certain IEs, developers must manually wrap their code in a try {...} catch (e) { report(e) } block. Alternatively, one can use the build process to wrap functions, like this.
Zone should also be a good candidate for error tracking, and it does not require a build process, though I have not tried it.
Another issue in error tracking is ISPs and browser extensions. onError callbacks will receive every error on the host page, which usually includes many ISP-injected scripts and extension scripts that trigger false-alarm errors. So wrapping your own code in try ... catch may be a better solution.
UPDATE:
It seems Zone-like hijacking has already been used in error-tracking products, like Bugsnag. The basic idea is: if code is executed synchronously, then it can be try ... catch-ed in one single main function. If code is executed asynchronously, then, by wrapping every function that takes a callback, one can wrap all callbacks in try ... catch.
```javascript
function wrap(func) {
  // Ensure we only wrap the function once.
  if (!func._wrapped) {
    func._wrapped = function () {
      try {
        func.apply(this, arguments);
      } catch (e) {
        console.log(e.message, "from", e.stack);
        throw e;
      }
    }
  }
  return func._wrapped;
}
```
The above code wraps func in try and catch, so when an error occurs, it is always logged. However, calling the wrapper function at every async call site is impractical. We can invert it! Instead of wrapping callbacks, wrap the functions that consume callbacks, say, setTimeout, addEventListener, etc. Once these async entry points have been wrapped, all callbacks are on track.
And, because JavaScript is a prototype-based language, we can hijack the EventTarget prototype and automate our error tracking code.
```javascript
var addEventListener = window.EventTarget.prototype.addEventListener;
window.EventTarget.prototype.addEventListener = function (event, callback, bubble) {
  addEventListener.call(this, event, wrap(callback), bubble);
}
```
IE9 and friends
Sadly, IE does not give us a stack on error. But we can hand-roll our own call stack by traversing arguments.callee.caller.
```javascript
// IE < 9
window.onerror = function (message, file, line, column) {
  var column = column || (window.event && window.event.errorCharacter);
  var stack = [];
  var f = arguments.callee.caller;
  while (f) {
    stack.push(f.name);
    f = f.caller;
  }
  console.log(message, "from", stack);
}
```
Garbage Collector Issue
Error reporting is usually done by creating an Image whose URL points to the logging server, with the error info encoded in the query string.
```javascript
var url = 'xxx';
new Image().src = url;
```
But nothing holds a reference to the Image, so the JS engine's garbage collector may collect it before the request is sent. So one can assign the Image to a variable to hold a reference, and release the reference in the onload/onerror callback.
```javascript
var win = window;
var n = 'jsFeImage_' + _make_rnd(),
    img = win[n] = new Image();
img.onload = img.onerror = function () {
  win[n] = null;
};
img.src = url;
```
Angular2's official page wants you to use some dirty hacks to get the fastest hello world in Angular2. But then it immediately requires you to correct your first sin on the same 5-minute quickstart page. Maybe it is possible for a newcomer to set up Angular2 properly in 5 minutes, but reverting the previous dirty hack and then setting things up correctly is annoying. So here is the REAL Angular2 quickstart that does not piss you off.
DISCLAIMER: Knowledge about npm, TypeScript and SystemJS is recommended. This quickstart deliberately skips explanations of the config and shell code for real real speed.
Step 1: Create a new folder for our application project.
```bash
mkdir angular2-quickstart
cd angular2-quickstart
mkdir -p src/app
```
Step 2: Install npm packages
The first step of any front-end project, as usual…
```bash
npm init -y
npm i angular2@2.0.0-alpha.44 systemjs@0.19.2 --save --save-exact
npm i typescript live-server --save-dev
```
The pinned version numbers suck, but they can be removed after Angular2's public stable release. We need to install angular2 and TypeScript as dependencies, of course. SystemJS is used to load our app (alternatively one can use webpack or browserify). live-server gives us live reloading in development.
Step 4: Set up npm script tasks
Let's define some useful commands. Find and replace the scripts section in package.json:
It inherits the spirit of Golang's methods, Swift's extensions, Scala's implicit classes, Ruby's instance_exec, and Haskell's typeclasses. While ES6 normalizes, and thus constrains, inheritance in JavaScript, :: brings ad-hoc virtual methods to extend a class's behavior.
Thanks to JavaScript's prototype-based and dynamically typed nature, Paamayim Nekudotayim introduces the least semantic complexity and grants the best expressiveness. It is much more concise than method.call in earlier JS or instance_exec in Ruby. Using the double colon :: rather than the dot . provides a visual cue to the source and definition of a virtual method, which is clearer and less confusing than Swift's extension or Scala's implicit class. Extending native objects is trivial with this new syntax: we can easily write an underscore-like itertool library and apply its API directly on native arrays, which cannot be done without hacking (or screwing up) Array.prototype in ES6 and earlier. Both the proposal and the implementation of the Function Bind Syntax are easy and straightforward, again thanks to JS's nature.
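A tiny illustration (the :: form is a stage-0 proposal and needs a Babel transform, so it appears only in the comment; map here is a made-up virtual method):

```typescript
// A plain function that uses `this` as its receiver.
function map(this: number[], fn: (x: number) => number): number[] {
  const result: number[] = [];
  for (const x of this) {
    result.push(fn(x));
  }
  return result;
}

// Proposed syntax:   [1, 2, 3]::map(x => x * 2)   // → [2, 4, 6]
// which desugars to roughly:
const doubled = map.call([1, 2, 3], (x: number) => x * 2);
```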
However, the Function Bind Syntax does not work well with type checking, for now. Current candidate proposals for an ES type system do not cover the this keyword in function bodies. TypeScript simply waives type checking or forbids referencing this in the function body. Flow is the only type checker whose open methods are aware of the this context. However, the type checking of open methods is implicit to code authors: one cannot explicitly annotate a function's this type, and type checking of open methods is solely the compiler's business.
Nonetheless, it is a great feature! I'm lovin' it! Try it out!