It inherits the spirit of Golang's methods, Swift's extensions, Scala's implicit classes, Ruby's `instance_exec` and Haskell's typeclasses. While ES6 normalizes, and thus constrains, inheritance in JavaScript, `::` brings ad-hoc virtual methods that extend a class's behavior.
Thanks to JavaScript's prototype-based and dynamically typed nature, Paamayim Nekudotayim introduces the least semantic complexity and grants the best expressiveness. It is much more concise than `method.call` in earlier JavaScript or `instance_exec` in Ruby. Using the double colon `::` rather than the dot `.` provides a visual cue to the source and definition of a virtual method, which is clearer and less confusing than Swift's extension or Scala's implicit class. Extending native objects is trivial with this new syntax: we can easily write an underscore-like itertool library and apply its API directly to native arrays, which cannot be done without hacking (or screwing up) `Array.prototype` in ES6 and earlier. Both the proposal and the implementation of Function Bind Syntax are easy and straightforward, again thanks to JS's nature.
However, Function Bind Syntax does not work well with type checking, for now. Current candidate proposals for an ES type system do not cover the `this` keyword in function bodies. TypeScript simply waives type checking or forbids referencing `this` in a function body. Flow is the only type checker whose open methods are aware of the `this` context. However, type checking of open methods is implicit to code authors: one cannot explicitly annotate a function's `this` type, so type checking of open methods is solely the compiler's business.
Nonetheless, it is a great feature! I'm lovin' it! Try it out!
For better syntax and user friendliness (and a lot more), Java 8 introduces the SAM type, the Single Abstract Method type, as it starts to embrace the functional programming world.
Before 1.8, Java already had a somewhat bulky and leaky kind of closure: anonymous inner classes. For example, starting a worker thread in Java usually requires a trivial but bloated anonymous class, without a lexical `this`.
I'm not explaining Java 8's new features here, as a Scala user has already relished the conciseness and expressiveness of functional programming.
So why would a Scala user care about SAM types in Java? I would say that interoperation with Java and performance are the main concerns here.
Java 8 introduces the `Function` type, which is widely used in libraries like `Stream`. Sadly, Java's `Function` is not Scala's. The Scala compiler just frowns at you when you use a native Scala function with `Stream`.
```scala
import java.util.Arrays

Arrays.asList(1, 2, 3).stream.map((i: Int) => i * 2)

// <console>: error: type mismatch;
//  found   : Int => Int
//  required: java.util.function.Function[_ >: Int, _]
```
Side note: because a function parameter is in a contravariant position, `Function[_ >: Int, _]` has a lower bound `Int` rather than an upper bound. That is, the function passed as an argument must have a parameter type that is a supertype of `Int`.
One can manually provide an implicit conversion here to transform Scala function types into Java function types. However, implementing such an implicit conversion is no fun: the implementation is either not generic enough or requires mechanical code duplication (another alternative is advanced macro generation). Compiler support is more ideal, not only because it generates more efficient bytecode, but also because it precludes incompatibility across different implementations.
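For illustration, a hand-rolled conversion might look like the sketch below; note that one such definition is needed per function arity, which is exactly the mechanical duplication mentioned above (the name `scalaToJavaFunction` is made up):

```scala
import java.util.function.{Function => JFunction}

// One conversion per arity: Function1 is shown here, Function2 etc. would
// each need their own definition.
implicit def scalaToJavaFunction[A, B](f: A => B): JFunction[A, B] =
  new JFunction[A, B] {
    override def apply(a: A): B = f(a)
  }
```

With this in scope, the earlier example can be written as `Arrays.asList(1, 2, 3).stream.map(scalaToJavaFunction((i: Int) => i * 2))`; the explicit call is used because the expected existential type does not always trigger the conversion automatically.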
SAM types are enabled by the `-Xexperimental` flag in Scala 2.11.x. Specifically, in Scala 2.11.5 SAM types are better supported: SAM gets eta-expansion (one can use a method from another class as a SAM), overloading (overloaded functions/methods can also accept functions as SAMs) and existential type support.
Basic usage of SAM is quite simple: if a trait/abstract class has exactly one abstract method, then a function with the same parameter and return types as that abstract method can be converted into the trait/abstract class.
```scala
trait Flyable {
  // exactly one abstract method
  def fly(miles: Int): Unit
  // optional concrete members
  val name = "Unidentified Flyable Object"
}

// to reference the SAM type itself,
// create a named self-referencing lambda expression
val ufo: Flyable = (m: Int) => println(s"${ufo.name} flies $m miles!")
ufo.fly(123) // Unidentified Flyable Object flies 123 miles!
```
Easy peasy. So for the stream example, if the compiler has the `-Xexperimental` flag, Scala will automatically convert the Scala function into Java's function, which grants Scala users a seamless experience with the library.
Usually, you don't need SAM in Scala, as Scala already has first-class generic function types, eta-expansion and a lot more. SAM reduces readability just as implicit conversions do. One can always use a type alias to give a function type a more understandable name instead of using a SAM. Also, SAM types cannot be pattern matched, at least for now.
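For instance, a type alias can name a function type directly, with no conversion involved (a small sketch; the names are made up):

```scala
// A readable name for a plain function type, no SAM needed.
type Miles     = Int
type FlyAction = Miles => Unit

val glide: FlyAction = m => println(s"gliding $m miles")
glide(42)
```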
However, interoperating with Java requires SAM. Self-referencing SAMs give you additional flexibility in designing APIs. SAM also generates more efficient bytecode, since SAM has a native bytecode counterpart. And using anonymous classes for event handlers or callbacks can be just as pleasant in Scala as in Java.
Anyway, adding a feature is easy, but adding a feature that couples with every edge case is hard. Scala already has bunches of features (variance, higher-kinded types, type-level programming, continuations); whether SAM will gain popularity is still an open question.
Recently the TypeScript team released TypeScript 1.4, adding a new feature called union types, which is intended for better incorporation of native JavaScript. Type guards, the natural dyad of union types, also come into the TypeScript world. But sadly, Microsoft chose a bizarre way to introduce type guards, as they say in the manual:
> TypeScript now understands these conditions and will change type inference accordingly when used in an if block.
It accepts, and only accepts, conditional statements like `if (typeof x === 'string')` as type guards. TypeScript now has types like `number | string`, meaning a value of either `number` or `string`. Users can further refine the type by comparing against the value `typeof` gives, as in the example below:
```typescript
function createCustomer(name: { firstName: string; lastName: string } | string) {
    if (typeof name === "string") {
        // Because of the typeof check in the if, we know name has type string
        return { fullName: name };
    }
    else {
        // Since it's not a string, we know name has
        // type { firstName: string; lastName: string }
        return { fullName: name.firstName + " " + name.lastName };
    }
}

// Both customers have type { fullName: string }
var customer = createCustomer("John Smith");
var customer2 = createCustomer({ firstName: "Samuel", lastName: "Jones" });
```
I would rather say this is a bad idea because:
it intermixes type-level constructs with value-level constructs.
for a complex and flexible language like JavaScript, Microsoft's approach cannot handle the many kinds of expressions that say something about types.
Value-level constructs are expressions or statements dealing with values, e.g. assignment and comparison. `typeof` and `instanceof` in JavaScript are value-level constructs because they produce runtime values (a string and a boolean, respectively) that can be assigned to other variables or compared with them. Value-level constructs do imply types, say, when creating a new object of a specific type, but they do not explicitly manipulate the types of expressions; there is no type casting JavaScript can do. Type-level constructs, on the other hand, deal with types, for example type annotations and generics.
Doubling `typeof` as a type guard blurs the demarcation between the type level and the value level, and naturally reduces a program's readability (a somewhat subjective claim, though). A variable can, without distinct syntax, change its type in a conditional block. `if` branching is ubiquitous in TypeScript programs, from hello-world toys to cathedral-like projects, so it's quite hard to find the "type switch" for a union type among the other, irrelevant `if`s. Also, one has to pay attention to call the correct methods on the same variable in different branches. So TypeScript's type guard introduces a new type scope, different from both lexical scope and function scope. It also burdens the compiler, because the compiler now has to check whether the condition inside an `if`'s parentheses is a type guard.
What's worse, a type guard is a value-level construct, so it can interact with all other language constructs, but Microsoft does not intend to support that. Code that expresses the very same runtime checks in other forms does not compile in TypeScript 1.4.1, even though it would run correctly in plain JavaScript.
Indeed, TypeScript is not the first to mix the type level and the value level. Language constructs like pattern matching also do that (and usually introduce bugs related to type inference; see the Scala bug tracker). But at least pattern matching is a specialized syntax that does not interact much with other syntax. Type guards are, well, too ubiquitous to be good.
Type-level programming is a technique that exploits the type system to represent information and logic, up to the language's limits.
Since values are encoded in a variable's type, type-level programming drives the compiler to validate logic or even determine a program's output. All validation and computation are conducted statically at compile time, so the greatest benefit of type-level programming is its safety and reliability.
As a rule of thumb, the more dynamic code is, the more flexibly it can compute. Type-level programming requires all logic to be encoded in the source code. It is too hard to cram all logic into the type system, as handling whimsical input from external sources is either impossible or reduces the source code to an unwieldy state machine. So the niche of type-level programming is usually encoding, pickling or parsing.
But there is a field where code is statically written: GUI. HTML templates are hard-coded in source. Type-level programming can serve as linting and validation when editing HTML, especially when authoring web components. A piece of an HTML fragment can be encoded as an ordinary object, with its type denoting its structure. Once the structure of the HTML is fixed, the output of JavaScript and CSS can be determined as well. This is even more helpful when one wants to build components: a tab-container must have a tab-pane as a child, and a tab-pane must live within a tab-container. The current approach to constraining HTML structure is to encode the requirements in JavaScript and check them at runtime; for example, Angular uses `require: '^parentDirective'` to express the constraint and enable directive communication. If the component is constructed programmatically, a type annotation is a natural way to express the constraint (as in Angular 2.0's `query<ChildDirective>`). We can go further in a language with a full-blown type system.
```scala
trait Tag
class Concat[A <: Tag, B <: Tag](a: A, b: B) extends Tag
trait NestTag[A <: Tag] extends Tag {
  type Child = A
}
trait Inline extends Tag
trait Block  extends Tag

// the lower bound `>: Null` lets `null` serve as the "no child" default
case class div[T >: Null <: Tag](t: T = null) extends NestTag[T] with Block
case class p[T >: Null <: Tag](t: T = null) extends NestTag[T] with Block
case class a[T >: Null <: Inline](t: T = null) extends NestTag[T] with Inline

implicit class PlusTag[A <: Tag](a: A) {
  def +[B <: Tag](b: B) = new Concat(a, b)
}

val ele = div(
  p(
    a()
  )
)

// jQ and has are a hypothetical query API, not implemented here
val r = jQ(ele).has[p]
println(r)
```
The code above is just a demo. All HTML elements have types that denote their structure, and one can tell whether a tag is inside an HTML element by calling `jQ(ele).has[Tag]`. (Note: `ele` is a value-level variable and `Tag` is a type-level constructor.) And an inline element cannot contain a block element, because an inline element's child must be a subtype of `Inline`.
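For instance, given the definitions above, the `Inline` bound on `a` is enough to reject illegal nesting at compile time (a tiny illustration, not part of the original demo):

```scala
val legal = p(a())        // a block element containing an inline element: fine
// val illegal = a(div()) // does not compile: div[...] is a Block, not an Inline
```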
Programmatic markup has several benefits:
no switching between script and template
static type checking
component dependency requirements
component communication
subtyping, inheritance… Classical OOP features
relatively clean layout (though not as concise as Jade/Slim)
The biggest problem is, well, that type-level templates are strongly constrained by the host language. Dynamic languages simply cannot have them. Users of classical statically typed languages without type inference cannot afford the verbosity of deeply nested types. And the languages capable of type-level programming take different approaches to it, with different implementations.
After all, type-level programming is too crazy… at least for daily business logic.
Generalized type constraints, also known as `<:<`, `<%<` (deprecated though) and `=:=`, also known as type relation operators, or whatever you want to call them, are not operators but identifiers. It's quite confusing for newcomers to distinguish them from operators, well…, from identifiers which are not that esoteric.
This is just a plain Scala feature: non-alphanumeric symbols can act as legal identifiers, just like the `+` method. More specifically, these are type constructors. But before we inspect their implementation, let's first consider their usage.
Usage
Suppose you want to implement a generic container for every type; however, you also want to add a special method that only applies to a `Special` type. (Notice: this is different from the annotation `@specialized`, which deals with the JVM's primitive types. Here `Special` is just a plain old Scala type.)
```scala
class Container[A](value: A) {
  def diff[A <: Int](b: Int) = value - b
}

// BOOM
// error: value - is not a member of type parameter A
//        def diff[A <: Int](b: A) = value - b
```
Why? The type bound `A <: Int` does not work: `A` has already been defined at the class declaration, and within the class body the Scala compiler requires every type bound to be consistent with `A`'s definition. Here, `A` has no bound, so it is bounded by `Any`, not `Int`.
Instead of setting a type bound, a method can ask for a specific, ad-hoc piece of "evidence" about a type.
```scala
scala> class Container[A](value: A) {
     |   // other generic methods for A
     |   /* blah blah */
     |
     |   // specialized method for Int
     |   def addIt(implicit evidence: A =:= Int) = 123 + value
     | }
defined class Container

scala> (new Container(123)).addIt
res11: Int = 246

scala> (new Container("123")).addIt
<console>:10: error: could not find implicit value for parameter evidence: =:=[java.lang.String,Int]
```
Cool. The evidence is an implicit provided by Scala's `Predef`. And `A =:= Int` is just a type, like `Map[Int, String]`, but written infix thanks to Scala's syntactic sugar.
Scala does not impose the type constraint until the specific method is called, so `addIt` does not violate `A`'s definition. Still, given the implicit evidence, the compiler can infer that `value` in `addIt` can be treated as an `Int`.
As stated before, type constraints are ad hoc, so they can achieve more specific type inference than type bounds. (To be fair, this is the power of implicits.)
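As a sketch of the kind of definition discussed next (the original snippet is reconstructed here from the surrounding text), consider a method whose only constraint is a type bound:

```scala
def foo[A, B <: A](a: A, b: B) = (a, b)

// scala> foo(1, List(1, 2, 3))
// res0: (Any, List[Int]) = (1,List(1, 2, 3))
```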
`1` is clearly an `Int`, but why does the compiler infer it as `Any`? The `B <: A` bound requires the first argument's type to be a supertype of the second's. `A` is therefore inferred as the most general type covering both `Int` and `List[Int]`: `Any`.
`<:<` comes to the rescue.
```scala
def bar[A, B](a: A, b: B)(implicit ev: B <:< A) = (a, b)

scala> bar(1, List(1, 2, 3))
<console>:9: error: Cannot prove that List[Int] <:< Int.
```
Because generalized type constraints do not interfere with inference, `A` is `Int` here. Only then does the compiler look for evidence of `<:<[List[Int], Int]`, and fail. (Actually, implicits can feed type information back into inference; see type-level programming's `HList` and the Scala collection library's `CanBuildFrom`.)
Also, implicit conversions do not affect `<:<`:
```scala
scala> def foo[B, A <: B](a: A, b: B) = print("OK")

scala> class A; class B

scala> implicit def a2b(a: A) = new B

scala> foo(new A, new B) // implicit conversion!
OK

scala> bar(new A, new B) // does not work
<console>:17: error: Cannot prove that B <:< A.
```
Implementation
Actually, `=:=` is just a type constructor in Scala. It is somewhat like `Map[A, B]`; that is, `=:=` is defined like:
```scala
class =:=[A, B]
```
So inside `implicitly`'s brackets, `Int =:= Int` is just a type. `A =:= B` is the infix form of type parameterization for a non-alphanumeric identifier; it is equivalent to `=:=[A, B]`.
So one can define implicits for `=:=`, so that the compiler can find them:
```scala
implicit def EqualTypeEvidence[A]: =:=[A, A] = new =:=[A, A]
```
So, when `implicitly[A =:= B]` is compiled, the compiler tries to find the correct implicit evidence.
If and only if `A` and `B` are the same type, say `Int`, can the compiler find `=:=[Int, Int]`, as the result of the implicit function `EqualTypeEvidence[Int]`.
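With the toy definitions above in scope (shadowing the real ones from `Predef`, purely for illustration), resolution behaves as described:

```scala
implicitly[Int =:= Int]       // compiles: EqualTypeEvidence[Int] yields =:=[Int, Int]
// implicitly[Int =:= String] // does not compile: no evidence relates two different types
```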
More compelling is `<:<`, the conformance evidence; it leverages variance annotations in Scala:
```scala
class <:<[-A, +B]
implicit def Conformance[A]: <:<[A, A] = new <:<[A, A]
```
Consider: when a `String <:< java.io.Serializable` is needed, the compiler tries to find an instance of `<:<[String, j.i.Serializable]`. It can only find an instance of the type `<:<[String, String]` (or the alternative `<:<[Serializable, Serializable]`). But given the variance annotations of `<:<`, `<:<[String, String]` is a subtype of `<:<[String, Serializable]`: `String` is the very type `String`, and `String` is a subtype of `Serializable` with `B` in a covariant position; or, from the other direction, `Serializable` is a supertype of `String` with `A` in a contravariant position, and `Serializable` is the very type `Serializable`. So the compiler finds the correct implicit instance as evidence that `String` is a subtype of `Serializable`, by the principle of subtype substitution (Liskov).
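The same toy definitions show the widening in action (again shadowing `Predef`'s real `<:<`):

```scala
// Conformance[String] gives <:<[String, String]; covariance in B widens it
// to <:<[String, java.io.Serializable], so the search succeeds.
implicitly[String <:< java.io.Serializable]

// No widening helps in the other direction, so this does not compile:
// implicitly[java.io.Serializable <:< String]
```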
Similarly, we can define:
```scala
// Conversion evidence
class <%<[A <% B, B]
implicit def Conversion[A <% B, B] = new <%<[A, B]

// Contra-conformance
class >:>[+A, -B]
implicit def Contra[A] = new >:>[A, A]
```
Magic, right? The actual implementation uses a singleton pattern so it is more efficient. For this illustrative post, a sloppy implementation is just fine :).
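For the curious, the singleton trick looks roughly like the sketch below (in the spirit of `Predef`, not the verbatim source): one identity instance is shared and cast to whatever `A =:= A` is requested.

```scala
object Evidence {
  // =:= is itself a function From => To (the identity), which is also why the
  // evidence can double as an implicit conversion.
  sealed abstract class =:=[From, To] extends (From => To)

  // One shared instance serves as evidence for every A =:= A.
  private val singletonEquals = new =:=[Any, Any] { def apply(x: Any): Any = x }

  implicit def tpEquals[A]: A =:= A = singletonEquals.asInstanceOf[A =:= A]
}
```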
WTF have I done? I just mechanically typed what I ferreted out on Google and StackOverflow. WTF is `play.core.StaticApplication`, which gets just one confusing page in the docs? Speciously tantalizing is the magical code beneath which complicated dependencies lurk.
Create a directory named `project` within your project and add the file `project/plugins.sbt`; in it, add the following line:

```scala
addSbtPlugin("com.hanhuy.sbt" % "android-sdk-plugin" % "1.2.20")
```
Create `project/build.properties` and add the following line (newer versions may be used instead):

```
sbt.version=0.12.4
```
Create `build.sbt` in the root directory (example). Remember to `import android.Keys._`.
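A minimal `build.sbt` might look roughly like the sketch below; the plugin-specific settings are assumptions based on android-sdk-plugin documentation of that era, so double-check them against the plugin version you use:

```scala
import android.Keys._

// Assumption: androidBuild pulls in the plugin's default settings.
android.Plugin.androidBuild

name := "hello-android"
```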
HTML.js is a library full of syntactic sugar. It changes HTML elements dynamically so that their methods reflect their child nodes. For example, code like `HTML.body.header.hgroup.h1` uses chained methods to mirror the structure of the DOM.
ES5 `Object.defineProperty` and `MutationObserver` conjure up the magic. HTML.js provides an eponymous `HTML` API object initialized by an internal `node` method, which adds all tag methods to its argument object. All tag methods are defined by `Object.defineProperty` with the `get` option, so tag methods behave like getters: every time the user accesses these attributes, the tag methods return HTML-ified elements that are ready to be chained (HTML-ified elements are normal HTMLElements that have been extended by the internal `node` method mentioned above).
To keep tag methods responsive to DOM manipulation, HTML.js opts for a `MutationObserver` that keeps an eye on the root element. Once elements have changed, the `MutationObserver` detects the change and notifies HTML.js to refresh the methods of the corresponding elements.
However, the syntactic sweetness fails to hide some design deficits and practical problems in this library. Getter methods shut out legacy browsers, which still hold about 10% market share. `MutationObserver` itself is not horribly slow, but registering a watchdog on the root element is almost certainly a performance killer for massive DOM manipulations.
But the most notorious code smell comes from yet another place: a pure design decision that has functions return either an element or an array. It is certainly one of the sloppiest practices in a dynamic language. In a statically typed language such functions could only have the return type `Any`, which is not informative and burdens users with the risk of casting the results. Indeed, the author mentions this on the homepage and tries to defend the API design with the excuse that the calling context lets users avoid the quandary. But a good library should be as carefree as possible: providing an API that returns a single element is probably better than leaving users to guarantee an element's uniqueness. Ad-hoc polymorphism is determined by function arguments, not by the return type.
HTML.js's API reminds me of the keyword `null`. Admittedly it is theoretically feasible to entrust programmers with checking uniqueness/existence. But then why is ぬるぽ (the NullPointerException) still one of the most prominent apparitions haunting our code?