Paamayim Nekudotayim Operator in ES7


Recently Babel added support for Function Bind Syntax, proposed for ES7, a.k.a. the Paamayim Nekudotayim operator (it should be Paamayim Nekudatayim, though).

It inherits the spirit of Go's methods, Swift's extensions, Scala's implicit classes, Ruby's instance_exec, and Haskell's typeclasses. While ES6 normalizes, and thus constrains, inheritance in JavaScript, :: brings ad-hoc virtual methods that extend a class's behavior.

Thanks to JavaScript's prototype-based and dynamically typed nature, the Paamayim Nekudotayim operator introduces the least semantic complexity while granting the best expressiveness. It is much more concise than method.call in earlier JS or instance_exec in Ruby. Using the double colon :: rather than the dot . provides a visual cue to the source and definition of a virtual method, which is clearer and less confusing than Swift's extensions or Scala's implicit classes. Extending native objects is trivial with this new syntax: we can easily write an underscore-like itertools library and apply its API directly on native arrays, which cannot be done without hacking (or screwing up) Array.prototype in ES6 and earlier. Both the proposal and the implementation of Function Bind Syntax are easy and straightforward, again thanks to JS's nature.
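
For comparison, here is a minimal sketch of how Scala's implicit class (one of the features listed above) achieves a similar ad-hoc extension; ItertoolSyntax, TakeEvery, and takeEvery are made-up names for illustration:

object ItertoolSyntax {
  // grafts a "virtual method" onto Seq without touching Seq's definition
  implicit class TakeEvery[A](val xs: Seq[A]) extends AnyVal {
    def takeEvery(n: Int): Seq[A] = xs.grouped(n).map(_.head).toSeq
  }
}

import ItertoolSyntax._
Seq(1, 2, 3, 4, 5).takeEvery(2) // Seq(1, 3, 5)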

However, Function Bind Syntax does not work well with type checking, for now. Current candidate proposals for an ES type system do not cover the this keyword in a function body. TypeScript simply waives type checking or forbids referencing this in a function body. Flow is the only type checker with open methods, which are aware of the this context. However, the type checking of open methods is implicit to code authors: one cannot explicitly annotate a function's this type, so type checking of open methods is solely the compiler's business.

Nonetheless, it is a great feature! I'm lovin' it! Try it out!

Source: http://babeljs.io/blog/2015/05/14/function-bind/

Explaining Scala SAM type


For better syntax and user friendliness (and a lot more), Java 8 introduces SAM types, Single Abstract Method types, starting to embrace the functional programming world.

Prior to 1.8, Java already had a somewhat bulky and leaky kind of closure: anonymous inner classes. For example, starting a worker thread in Java usually requires the following trivial but bloated statements, without a lexical this:

// from android developer guide
public void onClick(View v) {
    new Thread(new Runnable() {
        public void run() {
            Bitmap b = loadImageFromNetwork("http://www.example.org/image.gif");
            mImageView.setImage(b);
        }
    }).start();
}

Two classes, one method. The introduction of SAM types greatly reduces Java's syntactic overhead.


public void onClick(View v) {
    new Thread(() -> {
        mImage.setImage(loadImageFromNetwork("/image.gif"));
    }).start();
}

I'm not explaining Java 8's new features here, as Scala users have already relished the conciseness and expressiveness of functional programming.

So why would a Scala user care about SAM types in Java? I would say that interoperation with Java and performance are the main concerns here.

Java 8 introduces the Function type, which is widely used in libraries like Stream. Sadly, that Function is not Scala's. The Scala compiler just frowns at you when you use a native Scala function with Stream.

import java.util.Arrays
Arrays.asList(1,2,3).stream.map((i: Int) => i * 2)

// <console>: error: type mismatch;
// found : Int => Int
// required: java.util.function.Function[_ >: Int, _]

Side note: because a function parameter is in a contravariant position, Function[_ >: Int, _] has a lower bound Int rather than an upper bound. That is, the function passed as an argument must accept types that are supertypes of Int.
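
A minimal Scala sketch of why the lower bound is the sound choice (plain Scala, independent of Java interop):

// Function1 is contravariant in its parameter: a function accepting Any
// can safely stand in wherever a function accepting Int is expected.
val f: Any => String = x => x.toString
val g: Int => String = f // compiles, because Any is a supertype of Int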

One can manually provide an implicit conversion here to transform Scala function types into Java function types. However, implementing such implicit conversions is no fun: the implementation is either not generic enough or requires mechanical code duplication (another alternative is advanced macro generation). Compiler support is more ideal, not only because it generates more efficient byte code, but also because it precludes incompatibility across different implementations.
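
A minimal sketch of such a manual conversion, assuming Java 8 on the classpath; note that it covers only arity 1, which is exactly the duplication problem mentioned above:

import java.util.function.{Function => JFunction}
import scala.language.implicitConversions

// one such definition is needed per function arity
// (or macro generation to produce them all)
implicit def toJavaFunction[A, B](f: A => B): JFunction[A, B] =
  new JFunction[A, B] {
    override def apply(a: A): B = f(a)
  }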

SAM types are enabled by the -Xexperimental flag in Scala 2.11.x. Specifically, SAM types are better supported in Scala 2.11.5: SAM gets eta-expansion (one can use a method from another class as a SAM), overloading (overloaded functions/methods can also accept functions as SAMs), and existential type support.
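
A small hypothetical snippet of what this enables (assuming scalac 2.11.5 with -Xexperimental; Callback, Logger, and register are made-up names):

object Demo {
  trait Callback { def run(x: Int): Unit }
  class Logger { def log(x: Int): Unit = println(s"got $x") }

  // eta-expansion: a method of another class used as a SAM
  val cb: Callback = new Logger().log _

  // overloading: the overloaded register can still accept a function as a SAM
  def register(c: Callback): Unit = c.run(1)
  def register(s: String): Unit = println(s)
  register((x: Int) => println(x * 2))
}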

Basic usage of SAM is quite simple: if a trait/abstract class has exactly one abstract method, then a function with the same parameter and return types as that abstract method can be converted into the trait/abstract class.

trait Flyable {
  // exactly one abstract method
  def fly(miles: Int): Unit
  // optional concrete member
  val name = "Unidentified Flyable Object"
}

// to reference the SAM type itself,
// create a named self-referencing lambda expression
val ufo: Flyable = (m: Int) => println(s"${ufo.name} flies $m miles!")
ufo.fly(123)
// Unidentified Flyable Object flies 123 miles!

Easy peasy. So for the Stream example, if the compiler has the -Xexperimental flag, Scala will automatically convert the function into Java's Function, which grants Scala users a seamless experience with the library.

Usually, you don't need SAM in Scala, as Scala already has first-class generic function types, eta-expansion, and a lot more. SAM reduces readability just as implicit conversion does. One can always use a type alias to give a function type a more understandable name instead of using a SAM. Also, SAM types cannot be pattern matched, at least for now.
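
For example, a type alias names the function type directly, with no conversion machinery involved (OnMiles is a made-up name):

type OnMiles = Int => Unit
def fly(handler: OnMiles): Unit = handler(123)
fly(m => println(s"flew $m miles"))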

However, interoperating with Java requires SAM. And self-referencing SAMs give you additional flexibility in designing APIs. SAM also generates more efficient byte code, since SAM has a native byte code counterpart. Using anonymous classes for event handlers or callbacks can be as pleasant in Scala as in Java.

Anyway, adding a feature is easy, but adding a feature that couples with edge cases is hard. Scala already has bunches of features (variance, higher-kinded types, type-level programming, continuations); whether SAM will gain popularity is still an open question.

Rant on TypeScript type guard


Recently the TypeScript team released TypeScript 1.4, adding a new feature called union types, intended for better incorporation of native JavaScript idioms.
Type guards, the natural dyad of union types, also come into the TypeScript world. But sadly, Microsoft chose a bizarre way to introduce type guards, as they say in the manual:

TypeScript now understands these conditions and will change type inference accordingly when used in an if block.

It accepts, and only accepts, conditional statements like if (typeof x === 'string') as type guards.
TypeScript now creates types like number | string, meaning a type of either number or string.
Users can further refine the type by comparing the value typeof yields, as in the example below:

function createCustomer(name: { firstName: string; lastName: string } | string) {
    if (typeof name === "string") {
        // Because of the typeof check in the if, we know name has type string
        return { fullName: name };
    }
    else {
        // Since it's not a string, we know name has
        // type { firstName: string; lastName: string }
        return { fullName: name.firstName + " " + name.lastName };
    }
}

// Both customers have type { fullName: string }
var customer = createCustomer("John Smith");
var customer2 = createCustomer({ firstName: "Samuel", lastName: "Jones" });

I would rather say this is a bad idea because:

  1. it intermixes type-level constructs with value-level constructs.
  2. for a language as complex and flexible as JavaScript, Microsoft's approach cannot handle the various expressions that bear on types.

Value-level constructs are expressions or statements dealing with values, e.g. assignment and comparison. typeof and instanceof in JavaScript are value-level constructs because they produce boolean values, and those values can be assigned to variables or compared with other values. Value-level constructs do imply types, say, when creating a new object of a specific type, but they do not explicitly manipulate the types of expressions. There is no type casting JavaScript can do. Type-level constructs, on the other hand, deal with types: for example, type annotations and generics.

Doubling typeof as a type guard blurs the demarcation between type level and value level, and naturally reduces a program's readability (a somewhat subjective claim, though). A variable can, without distinct syntax, change its type in a conditional block. if branching is ubiquitous in TypeScript programs, from hello-world toys to cathedral-like projects, so it's quite hard to find the "type switch" for a union type among other, irrelevant ifs. Also, one has to pay attention to calling the correct method on the same variable in different branches. TypeScript's type guard thus introduces a new type scope different from both lexical scope and function scope. It also cripples the compiler, because now the compiler has to check whether the condition between an if's parentheses is a type guard.

What's worse? A type guard is a value-level construct, so it can interact with all other language constructs. But Microsoft does not intend to support that. None of the following functions compile in TypeScript 1.4.1, yet they ought to run correctly in plain JavaScript if they could be compiled.

function testNot(x: string|number|Function) {
    var isNonNum = typeof x !== 'number'
    if (isNonNum) return x.length
}

function testReturn(x: string|number) {
    if (typeof x === 'number') return;
    return x.length
}

function testThrow(x: string|number) {
    if (typeof x === 'number') throw new Error('error type')
    return x.length
}

function testFor(xs: (string|number)[]) {
    for (var i = 0, x = xs[i]; typeof x === 'string'; i++) {
        console.log(x.length)
    }
}

function testWhile(xs: (string|number)[]) {
    var i = 0;
    while (typeof xs[i] === 'string') {
        console.log(xs[i].length)
        i++
    }
}

function testFilter(xs: (string|number)[]) {
    xs.filter((x) => typeof x === 'string').map((x) => x.length)
}

Indeed, TypeScript is not the first to mix type level and value level. Language constructs like pattern matching also do that (and usually introduce bugs related to type inference; see the Scala bug tracker). But at least pattern matching is a specialized syntax that does not interact much with other syntax, as the sketch below shows. Type guards are, well, too ubiquitous to be good.
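
For comparison, a minimal Scala sketch of type refinement through pattern matching; the refinement is confined to the dedicated match syntax rather than leaking into arbitrary if conditions:

def describe(x: Any): String = x match {
  case s: String => s"a string of length ${s.length}" // x refined to String here
  case n: Int    => s"an int, plus one is ${n + 1}"   // x refined to Int here
  case _         => "something else"
}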

Great Elegance Resides in the Vulgar

"You ask how this can be? A heart that is distant makes its place remote." (Tao Yuanming)

Let me tell you: Yagami-sensei's LL doujinshi are the true flagship works straddling the two worlds of pay-to-win mobile games and thin doujin books.

Don't be fooled by each volume being a mere thirty-odd pages thin: every page embodies sensei's deep insight into the pay-to-win mobile game business. The essence of all microtransactions is no different from a "wicked woman milking money." Thin as sensei's books are, they sketch a bird's-eye view of the ACG industry, and indeed of modern forms of cultural consumption.

If a reader approaches the LL books merely in an idle, sensation-seeking mood, all he will see are scenes of characters one-sidedly extracting the protagonist's bodily fluids. Only the advanced reader who has entered the sage state can see the painstaking intent sensei expresses in these books. Yagami-sensei's whole oeuvre is a metaphor: from the syringe to the thermometer, every prop hints at the operating tactics of pay-to-win mobile games, and they all serve one simple and plain purpose: to use every trick in the book to squeeze the last drop of gold out of the player's wallet.

The protagonist of the books is depicted as a passive bodily-fluid machine, which is of a piece with how the freemium business model positions the consumer as a consumption machine. The idols keep using tools of every kind to stimulate pleasure in the player, and through this process the player comes to associate paying with achievement; in sensei's books, this is embodied as masochistic plots that bind abuse to pleasure.

The concepts the operators construct in plot and promotion, such as "idols" and "striving," are in fact a "distortion" of the player's desire to consume, making that desire acceptable both to the player himself and to the player's social surroundings. The gland-stimulating scenes in the books allude to the game's stimulation of the player's consumption desire (physiologically, stimulation of the ventral tegmental area). Yet the enduring brilliance of Yagami-sensei's writing lies precisely in how his strokes sweep away the hypocritical distortions one by one, incisively depicting the gold-milking nature of pay-to-win mobile games. This is displayed to the full in sensei's SC60 work "soldier money game".

To dismiss the LL thin books as vulgar moe stuff is precisely to miss Yagami Shuuichi-sensei's intent. Only by stepping back from the thin book itself and rising to the height of the "meta-thin-book" (meta-usuihon) can one savor the subtleties of the LL books. Sensei's works look utterly vulgar, yet their true flavor is utterly elegant. Reflecting on modern consumer behavior through an utterly vulgar story is exactly what an artist ought to do in contemporary society.

For a people's artist, single-mindedly pursuing "elegance" of artistic form only opens a distance from the audience far wider than "one step away." As the saying goes, lesser elegance is elegant in form, middling elegance is elegant in intent, and great elegance is elegant in vulgarity. Yagami-sensei's works can only be called greatly vulgar and greatly elegant at once, works to be enjoyed by refined and popular tastes alike.

typelevel-html

Type-level programming is a technique that exploits the type system to represent information and logic, to the extent of the language's limits.

Since values are encoded in variables' types, type-level code drives the compiler to validate logic or even determine a program's output.
All validation and computation are conducted statically at compile time, so the greatest benefit of type-level programming is its safety and reliability.

As a rule of thumb, the more dynamic code is, the more flexibly it can compute. Type-level programming requires all logic to be encoded in the source code. It is too hard to cram all logic into the type system, as handling whimsical input from external sources is either impossible or reduces the source code to an unwieldy state machine. So the niche of type-level programming is usually encoding, pickling, or parsing.

But there is a field where code is statically written: GUI. HTML templates are hard-coded in source. Type-level programming can be used for linting and validation when editing HTML, especially when authoring web components. An HTML fragment can be encoded as an ordinary object, with its type denoting its structure. Once the structure of the HTML is fixed, the output of JavaScript and CSS can be determined as well.
This is more helpful when one wants to build components. A tab-container must have a tab-pane as a child, and a tab-pane must live within a tab-container. The current approach to constraining HTML structure is to encode the requirements in JavaScript and check them at runtime. For example, Angular uses require: '^parentDirective' to express the constraint and enable directive communication. If the component is constructed programmatically, type annotations are a natural way to express the constraints (as in Angular 2.0's query<ChildDirective>). We can go further in a language with a full-blown type system.

trait Tag
class Concat[A <: Tag, B <: Tag](a: A, b: B) extends Tag
trait NestTag[A <: Tag] extends Tag {
  type Child = A
}
trait Inline extends Tag
trait Block extends Tag
case class div[T <: Tag](t: T = null) extends NestTag[T] with Block
case class p[T <: Tag](t: T = null) extends NestTag[T] with Block
case class a[T <: Inline](t: T = null) extends NestTag[T] with Inline

implicit class PlusTag[A <: Tag](a: A) {
  def +[B <: Tag](b: B) = new Concat(a, b)
}

class Contains[A <: Tag, C[_ <: Tag] <: NestTag[_], T[_ <: Tag] <: NestTag[_]]

case class jQ[A <: Tag, C[_ <: Tag] <: NestTag[_]](c: C[A]) {
  def has[T[_ <: Tag] <: NestTag[_]](implicit ev: Contains[A, C, T]) = true
}

implicit def htmlEq[A <: Tag, C[_ <: Tag] <: NestTag[_], T[_ <: Tag] <: NestTag[_]](implicit ev: C[A] =:= T[A]) =
  new Contains[A, C, T]
implicit def recurEq[A <: Tag, B[_ <: Tag] <: NestTag[_], C[_ <: Tag] <: NestTag[_], T[_ <: Tag] <: NestTag[_]]
    (implicit ev: Contains[A, B, T]) = new Contains[B[A], C, T]

val ele = div(
  p(
    a()
  )
)
val r = jQ(ele).has[p]
println(r)

The code above is just a demo. All HTML elements have types that denote their structure, and one can tell whether a tag is in an HTML element by calling jQ(ele).has[Tag]. (Note: ele is a value-level variable and Tag is a type-level constructor.) And an inline element cannot contain a block element, because an inline element's child must be a subtype of Inline.
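
For instance, with the definitions above, the child bound should reject invalid nesting at compile time (a hypothetical session):

val ok  = a(a())   // fine: an inline child inside an inline element
val bad = a(div()) // error: div[...] does not conform to bound T <: Inline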

Programmatic markup has several benefits:

  1. no switching between script and template
  2. static type checking
  3. component dependency requirements
  4. component communication
  5. subtyping, inheritance… Classical OOP features
  6. relatively clean layout (though not as concise as Jade/Slim)

The biggest problem is, well, that type-level templates are strongly constrained by the host language. Dynamic languages simply cannot have them. Users of classical statically typed languages without type inference cannot afford the verbosity of deeply nested types. And the languages capable of type-level programming each take different approaches and implementations toward it.

After all, type-level programming is too crazy…, at least for daily business logic.

Scala Generalized Type constraints

Generalized type constraints, also known as <:<, <%< (deprecated, though) and =:=, also known as type relation operators, or whatever you want to call them, are not operators but identifiers. It's quite confusing for newcomers to distinguish them from operators; well…, they are identifiers, and not that esoteric.

This is just a plain Scala feature: non-alphanumeric symbols can act as legal identifiers, just like the + method.
More specifically, they are type constructors. But before we inspect their implementation, let's first consider their usage.

Usage

Suppose you want to implement a generic container for every type; however, you also want to add a special method that only applies to a Special type. (Notice: this is different from the @specialized annotation, which deals with the JVM's primitive types. Here Special is just a plain old Scala type.)

class Container[A](value: A) {
  def diff[A <: Int](b: A) = value - b
}

// BOOM
// error: value - is not a member of type parameter A
// def diff[A <: Int](b: A) = value - b

Why? The type bound A <: Int does not work: A has already been defined at the class declaration, and in the class body the Scala compiler requires every type bound to be consistent with A's definition. Here, A has no bound, so it is bounded by Any, not Int, and Any has no - method.

Instead of setting a type bound, a method may ask for a specific, ad-hoc piece of "evidence" for a type.

scala> class Container[A](value: A) {
         // other generic methods for A
         /* blah blah */

         // specialized method for Int
         def addIt(implicit evidence: A =:= Int) = 123 + value
       }
defined class Container

scala> (new Container(123)).addIt
res11: Int = 246

scala> (new Container("123")).addIt
<console>:10: error: could not find implicit value for parameter evidence: =:=[java.lang.String,Int]

Cool. evidence is an implicit provided by Scala's Predef. And A =:= Int is just a type like Map[Int, String], written infix thanks to Scala's syntactic sugar.

Scala does not impose the type constraint until the specific method is called, so addIt does not violate A's definition. Still, given the implicit evidence, the compiler can infer that value in addIt is usable as an Int. (In the real Predef, =:=[From, To] also extends From => To, so the evidence doubles as an implicit conversion; that is what lets 123 + value typecheck.)

As stated before, type constraints are ad-hoc, so they can achieve more specific type inference than type bounds. (To be fair, this is the power of implicits.)

def foo[A, B <: A](a: A, b: B) = (a,b)

scala> foo(1, List(1,2,3))
res1: (Any, List[Int]) = (1,List(1, 2, 3))

1 is clearly an Int, but why does the compiler infer it as Any? The B <: A bound requires the first argument's type to be a supertype of the second's, so A is inferred as the most general type covering both Int and List[Int]: Any.

<:< comes to help.

def bar[A,B](a: A, b: B)(implicit ev: B <:< A) = (a,b)

scala> bar(1,List(1,2,3))
<console>:9: error: Cannot prove that List[Int] <:< Int.

Because generalized type constraints do not interfere with inference, A is Int here. Only then does the compiler look for evidence of <:<[List[Int], Int], and fail.
(Actually, implicits can feed type information back into inference; see type-level programming's HList and the collection library's CanBuildFrom.)

Also, implicit conversions do not affect <:<.

scala> def foo[B, A <: B](a: A, b: B) = print("OK")

scala> class A; class B

scala> implicit def a2b(a: A) = new B

scala> foo(new A, new B) // implicit conversion!
OK

scala> def bar[A, B](a: A, b: B)(implicit ev: A <:< B) = print("OK")

scala> bar(new A, new B) // does not work
<console>:17: error: Cannot prove that A <:< B.

Implementation

Actually, =:= is just a type constructor in Scala. It is somewhat like Map[A, B]; that is, =:= is defined like:

class =:=[A, B]

So, inside implicitly's brackets, Int =:= Int is just a type. A =:= B is the infix form of type parameterization for non-alphanumeric identifiers; it is equivalent to =:=[A, B]. One can therefore define implicits for =:= that the compiler can find:

implicit def EqualTypeEvidence[A]: =:=[A, A] = new =:=[A, A]

So, when implicitly[A =:= B] is compiled, the compiler tries to find the correct implicit evidence. If and only if A and B are the same type, say Int, can the compiler find =:=[Int, Int], as the result of the implicit function EqualTypeEvidence[Int].
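
A quick hypothetical REPL session against Predef's real =:= makes this concrete:

scala> implicitly[Int =:= Int]   // found via the equality evidence for Int
res0: =:=[Int,Int] = <function1>

scala> implicitly[Int =:= String]
<console>:12: error: Cannot prove that Int =:= String.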

More compelling is <:<, the conformance evidence. It leverages Scala's variance annotations:

class <:<[-A, +B]
implicit def Conformance[A]: <:<[A, A] = new <:<[A, A]

Consider what happens when a String <:< java.io.Serializable is needed: the compiler tries to find an instance of <:<[String, Serializable], but the implicit above can only produce instances of the shape <:<[A, A], such as <:<[String, String] (or, alternatively, <:<[Serializable, Serializable]). The variance annotations of <:< bridge the gap. Since B sits in a covariant position and String is a subtype of Serializable (or, from the other direction, since A sits in a contravariant position and Serializable is a supertype of String), <:<[String, String] is a subtype of <:<[String, Serializable]. So, by the principle of subtype substitution (Liskov), the compiler accepts the implicit instance <:<[String, String] as evidence that String is a subtype of Serializable.

Similarly we can define

// Conversion evidence
class <%<[A <% B, B]
implicit def Conversion[A, B] = new <%<[A, B]

// Contra-conformance
class >:>[+A, -B]
implicit def Contra[A] = new >:>[A, A]

Magic, right?
The actual implementation uses a singleton pattern, so it is more efficient. For this illustrative post, a sloppy implementation is just fine :).

References:
http://hongjiang.info/scala-type-contraints-and-specialized-methods/
http://apocalisp.wordpress.com/2010/07/17/type-level-programming-in-scala-part-6d-hlist%C2%A0zipunzip/

play framework with scalate and activerecord

WRYYYYYYYYYYYYYYYY

Dio Brando, on Scala's crazy dependencies

Scala's dependencies work like CSS selectors: every successor overrides its predecessor.

You will have to work as a detective to figure out the correct recipe to manage a huge casserole of hodgepodge.

To achieve a working Play configuration with Scalate and ActiveRecord, you need the following.

In build.sbt:

scalaVersion := "2.10.3"

libraryDependencies ++= Seq(
  jdbc,
  "org.scalatra.scalate" %% "scalate-core" % "1.7.0",
  "com.github.aselab" %% "scala-activerecord" % "0.2.3",
  "com.github.aselab" %% "scala-activerecord-play2" % "0.2.3",
  "com.h2database" % "h2" % "1.3.170"
)

Several notes:

  1. Currently scala-activerecord only supports Scala 2.10.3
  2. Scalate must be 1.7.0+ for better support of Scala 2.10, but the current stable version is 1.6.0

Then, in the root path of the Play project, create a new file at app/lib/ScalateIntegration.scala:


package controllers

import play.api._
import http.{Writeable, ContentTypeOf, ContentTypes}
import mvc.Codec
import play.api.Play.current
import org.fusesource.scalate.layout.DefaultLayoutStrategy
import collection.JavaConversions._

object Scalate {

  import org.fusesource.scalate._
  import org.fusesource.scalate.util._

  var format = Play.configuration.getString("scalate.format") match {
    case Some(configuredFormat) => configuredFormat
    case _ => "scaml"
  }

  lazy val scalateEngine = {
    val engine = new TemplateEngine
    engine.resourceLoader = new FileResourceLoader(Some(Play.getFile("app/views")))
    engine.layoutStrategy = new DefaultLayoutStrategy(engine, "app/views/layouts/default." + format)
    engine.classpath = "tmp/classes"
    engine.workingDirectory = Play.getFile("tmp")
    engine.combinedClassPath = true
    engine.classLoader = Play.classloader
    engine
  }

  def apply(template: String) = Template(template)

  case class Template(name: String) {

    def render(args: java.util.Map[String, Any]) = {
      ScalateContent {
        scalateEngine.layout(name, args.map {
          case (k, v) => k -> v
        }.toMap)
      }
    }

  }

  case class ScalateContent(val cont: String)

  implicit def writeableOf_ScalateContent(implicit codec: Codec): Writeable[ScalateContent] = {
    Writeable[ScalateContent]((scalate: ScalateContent) => codec.encode(scalate.cont))
  }

  implicit def contentTypeOf_ScalateContent(implicit codec: Codec): ContentTypeOf[ScalateContent] = {
    ContentTypeOf[ScalateContent](Some(ContentTypes.HTML))
  }
}

Again, this only works on Scalate 1.6.

Finally, follow activerecord's docs.
To load the plugin, in conf/play.plugins:

9999:com.github.aselab.activerecord.ActiveRecordPlugin

To configure the database, in conf/application.conf:

# Database configuration
# ~~~~~
#

# Scala ActiveRecord configurations
db.activerecord.driver=org.h2.Driver
db.activerecord.url="jdbc:h2:mem:play"
db.activerecord.user="sa"
db.activerecord.password=""

# Schema definition class
activerecord.schema=models.Tables

And in app/models/person.scala:

package models

import com.github.aselab.activerecord._
import com.github.aselab.activerecord.dsl._

case class Person(@Required name: String) extends ActiveRecord
object Person extends ActiveRecordCompanion[Person] with PlayFormSupport[Person]

In app/models/tables.scala:

package models

import com.github.aselab.activerecord._
import com.github.aselab.activerecord.dsl._

object Tables extends ActiveRecordTables with PlaySupport {
  val models = table[Person]
}

And finally, you can try this in the console:

activator console

import models._
import play.core.StaticApplication

new StaticApplication(new java.io.File("."))

Person("f@ck").save
Person.findBy("name", "f@ck")

WTF have I done? I just typed mechanically what I had ferreted out on Google and StackOverflow.
WTF is play.core.StaticApplication, which gets just one confusing page in the docs?
Speciously tantalizing is the magical code under which lurk complicated dependencies.

Reference: Play Framework Quick Tip

How to set up a scaloid project from scratch

TL;DR: It's hard to set up an Android development environment without the aid of an IDE, and it's even harder for Scaloid.

  1. Download standalone SDK

  2. Download Android SDK tools / SDK platform tools / SDK build tools
    NB: f@ck the GFW; I got around that bastard by modifying /etc/hosts.

  3. Install sbt; reference for getting around the GFW: here

  4. android create project --target <target-id> --name scaloidApp --path <path>/scaloidApp --activity MainActivity --package com.example.scaloidapp

  5. Create a directory named project within your project and add the file project/plugins.sbt; in it, add the following line:
    addSbtPlugin("com.hanhuy.sbt" % "android-sdk-plugin" % "1.2.20")

  6. Create project/build.properties and add the following line (newer versions may be used instead):

sbt.version=0.12.4

  7. Create build.sbt in the root directory (example). Remember to import android.Keys._:
import android.Keys._

android.Plugin.androidBuild

name := "scaloidApp"

scalaVersion := "2.11.0"

platformTarget in Android := "android-20"

libraryDependencies += "org.scaloid" %% "scaloid" % "3.4-10"

UPDATE: scaloid-android-plugin fixed the build =.=

magic HTML.js


HTML.js is a library full of syntactic sugar. It augments HTML elements dynamically so that their methods reflect their child nodes. For example, code like HTML.body.header.hgroup.h1 uses method chaining to mirror the structure of the DOM.

ES5's Object.defineProperty and MutationObserver conjure up the magic. HTML.js provides an eponymous HTML API object initialized by an internal node method, which adds all the tag methods to its argument object. All tag methods are defined by Object.defineProperty with the get option, so tag methods behave like getters: every time the user accesses these attributes, the tag methods return HTMLified elements that are ready to be chained (HTMLified elements are normal HTMLElements that have been extended by the internal node method mentioned above).

To keep tag methods responsive to DOM manipulation, HTML.js opts for a MutationObserver that keeps an eye on the root element. Once elements have changed, the MutationObserver detects the change and notifies HTML to refresh the methods of the corresponding elements.

However, the syntactic sweetness fails to hide some design deficits and practical problems in this library. Getter methods shut out legacy browsers, which still occupy about 10% market share. MutationObserver itself is not that horribly slow, but registering a watchdog on the root is almost certainly a performance killer for massive DOM manipulations.

But the most notorious code smell comes from yet another place, a pure design decision: functions that return either an element or an array. It is certainly one of the sloppiest practices in a dynamic language. In a statically typed language, such functions could only have the return type Any, which is surely uninformative and burdens users with the risk of casting results. Indeed, the author mentions this on the homepage and tries to defend the API design with the excuse of conditional contexts where users can avoid the quandary. But a good library should be as care-free as possible: providing an API that returns one single element is probably better than leaving users to guarantee an element's uniqueness. Ad-hoc polymorphism is determined by function arguments, not by return type.

HTML.js's API reminds me of the keyword null. Admittedly, it is theoretically feasible to entrust programmers with the role of checking uniqueness/existence. But then why is ぬるぽ (the null pointer exception) still one of the most prominent apparitions haunting our code?
