Layer 1 Should Be Innovative in the Short Term but Less in the Long Term


Official Celer Network translation of Vitalik Buterin's latest blog post




One of the key tradeoffs in blockchain design is whether to build more functionality into base-layer blockchains themselves (“layer 1”), or to build it into protocols that live on top of the blockchain, and can be created and modified without changing the blockchain itself (“layer 2”). The tradeoff has so far shown itself most in the scaling debates, with block size increases (and sharding) on one side and layer-2 solutions like Plasma and channels on the other, and to some extent in blockchain governance, with loss and theft recovery solvable either by the DAO fork or generalizations thereof such as EIP 867, or by layer-2 solutions such as Reversible Ether (RETH). So which approach is ultimately better? Those who know me well, or have seen me out myself as a dirty centrist, know that I will inevitably say “some of both”. However, in the longer term, I do think that as blockchains become more and more mature, layer 1 will necessarily stabilize, and layer 2 will take on more and more of the burden of ongoing innovation and change.








There are several reasons why. The first is that layer 1 solutions require ongoing protocol change to happen at the base protocol layer, base layer protocol change requires governance, and it has still not been shown that, in the long term, highly “activist” blockchain governance can continue without causing ongoing political uncertainty or collapsing into centralization.


To take an example from another sphere, consider Moxie Marlinspike’s defense of Signal’s centralized and non-federated nature. A document by a company defending its right to maintain control over an ecosystem it depends on for its key business should of course be viewed with massive grains of salt, but one can still benefit from the arguments. Quoting:

[Translator's note: Signal is a messaging service similar to Telegram.]



" One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we’ve developed requires centralization; it’s entirely possible to build a federated Signal Protocol-based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all. "





" Their retort was “that’s dumb, how far would the internet have gotten without interoperable protocols defined by 3rd parties?” I thought about it. We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. To answer his question, that’s how far the internet got. It got to the late 90s.

That has taken us pretty far, but it’s undeniable that once you federate your protocol, it becomes very difficult to make changes. And right now, at the application level, things that stand still don’t fare very well in a world where the ecosystem is moving … So long as federation means stasis while centralization means movement, federated protocols are going to have trouble existing in a software climate that demands movement as it does today."




[Celer Network translator's note: In plain terms, once a protocol has achieved broad consensus, it becomes very hard to change; if rapid iteration is needed, it has to happen in protocols that are still early and have not yet achieved broad consensus.]


At this point in time, and in the medium term going forward, it seems clear that decentralized application platforms, cryptocurrency payments, identity systems, reputation systems, decentralized exchange mechanisms, auctions, privacy solutions, programming languages that support privacy solutions, and most other interesting things that can be done on blockchains are spheres where there will continue to be significant and ongoing innovation. Decentralized application platforms often need continued reductions in confirmation time; payments need fast confirmations, low transaction costs, privacy, and many other built-in features; and exchanges are appearing in many shapes and sizes, including on-chain automated market makers, frequent batch auctions, combinatorial auctions and more. Hence, “building in” any of these into a base layer blockchain would be a bad idea, as it would create a high level of governance overhead as the platform would have to continually discuss, implement and coordinate newly discovered technical improvements. For the same reason federated messengers have a hard time getting off the ground without re-centralizing, blockchains would also need to choose between adopting activist governance, with the perils that entails, and falling behind newly appearing alternatives.
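Among the exchange mechanisms mentioned above, on-chain automated market makers are a good illustration of how quickly these designs are evolving. A minimal constant-product sketch (the class name, numbers, and 0.3% fee are illustrative assumptions, not from the post):

```python
# Minimal constant-product automated market maker (x * y = k) sketch.
# All names and the 0.3% fee are illustrative assumptions.

class ConstantProductAMM:
    def __init__(self, reserve_x: float, reserve_y: float, fee: float = 0.003):
        self.reserve_x = reserve_x  # pool balance of token X
        self.reserve_y = reserve_y  # pool balance of token Y
        self.fee = fee              # fraction of the input taken as a fee

    def get_amount_out(self, amount_in: float) -> float:
        """Amount of Y received for selling `amount_in` of X.

        Preserves the invariant reserve_x * reserve_y = k
        (ignoring the fee, which slightly grows k)."""
        amount_in_after_fee = amount_in * (1 - self.fee)
        k = self.reserve_x * self.reserve_y
        new_reserve_x = self.reserve_x + amount_in_after_fee
        new_reserve_y = k / new_reserve_x
        return self.reserve_y - new_reserve_y

    def swap_x_for_y(self, amount_in: float) -> float:
        out = self.get_amount_out(amount_in)
        self.reserve_x += amount_in
        self.reserve_y -= out
        return out

amm = ConstantProductAMM(1000.0, 1000.0)
out = amm.swap_x_for_y(10.0)  # small trade, so the price stays close to 1:1
```

The point of the example is not the mechanism itself but how much design freedom it needs: fee levels, curve shapes, and batching rules are all still in flux, which is exactly why they do not belong at layer 1.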






Even Ethereum’s limited level of application-specific functionality, precompiles, has seen some of this effect. Less than a year ago, Ethereum adopted the Byzantium hard fork, including operations to facilitate elliptic curve operations needed for ring signatures, ZK-SNARKs and other applications, using the alt-bn128 curve. Now, Zcash and other blockchains are moving toward BLS-12-381, and Ethereum would need to fork again to catch up. In part to avoid having similar problems in the future, the Ethereum community is looking to upgrade the EVM to E-WASM, a virtual machine that is sufficiently more efficient that there is far less need to incorporate application-specific precompiles.








But there is also a second argument in favor of layer 2 solutions, one that does not depend on speed of anticipated technical development: sometimes there are inevitable tradeoffs, with no single globally optimal solution. This is less easily visible in Ethereum 1.0-style blockchains, where there are certain models that are reasonably universal (eg. Ethereum’s account-based model is one). In sharded blockchains, however, one type of question that does not exist in Ethereum today crops up: how to do cross-shard transactions? That is, suppose that the blockchain state has regions A and B, where few or no nodes are processing both A and B. How does the system handle transactions that affect both A and B?


The current answer involves asynchronous cross-shard communication, which is sufficient for transferring assets and some other applications, but insufficient for many others. Synchronous operations (eg. to solve the train and hotel problem) can be bolted on top with cross-shard yanking, but this requires multiple rounds of cross-shard interaction, leading to significant delays. We can solve these problems with a synchronous execution scheme, but this comes with several tradeoffs:

·      The system cannot process more than one transaction for the same account per block

·      Transactions must declare in advance what shards and addresses they affect

·      There is a high risk of any given transaction failing (and still being required to pay fees!) if the transaction is only accepted in some of the shards that it affects but not others
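To make the asynchronous baseline concrete, here is a toy model of a receipt-based cross-shard transfer: the source shard debits the sender and emits a receipt, and the destination shard consumes that receipt in a later block to credit the recipient. All class and variable names are hypothetical simplifications, not part of any actual sharding spec:

```python
# Toy model of asynchronous cross-shard transfers via receipts.
# Shard A debits the sender and emits a receipt; shard B later consumes
# the receipt (exactly once) to credit the recipient. Names are illustrative.

class Shard:
    def __init__(self):
        self.balances = {}
        self.spent_receipts = set()

    def send_cross_shard(self, sender, recipient, amount, receipt_id):
        """Phase 1 (on the source shard): debit the sender, emit a receipt."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        return {"id": receipt_id, "to": recipient, "amount": amount}

    def claim(self, receipt):
        """Phase 2 (on the destination shard, in a later block):
        credit the recipient, consuming the receipt exactly once."""
        if receipt["id"] in self.spent_receipts:
            raise ValueError("receipt already claimed")
        self.spent_receipts.add(receipt["id"])
        self.balances[receipt["to"]] = (
            self.balances.get(receipt["to"], 0) + receipt["amount"])

shard_a, shard_b = Shard(), Shard()
shard_a.balances["alice"] = 100
receipt = shard_a.send_cross_shard("alice", "bob", 30, receipt_id=1)
shard_b.claim(receipt)  # happens asynchronously, in a later block
```

The two phases land in different blocks, which is why this pattern works for asset transfers but not for operations that need both shards to commit atomically, like the train-and-hotel problem.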







It seems very likely that a better scheme can be developed, but it would be more complex, and may well have limitations that this scheme does not. There are known results preventing perfection; at the very least, Amdahl’s law puts a hard limit on the ability of some applications and some types of interaction to process more transactions per second through parallelization.

[Translator's note: Amdahl's law is a fundamental result in parallel computing. It says that as long as some fraction of a computation cannot be parallelized (in a sharded blockchain, the cross-shard transactions), the marginal gain from adding more parallelism (more shards) keeps shrinking.]
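The limit Amdahl's law imposes can be checked numerically. A small sketch, where the assumed 5% non-parallelizable fraction is purely illustrative:

```python
# Amdahl's law: if a fraction p of the work is parallelizable, the
# maximum speedup from n parallel units (here, shards) is
#     speedup(n) = 1 / ((1 - p) + p / n)
# which is capped at 1 / (1 - p) no matter how large n gets.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Suppose (illustratively) that 5% of the workload is serial
# cross-shard coordination, i.e. p = 0.95:
for n in (1, 10, 100, 1000):
    print(n, "shards ->", round(amdahl_speedup(0.95, n), 2), "x speedup")
# The speedup can never exceed 1 / 0.05 = 20x, however many shards are added.
```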



So how do we create an environment where better schemes can be tested and deployed? The answer is an idea that can be credited to Justin Drake: layer 2 execution engines. Users would be able to send assets into a “bridge contract”, which would calculate (using some indirect technique such as interactive verification or ZK-SNARKs) state roots using some alternative set of rules for processing the blockchain (think of this as equivalent to layer-two “meta-protocols” like Mastercoin/OMNI and Counterparty on top of Bitcoin, except because of the bridge contract these protocols would be able to handle assets whose “base ledger” is defined on the underlying protocol), and which would process withdrawals if and only if the alternative ruleset generates a withdrawal request.







[Celer Network translator's note: This passage is quite abstract, so let me try to put it in plainer terms. The point is that the base chain is often not a good place for flexible change and innovation, and that is where layer 2 comes in. Whether it is a sidechain, a channel, or something else, the basic pattern is the same: a bridge contract is anchored on the base chain, the assets to be handled are bridged into the "layer 2" domain, processed and computed there according to layer 2's own "innovation rules", and then returned to the base chain. This philosophy applies to layer-2 scaling, computation, storage, and other architectures alike, and it is exactly why layer 2 is where future innovation will concentrate.]

Note that anyone can create a layer 2 execution engine at any time, different users can use different execution engines, and one can switch from one execution engine to any other, or to the base protocol, fairly quickly. The base blockchain no longer has to worry about being an optimal smart contract processing engine; it need only be a data availability layer with execution rules that are quasi-Turing-complete so that any layer 2 bridge contract can be built on top, and that allow basic operations to carry state between shards (in fact, only ETH transfers being fungible across shards is sufficient, but it takes very little effort to also allow cross-shard calls, so we may as well support them), but does not require complexity beyond that. Note also that layer 2 execution engines can have different state management rules than layer 1, eg. not having storage rent; anything goes, as it’s the responsibility of the users of that specific execution engine to make sure that it is sustainable, and if they fail to do so the consequences are contained to within the users of that particular execution engine.
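The bridge-contract pattern described above might be sketched as follows. The proof check is stubbed out, since in practice it would be interactive verification or a ZK-SNARK verifier, and all names are illustrative assumptions:

```python
# Sketch of a layer-2 "bridge contract": users deposit assets, an
# alternative (layer-2) ruleset produces state roots, and withdrawals
# are honored if and only if that ruleset generated a withdrawal request.
# `verify_state_transition` is a stub standing in for interactive
# verification or a ZK-SNARK check; all names are illustrative.

class BridgeContract:
    def __init__(self, verify_state_transition):
        self.verify = verify_state_transition  # proof-checking callback
        self.state_root = b"genesis"           # current layer-2 state root
        self.deposits = {}                     # bridged balances per user

    def deposit(self, user, amount):
        self.deposits[user] = self.deposits.get(user, 0) + amount

    def update_state(self, new_root, proof):
        """Accept a new layer-2 state root only with a valid proof."""
        if not self.verify(self.state_root, new_root, proof):
            raise ValueError("invalid state transition proof")
        self.state_root = new_root

    def withdraw(self, user, amount, withdrawal_authorized):
        """Pay out iff the layer-2 ruleset generated a withdrawal request."""
        if not withdrawal_authorized:
            raise ValueError("no withdrawal request under layer-2 rules")
        if self.deposits.get(user, 0) < amount:
            raise ValueError("insufficient bridged balance")
        self.deposits[user] -= amount
        return amount
```

Because the execution rules live behind the `verify` callback, a different execution engine is just a different callback: the base chain only ever sees deposits, proven state roots, and withdrawals.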



In the long run, layer 1 would not be actively competing on all of these improvements; it would simply provide a stable platform for the layer 2 innovation to happen on top. Does this mean that, say, sharding is a bad idea, and we should keep the blockchain size and state small so that even 10 year old computers can process everyone’s transactions? Absolutely not. Even if execution engines are something that gets partially or fully moved to layer 2, consensus on data ordering and availability is still a highly generalizable and necessary function; to see how difficult layer 2 execution engines are without layer 1 scalable data availability consensus, see the difficulties in Plasma research, and its difficulty of naturally extending to fully general purpose blockchains, for an example. And if people want to throw a hundred megabytes per second of data into a system where they need consensus on availability, then we need a hundred megabytes per second of data availability consensus.






Additionally, layer 1 can still improve on reducing latency; if layer 1 is slow, the only strategy for achieving very low latency is state channels, which often have high capital requirements and can be difficult to generalize. State channels will always beat layer 1 blockchains in latency as state channels require only a single network message, but in those cases where state channels do not work well, layer 1 blockchains can still come closer than they do today.
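The single-message claim can be made concrete with a toy state-channel model: a new state is final as soon as it carries a higher sequence number and all parties' signatures, with no on-chain interaction at all. Real signatures are replaced by a placeholder membership check, and all names are illustrative:

```python
# Toy state-channel update. Each new state carries a higher sequence
# number and "signatures" from all parties; the latest fully signed
# state is the one that could be settled on chain. Cryptographic
# signatures are replaced by a set-membership placeholder.

class ChannelState:
    def __init__(self, seq, balances, signatures):
        self.seq = seq            # monotonically increasing version number
        self.balances = balances  # e.g. {"alice": 7, "bob": 3}
        self.signatures = signatures

class StateChannel:
    def __init__(self, parties, initial_balances):
        self.parties = set(parties)
        self.latest = ChannelState(0, dict(initial_balances), set(parties))

    def update(self, new_balances, signatures):
        """One signed message per party advances the state; no block
        confirmation (and hence no block latency) is involved."""
        if set(signatures) != self.parties:
            raise ValueError("update must be signed by all parties")
        if sum(new_balances.values()) != sum(self.latest.balances.values()):
            raise ValueError("channel updates must conserve total funds")
        self.latest = ChannelState(self.latest.seq + 1, dict(new_balances),
                                   set(signatures))

channel = StateChannel(["alice", "bob"], {"alice": 5, "bob": 5})
channel.update({"alice": 7, "bob": 3}, {"alice", "bob"})  # instant finality
```

The capital locked in `initial_balances` is exactly the "high capital requirements" tradeoff the paragraph mentions: funds must sit in the channel for as long as it is open.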





Hence, the other extreme position, that blockchain base layers can be truly absolutely minimal, and not bother with either a quasi-Turing-complete execution engine or scalability to beyond the capacity of a single node, is also clearly false; there is a certain minimal level of complexity that is required for base layers to be powerful enough for applications to build on top of them, and we have not yet reached that level. Additional complexity is needed, though it should be chosen very carefully to make sure that it is maximally general purpose, and not targeted toward specific applications or technologies that will go out of fashion in two years due to loss of interest or better alternatives.


And even in the future base layers will need to continue to make some upgrades, especially if new technologies (eg. STARKs reaching higher levels of maturity) allow them to achieve stronger properties than they could before, though developers today can take care to make base layer platforms maximally forward-compatible with such potential improvements. So it will continue to be true that a balance between layer 1 and layer 2 improvements is needed to continue improving scalability, privacy and versatility, though layer 2 will continue to take up a larger and larger share of the innovation over time.


About Celer Network:

Celer Network is an off-chain scaling architecture built in layers. Its cChannel flexibly combines generalized state channels with sidechains to accelerate not only simple payments but also smart contracts and complex applications, without sacrificing trust or security guarantees. The Celer team has also proposed cRoute, the first optimized off-chain payment and state routing algorithm, and cOS, an easy-to-use application development framework and mobile user interface that serves as a new entry point for blockchain applications. Celer Network is highly general and broadly compatible with mainstream blockchains. Alongside its technical innovations, Celer Network pioneered the first cryptoeconomics and token model for off-chain scaling based on game theory and auction theory, providing the core incentive and security mechanisms of an off-chain scaling platform in a systematic and complete way.

As a representative off-chain scaling solution, Celer Network hopes to use its technical strength and its understanding of the industry ecosystem to advance the maturity and growth of the off-chain blockchain ecosystem, and to truly bring blockchain into everyday use.