Repository: loadnetwork/gitbook-sync Directory: .gitbook Directory: .gitbook/assets File: .gitbook/assets/ELF (1).png (139.10 KB) --------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/ELF%20(1).png File: .gitbook/assets/ELF.png (133.27 KB) ----------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/ELF.png File: .gitbook/assets/EVM scale (1).png (1.35 MB) --------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/EVM%20scale%20(1).png File: .gitbook/assets/EVM scale.png (1.52 MB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/EVM%20scale.png File: .gitbook/assets/Frame 6 (1).png (192.09 KB) ------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/Frame%206%20(1).png File: .gitbook/assets/WeaveVM architecture (1).png (1.93 MB) -------------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/WeaveVM%20architecture%20(1).png File: .gitbook/assets/Wordmark_lg.png (34.68 KB) ------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/Wordmark_lg.png File: .gitbook/assets/alphanet-mm.png (16.97 KB) ------------------------------------- Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/alphanet-mm.png File: .gitbook/assets/big_logo_banner.png (373.09 KB) ----------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/big_logo_banner.png File: .gitbook/assets/big_logo_banner_bk.png (22.10 KB) -------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/big_logo_banner_bk.png File: .gitbook/assets/cost-comparison.jpg (44.57 KB) ----------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/cost-comparison.jpg File: .gitbook/assets/image (1) (1) (1) (1).png (693.48 KB) ----------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(1)%20(1)%20(1)%20(1).png File: .gitbook/assets/image (1) (1) (1).png (693.48 KB) ------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(1)%20(1)%20(1).png File: .gitbook/assets/image (1) (1).png (1.94 MB) --------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(1)%20(1).png File: .gitbook/assets/image (1).png (560.05 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(1).png File: .gitbook/assets/image (10).png (43.41 KB) ------------------------------------ Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(10).png File: .gitbook/assets/image (11).png (41.81 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(11).png File: .gitbook/assets/image (12).png (49.45 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(12).png File: .gitbook/assets/image (13).png (455.07 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(13).png File: .gitbook/assets/image (14).png (294.28 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(14).png File: .gitbook/assets/image (15).png (326.33 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(15).png File: .gitbook/assets/image (16).png (44.11 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(16).png File: .gitbook/assets/image (17).png (45.02 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(17).png File: .gitbook/assets/image (18).png (658.24 KB) ------------------------------------ Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(18).png File: .gitbook/assets/image (19).png (50.00 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(19).png File: .gitbook/assets/image (2) (1).png (171.79 KB) --------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(2)%20(1).png File: .gitbook/assets/image (2).png (1.94 MB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(2).png File: .gitbook/assets/image (20).png (642.31 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(20).png File: .gitbook/assets/image (21).png (13.40 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(21).png File: .gitbook/assets/image (22).png (24.96 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(22).png File: .gitbook/assets/image (23).png (70.96 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(23).png File: .gitbook/assets/image (24).png (70.96 KB) ------------------------------------ Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(24).png File: .gitbook/assets/image (25).png (85.72 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(25).png File: .gitbook/assets/image (26).png (62.58 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(26).png File: .gitbook/assets/image (27).png (23.66 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(27).png File: .gitbook/assets/image (28).png (19.71 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(28).png File: .gitbook/assets/image (29).png (49.29 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(29).png File: .gitbook/assets/image (3) (1).png (26.71 KB) --------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(3)%20(1).png File: .gitbook/assets/image (3).png (48.11 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(3).png File: .gitbook/assets/image (30).png (46.26 KB) ------------------------------------ Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(30).png File: .gitbook/assets/image (31).png (279.88 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(31).png File: .gitbook/assets/image (32).png (140.27 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(32).png File: .gitbook/assets/image (33).png (103.99 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(33).png File: .gitbook/assets/image (34).png (33.24 KB) ------------------------------------ Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(34).png File: .gitbook/assets/image (4).png (217.30 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(4).png File: .gitbook/assets/image (5).png (11.02 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(5).png File: .gitbook/assets/image (6).png (10.85 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(6).png File: .gitbook/assets/image (7).png (10.18 KB) ----------------------------------- Binary file, content not included. 
Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(7).png File: .gitbook/assets/image (8).png (153.46 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(8).png File: .gitbook/assets/image (9).png (43.99 KB) ----------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image%20(9).png File: .gitbook/assets/image.png (151.77 KB) ------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/image.png File: .gitbook/assets/wvm-req (2).png (6.94 MB) ------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/wvm-req%20(2).png File: .gitbook/assets/wvm_network_flow (1).png (94.20 KB) ---------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/wvm_network_flow%20(1).png File: .gitbook/assets/wvm_network_flow (3).png (96.43 KB) ---------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/wvm_network_flow%20(3).png File: .gitbook/assets/wvm_network_flow (5).png (73.84 KB) ---------------------------------------------- Binary file, content not included. Download URL: https://raw.githubusercontent.com/loadnetwork/gitbook-sync/main/.gitbook/assets/wvm_network_flow%20(5).png File: README.md (427 B) --------------- --- description: >- Load Network is a high performance blockchain for data storage - cheaply and verifiably store, access, and compute with any data. 
--- # Load Network

Load Network ≈ The onchain data center

{% hint style="warning" %}
Load Network has not issued a token yet. It is currently running on testnet.
{% endhint %}

File: SUMMARY.md (3.48 KB)
----------------

# Table of contents

* [Load Network](README.md)
* [Quickstart](quickstart.md)

## About Load Network

* [Overview](about-load-network/overview.md)
* [Network Releases Nomenclature](about-load-network/network-releases-nomenclature.md)
* [Load Network Alphanets](about-load-network/load-network-alphanets.md)
* [Key Features](about-load-network/key-features.md)
* [ELI5](about-load-network/eli5.md)

## Using Load Network

* [Compatibility & Performance](using-load-network/compatibility-and-performance.md)
* [Network configurations](using-load-network/network-configurations.md)
* [Load Network Bundler](using-load-network/load-network-bundler.md)
* [0xbabe2: Large Data Uploads](using-load-network/0xbabe2-large-data-uploads.md)
* [Load Network Bundler Gateways](using-load-network/load-network-bundler-gateways.md)
* [Load Network Precompiles](using-load-network/load-network-precompiles.md)
* [LN-Native JSON-RPC Methods](using-load-network/ln-native-json-rpc-methods.md)
* [load:// Data Protocol](using-load-network/load-data-protocol.md)
* [Self-Hosted RPC Proxies](using-load-network/self-hosted-rpc-proxies/README.md)
  * [Rust Proxy](using-load-network/self-hosted-rpc-proxies/rust-proxy.md)
  * [JavaScript Proxy](using-load-network/self-hosted-rpc-proxies/javascript-proxy.md)
* [Code & Integrations Examples](using-load-network/code-and-integrations-examples/README.md)
  * [ethers (etherjs)](using-load-network/code-and-integrations-examples/ethers-etherjs.md)
  * [Deploying an ERC20 Token](using-load-network/code-and-integrations-examples/deploying-an-erc20-token.md)

## load hyperbeam

* [About Load HyperBEAM](load-hyperbeam/about-load-hyperbeam.md)
* [\~evm@1.0 device](load-hyperbeam/evm-1.0-device.md)

***

* [\~kem@1.0 device](kem-1.0-device.md)
* [\~riscv-em@1.0 device](riscv-em-1.0-device.md)

## Load Network Cloud Platform

* [Cloud Platform (LNCP)](load-network-cloud-platform/cloud-platform-lncp.md)
* [Load S3 Protocol](load-network-cloud-platform/load-s3-protocol.md)
* [load0 data layer](load-network-cloud-platform/load0-data-layer.md)

## Load Network for evm chains

* [Ledger Archiver (any chain)](load-network-for-evm-chains/ledger-archiver-any-chain.md)
* [Ledger Archivers: State Reconstruction](load-network-for-evm-chains/ledger-archivers-state-reconstruction.md)
* [DA ExEx (Reth-only)](load-network-for-evm-chains/da-exex-reth-only.md)
* [Deploying OP-Stack Rollups](load-network-for-evm-chains/deploying-op-stack-rollups.md)

## Load Network ExEx

* [About ExExes](load-network-exex/about-exexes.md)
* [ExEx.rs](load-network-exex/exex.rs.md)
* [Load Network ExExes](load-network-exex/load-network-exexes/README.md)
  * [Google BigQuery ETL](load-network-exex/load-network-exexes/google-bigquery-etl.md)
  * [Borsh Serializer](load-network-exex/load-network-exexes/borsh-serializer.md)
  * [Arweave Data Uploader](load-network-exex/load-network-exexes/arweave-data-uploader.md)
  * [Load Network DA ExEx](load-network-exex/load-network-exexes/load-network-da-exex.md)
  * [Load Network WeaveDrive ExEx](load-network-exex/load-network-exexes/load-network-weavedrive-exex.md)

## Load Network Arweave Data Protocols

* [LN-ExEx Data Protocol](load-network-arweave-data-protocols/ln-exex-data-protocol.md)
* [Load Network Precompiles Data Protocol](load-network-arweave-data-protocols/load-network-precompiles-data-protocol.md)

## DA Integrations

* [LN-EigenDA Proxy Server](da-integrations/ln-eigenda-proxy-server.md)
* [LN-Dymension: DA client for RollApps](da-integrations/ln-dymension-da-client-for-rollap.md)

Directory: about-load-network

File: about-load-network/eli5.md (5.99 KB)
--------------------------------

---
description: ELI5 Load Network
---

# ELI5

### What is Load Network?
Load is a high-performance blockchain built toward solving the EVM storage dilemma with [Arweave](https://arweave.org) and AO's [HyperBEAM](https://github.com/permaweb/HyperBEAM). It gives the coming generation of high-performance chains a place to settle and store onchain data without worrying about cost, availability, or permanence. Load Network offers scalable and cost-effective permanent storage by using Arweave as a decentralized hard drive, both at the node and smart contract layer, and HyperBEAM for stack decentralization, leveraging a set of custom-built devices. This makes it possible to store large data sets and run web2-like applications without incurring EVM storage fees. Load Network's storage as calldata [costs around $0.05/MB, compared with Ethereum's $450/MB.](https://wvm.dev/calculator)
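The calculator figures quoted above imply a cost gap of roughly four orders of magnitude. A quick sanity check, using only the two per-MB prices cited on this page:

```javascript
// Per-MB calldata storage costs quoted on this page (USD); live numbers at https://wvm.dev/calculator
const ETH_CALLDATA_PER_MB = 450;
const LOAD_CALLDATA_PER_MB = 0.05;

// How many times cheaper Load Network calldata storage is than Ethereum's:
const ratio = Math.round(ETH_CALLDATA_PER_MB / LOAD_CALLDATA_PER_MB);
console.log(`~${ratio}x cheaper`); // ~9000x cheaper
```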

Load Network Highlights

### Decentralized Full Data Storage Stack

Load Network mainnet is being built to be the highest-performing EVM blockchain focused on data storage, with the largest base-layer transaction input size limit (\~16 MB), the largest EVM transaction ever (the \~0.5 TB 0xbabe transaction), very high network data throughput (multi-gigagas per second), high TPS, decentralization, a full data storage stack (permanent and temporal), and decentralized data gateways and data bundlers.

Load Network achieves high decentralization by using Arweave as a decentralized hard drive, HyperBEAM as a decentralized cloud stack and extended consensus, and by allowing open network participation (nodes). Load Network will offer both permanent and temporal data storage while maintaining decentralized and censorship-resistant data retrieval and ingress (gateways, bundling services, etc.).

### Use Cases and How to Integrate

#### Ledger Data Storage

Chains like Metis, RSS3, and Dymension use Load Network to permanently store onchain data, acting as a decentralized archival node. If you look at the common problems flagged on [L2Beat](https://l2beat.com/scaling/summary), many come down to centralized sources of truth and data that can't be independently audited or reconstructed if the chain fails. LN adds a layer of protection and transparency to L2s, ruling out some of the failure modes of centralization. Learn more about the [wvm-archiver tool here](../load-network-for-evm-chains/ledger-archiver-any-chain.md).

#### High-Throughput Data Availability (DA)

Load Network can plug into a typical EVM L2's stack as a DA layer that's 10-15x cheaper than solutions like [Celestia and Avail](https://wvm.dev/calculator), and guarantees data permanence on Arweave. LN was built to handle DA for the coming generation of supercharged rollups.
With a throughput of \~62MB/s, it could handle DA for [every major L2](https://rollup.wtf) and still have 99%+ capacity left over. Check out the custom [DA-ExEx](../load-network-for-evm-chains/da-exex-reth-only.md) to use LOAD-DA in any Reth node in fewer than 80 LoCs, as well as the [EigenDA-LN Sidecar Server Proxy](../da-integrations/ln-eigenda-proxy-server.md) to use EigenDA's data availability with Load Network securing its archiving.

#### Storage-Heavy dApps

Load Network offers scalable and cost-effective storage by using Arweave as a decentralized hard drive and HyperBEAM as a decentralized cloud. This makes it possible to store large data sets and run web2-like applications without incurring EVM storage fees. We have developed the first-ever Reth precompiles to natively facilitate a [bidirectional data pipeline with Arweave](https://blog.wvm.dev/weavevm-arweave-precompiles/) at the smart contract API level. Check out the full list of LN precompiled contracts [here](../using-load-network/load-network-precompiles.md).

#### Foundational Layer (L1) for Rollups

Load Network is an EVM-compatible blockchain, so rollups can be deployed on LN just as they are on Ethereum. In contrast to Ethereum or other EVM L1s, rollups deployed on top of LN benefit out of the box from LN's data-centric features (for rollup data settlement and DA). Rollups deployed on Load Network use the native LN gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum. For example, we released a technical guide for developers interested in deploying OP-Stack rollups on LN. [Check it out here](https://github.com/weaveVM/developers/blob/main/guides/op-rollup-deployment.md).
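To put the headroom claim in numbers: even a generous estimate of the combined data rate of all major L2s sits far below 62 MB/s. The combined-L2 figure below is a hypothetical placeholder for illustration, not a measurement (see [rollup.wtf](https://rollup.wtf) for live data):

```javascript
// Load Network DA throughput, from this page (MB/s):
const LOAD_DA_MBPS = 62;

// Hypothetical combined data rate of all major L2s (MB/s) -- an assumption
// for illustration only; check https://rollup.wtf for real-time numbers.
const ASSUMED_COMBINED_L2_MBPS = 0.5;

const usedPct = (ASSUMED_COMBINED_L2_MBPS / LOAD_DA_MBPS) * 100;
console.log(`${(100 - usedPct).toFixed(1)}% capacity left over`); // 99.2% capacity left over
```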
### Explore Load Network Ecosystem Dapps (Evolving)

* [Load Network Cloud Platform](../load-network-cloud-platform/cloud-platform-lncp.md) — The UI of the onchain data center
* [Permacast](https://permacast.app) — A decentralized media platform on Load Network
* [Tapestry Finance](https://www.tapestry.fi/) — Uniswap V2 fork
* [shortcuts.bot](https://shortcuts.bot/) — Short links for Load Network txids
* [load.yachts](https://www.load.yachts/) — Subdomain resolver for Load Network content
* [onchain.rs](https://onchain.rs) — Onchain Dropbox alternative
* [relic.bot](https://relic.bot) — Onchain Instagram
* [fairytale.sh](https://fairytale.sh) — Onchain publishing toolkit
* [tokenize.rs](https://app.gitbook.com/s/z2gd4Irh30FSnal6SJnL/) — Tokenize any data on Load Network
* [bridge.load.network](https://bridge.load.network) — Hyperlane bridge (Load Alphanet <> Ethereum Holesky)
* [mediadao.xyz](https://mediadao.xyz) — A club for permanent content preservation
* [Dymension.xyz Roll-Apps](https://portal.dymension.xyz/rollapps) — Deploy a dym roll-app using Load DA

### Useful Links

* [Documentation](overview.md)
* [GitHub Organization](https://github.com/weaveVM)
* [Blog](https://blog.wvm.dev)
* [Twitter](https://x.com/weavevm)
* [Discord](https://dsc.gg/wvm)
* [Explorer](https://explorer.wvm.dev)
* [Data storage price calculator](https://wvm.dev/calculator)
* [Alphanet faucet](https://wvm.dev/faucet)

File: about-load-network/key-features.md (4.25 KB)
----------------------------------------

---
description: Exploring Load Network key features
---

# Key Features

Let's explore the key features of Load Network:

### Beefy Block Producer

Load Network achieves enterprise-like performance by limiting block production to beefy hardware nodes while maintaining trustless and decentralized block validation.
What this means is that anyone with a sufficient amount of $LOAD tokens to meet the PoS staking threshold, plus the necessary hardware and internet connectivity (super-node, enterprise hardware), can run a node. This approach is inspired by Vitalik Buterin's ["Endgame"](https://vitalik.eth.limo/general/2021/12/06/endgame.html) post:

> **Block **_**production**_** is centralized, block **_**validation**_** is trustless and highly decentralized, and censorship is still prevented**.

These "super nodes" producing Load Network blocks result in a high-performance EVM network.

### Large Block Size

Raising the gas limit increases the block size and the number of operations per block, affecting both history growth and state growth (the latter being mainly relevant to the point here). Load Network Alphanet V2 (formerly WeaveVM V2) raised the gas limit to 500M gas (i.e. 500 Mgas/s) and lowered the gas cost per non-zero byte to 8. These changes result in a larger maximum theoretical block size of 62 MB and, consequently, a network data throughput of \~62 MBps. This high data throughput can be handled thanks to the beefy-block-production approach of super nodes plus hardware acceleration.
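The 62 MB figure falls out of the two parameters above. A minimal sketch, assuming calldata made entirely of non-zero bytes at 8 gas each, and the one-block-per-second cadence implied by 500 Mgas/s:

```javascript
// Alphanet V2 parameters from this page:
const GAS_LIMIT = 500_000_000;   // gas per block
const GAS_PER_NONZERO_BYTE = 8;  // lowered from Ethereum's 16 (EIP-2028)

// Max theoretical calldata per block:
const maxBytes = GAS_LIMIT / GAS_PER_NONZERO_BYTE;
console.log(maxBytes / 1_000_000); // 62.5 (MB) -> ~62 MB blocks, ~62 MBps at 1 block/s
```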

Ethereum Scaling Bottlenecks

### High-Throughput DA

Up until now, there's been no real-world, scalable DA layer ready to handle high data throughput with permanent storage. In LOAD Alphanet V2, we've already reached 62 MBps, with a projection to hit 125 MBps in mainnet.

### Parallel Execution

To reduce the gas fees consumed by EVM opcode execution, we aim to use a parallel-execution EVM client for the Reth node in mainnet.

### EVM interface for Arweave Data: Permanent History

Load Network uses a set of Reth execution extensions (ExExes) to serialize each block in Borsh, then compress it with Brotli before sending it to Arweave. These computations ensure a cost-efficient, permanent history backup on Arweave. This feature is crucial for other L1s/L2s using LOAD for data settling (LOADing). In the [diagrams & benchmarks here](https://github.com/weaveVM/wvm-research), we show the difference between various compression algorithms applied to a Borsh-serialized empty block (zero transactions) versus a JSON-serialized one: Borsh serialization combined with Brotli compression gives the most efficient compression ratio in the serialization-compression pipeline.

### Cost-Efficient Data Settling

LOAD's hyper computation, supercharged hardware, and interface with Arweave result in significantly cheaper data settlement costs on Load Network, which include the Arweave fees that cover archiving. [Check the comparison calculator for realtime data](https://www.wvm.dev/calculator).

Data LOADing cost comparison

Even compared to temporary blob-based solutions, Load Network still offers a significantly cheaper permanent data solution (calldata).

### Load is L0 for EVM L1s/L2s

Load Network can be used either as a DA solution or for data settlement (like Ethereum). Since storing data on Load Network is very cheap compared to other EVM solutions, the network can be labeled an L0 for other L1s and L2s. Load Network offers self-DA secured by network economics, along with a permanent data archive secured by [Arweave](https://arweave.org).

### Bidirectional data pipeline with Arweave

The LOAD team has developed the first precompiles that achieve a native bidirectional data pipeline with the Arweave network. In other words, with these precompiles (currently supported by the Load Network testnet), you can read data from Arweave and send data to Arweave trustlessly and natively from a Solidity smart contract. [Learn more about Load Network precompiles in this section.](../using-load-network/load-network-precompiles.md)

File: about-load-network/load-network-alphanets.md (1.87 KB)
--------------------------------------------------

---
description: A list of Load Network Alphanet Releases
---

# Load Network Alphanets

The table below does not include the list of minor releases between major Alphanet releases.
For the full changelogs and releases, check them out here: [https://github.com/weaveVM/wvm-reth/releases](https://github.com/weaveVM/wvm-reth/releases) | Alphanet | Blog Post | Changelogs | | --------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | | `v1` | [https://blog.wvm.dev/testnet-is-live/](https://blog.wvm.dev/testnet-is-live/) | [https://github.com/weaveVM/wvm-reth/releases/tag/v0.1.0](https://github.com/weaveVM/wvm-reth/releases/tag/v0.1.0) | | `v2` | [https://blog.wvm.dev/alphanet-v2/](https://blog.wvm.dev/alphanet-v2/) | [https://github.com/weaveVM/wvm-reth/releases/tag/v0.2.2](https://github.com/weaveVM/wvm-reth/releases/tag/v0.2.2) | | `v3` | [https://blog.wvm.dev/alphanet-v3/](https://blog.wvm.dev/alphanet-v3/) | [https://github.com/weaveVM/wvm-reth/releases/tag/v0.3.0](https://github.com/weaveVM/wvm-reth/releases/tag/v0.3.0) | | `v4 (LOAD Inception)` | [https://blog.wvm.dev/alphanet-v4/](https://blog.wvm.dev/alphanet-v4/) | [https://github.com/weaveVM/wvm-reth/releases/tag/v0.4.0](https://github.com/weaveVM/wvm-reth/releases/tag/v0.4.0) | | `v5` | [https://blog.load.network/alphanet-v5/](https://blog.load.network/alphanet-v5/) | [https://github.com/weaveVM/wvm-reth/releases/tag/v0.5.3](https://github.com/weaveVM/wvm-reth/releases/tag/v0.5.3) | File: about-load-network/network-releases-nomenclature.md (770 B) --------------------------------------------------------- --- description: 'Load Network Releases: Understanding Our Testnets' --- # Network Releases Nomenclature {% hint style="info" %} Both Alphanets and Devnets are testnet networks with no monetary value tied to the Load Network mainnet. They serve different purposes in our development pipeline {% endhint %} ### Alphanets: Stable Testnets * Designed for user/dev exploration and testing. 
* Features a low frequency of breaking changes.
* Provides a more reliable environment for developers and users to interact with Load Network.

### Devnets: Experimental Testnets

* Acts as a testing ground for the Alphanets.
* Characterized by frequent breaking changes and potential instability.
* Playground for testing new features, EIPs, and experimental concepts.

File: about-load-network/overview.md (1.03 KB)
------------------------------------

---
description: Defining Load Network
---

# Overview

### Abstract

Load Network is a high-performance blockchain for onchain data storage: cheaply and verifiably store and access any data. As a high-performance, data-centric EVM network, Load Network maximizes scale and transparency for L1s, L2s, and data-intensive dApps. Load Network is dedicated to solving the problem of onchain data storage.

Load Network offloads storage to [Arweave](https://arweave.org/) and achieves high-performance computation (decoupled from the EVM L1 itself) by utilizing [ao-hyperbeam](../load-hyperbeam/about-load-hyperbeam.md) custom devices, giving any other chain a way to easily plug in a robust permanent storage layer powered by a hyperscalable network of EVM nodes with bleeding-edge throughput capacity.

### Load Network is ex-WeaveVM Network

Before March 2025, Load Network (abbreviations: LOAD or LN) was named WeaveVM Network (WVM). All existing references to WeaveVM (naming, links, etc.) in the documentation should be treated as Load Network.
Directory: da-integrations

File: da-integrations/ln-dymension-da-client-for-rollap.md (6.52 KB)
----------------------------------------------------------

---
description: >-
  Description of the Load Network integration as a Data Availability client for
  Dymension RollApps
---

# LN-Dymension: DA client for RollApps

#### Links

[https://dymension.xyz](https://dymension.xyz)

{% embed url="https://github.com/dymensionxyz/rollapp-evm" %}

{% embed url="https://github.com/dymensionxyz/dymint/" %}

#### Key Details

* Load Network provides a gateway to Arweave's permanent storage, paired with LN's own high data throughput for the permanently stored data.
* The current maximum encoded blob size is 8 MB (8\_388\_608 bytes).
* _**Load Network is currently operating as a public testnet (Alphanet) - it is not recommended for use in a production environment.**_

#### Prerequisites and Resources

1. Understand how to boot a basic Dymension RollApp and how to configure it.
2. Obtain test tLOAD tokens through our [faucet](https://wvm.dev/faucet) for testing purposes.
3. Monitor your transactions using the [Load Network explorer](https://explorer.load.network).

_**How it works**_

You may choose to use Load Network as the Data Availability layer of your RollApp. We assume that you know how to boot and configure the basics of your dymint RollApp. As an example, you may use the [https://github.com/dymensionxyz/rollapp-evm](https://github.com/dymensionxyz/rollapp-evm) repository; the example uses a "mock" DA client.
To use Load Network, simply set the following environment variables \ before the config generation step using init.sh:\ `export DA_CLIENT="weavevm" # This is the key change`\ `export WVM_PRIV_KEY="your_hex_string_wvm_priv_key_without_0x_prefix"`\ \ init.sh will generate a basic configuration for `da_config.json` in `dymint.toml`, which should look like this:\ \ `da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":60000000000,"private_key_hex":"your_hex_string_load_priv_key_without_0x_prefix"}'`\ \ In this example we use the `PRIVATE_KEY` of your LN address. This is not the most secure way to handle transaction signing, which is why we also provide the ability to use web3signer as a signing method. To enable web3signer, you will need to change the init.sh script and add the corresponding fields, or change `da_config.json` in `dymint.toml` directly. \ e.g.\ \ `da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":"60000000000","web3_signer_endpoint":"http://localhost:9000"}'`\ \ To enable TLS, the following fields should be added to the JSON file: `web3_signer_tls_cert_file`\ `web3_signer_tls_key_file`\ `web3_signer_tls_ca_cert_file`\ \ \ ### Web3 signer [Web3Signer](https://docs.web3signer.consensys.net/en/latest/) is a tool by Consensys which allows _remote signing_. ### Warnings Using a remote signer comes with risks, please read the web3signer docs.
However, this is the recommended way to sign transactions for enterprise users and production environments.\ Web3Signer is not maintained by the Load Network team.\ \ Example of the simplest local web3signer deployment (for testing purposes): [https://github.com/allnil/web3signer\_test\_deploy](https://github.com/allnil/web3signer_test_deploy) \ \ Example of the configuration used:

```
# Set environment variables
export DA_CLIENT="weavevm" # This is the key change
export WVM_PRIV_KEY="your_hex_string_wvm_priv_key_without_0x_prefix"
export ROLLAPP_CHAIN_ID="rollappevm_1234-1"
export KEY_NAME_ROLLAPP="rol-user"
export BASE_DENOM="arax"
export MONIKER="$ROLLAPP_CHAIN_ID-sequencer"
export ROLLAPP_HOME_DIR="$HOME/.rollapp_evm"
export SETTLEMENT_LAYER="mock"

# Initialize and start
make install BECH32_PREFIX=$BECH32_PREFIX
export EXECUTABLE="rollapp-evm"
$EXECUTABLE config keyring-backend test
sh scripts/init.sh

# Verify dymint.toml configuration
cat $ROLLAPP_HOME_DIR/config/dymint.toml | grep -A 5 "da_config"

dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "max_idle_time" -v "2s"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "max_proof_time" -v "1s"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "batch_submit_time" -v "30s"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "p2p_listen_address" -v "/ip4/0.0.0.0/tcp/36656"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "settlement_layer" -v "mock"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "node_address" -v "http://localhost:36657"
dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "settlement_node_address" -v "http://127.0.0.1:36657"

# Start the rollapp
$EXECUTABLE start --log_level=debug \
  --rpc.laddr="tcp://127.0.0.1:36657" \
  --p2p.laddr="tcp://0.0.0.0:36656" \
  --proxy_app="tcp://127.0.0.1:36658"
```

\ In the rollapp-evm log you will eventually see something like this:

```log
INFO[0000] weaveVM: successfully sent transaction[tx hash
0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a] module=weavevm
INFO[0000] wvm tx hash[hash 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a] module=weavevm
DEBU[0000] waiting for receipt[txHash 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a attempt 0 error get receipt failed: failed to get transaction receipt: not found] module=weavevm
INFO[0002] Block created.[height 35 num_tx 0 size 786] module=block_manager
DEBU[0002] Applying block[height 35 source produced] module=block_manager
DEBU[0002] block-sync advertise block[error failed to find any peer in table] module=p2p
INFO[0002] MINUTE EPOCH 6[] module=x/epochs
INFO[0002] Epoch Start Time: 2025-01-13 09:21:03.239539 +0000 UTC[] module=x/epochs
INFO[0002] commit synced[commit 436F6D6D697449447B5B3130342038203131302032303620352031323920393020343520313633203933203235322031352031343320333920313538203131342035382033352031352038322038203939203132392032333520313731203230382031392032343320313932203139203233352036355D3A32337D]
DEBU[0002] snapshot is skipped[height 35]
INFO[0002] Gossipping block[height 35] module=block_manager
DEBU[0002] Gossiping block.[len 792] module=p2p
DEBU[0002] indexed block[height 35] module=txindex
DEBU[0002] indexed block txs[height 35 num_txs 0] module=txindex
INFO[0002] Produced empty block.[] module=block_manager
DEBU[0002] Added bytes produced to bytes pending submission counter.[bytes added 786 pending 15719] module=block_manager
INFO[0003] data available in weavevm[wvm_tx 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a wvm_block 0xe897eab56aee50b97a0f2bd1ff47af3c834e96ca18528bb869c4eafc0df583be wvm_block_number 5651207] module=weavevm
DEBU[0003] Submitted blob to DA successfully.[] module=weavevm
```

File: da-integrations/ln-eigenda-proxy-server.md (4.95 KB) ------------------------------------------------ --- description: Permanent EigenDA blobs --- # LN-EigenDA Proxy Server ### Links EigenDA proxy:
[repository](https://github.com/weaveVM/wvm-eigenda-proxy/tree/feat/eigenda-wvm-code-integration) ### About EigenDA Side Server Proxy LN-EigenDA wraps the [high-level EigenDA client](https://github.com/Layr-Labs/eigenda/blob/master/api/clients/eigenda_client.go), exposing endpoints for interacting with the EigenDA disperser in conformance to the [OP Alt-DA server spec](https://specs.optimism.io/experimental/alt-da.html), and adding disperser verification logic. This simplifies integrating EigenDA into various rollup frameworks by minimizing the footprint of changes needed within their respective services. ### About LN-EigenDA Side Server Proxy Integration This is a Load Network integration as a secondary backend of eigenda-proxy. In this scope, Load Network provides an EVM gateway/interface for EigenDA blobs on Arweave's Permaweb, removing the need for trust assumptions and reliance on centralized third-party services to sync historical data, and providing a "pay once, save forever" data storage feature for EigenDA blobs. #### Key Details * Current maximum encoded blob size is 8 MiB (8\_388\_608 bytes). * _**Load Network is currently operating in public testnet (Alphanet) - not recommended for use in production environments.**_ #### Prerequisites and Resources 1. Review the configuration parameters table and `.env` file settings for the Holesky network. 2. Obtain test tLOAD tokens through our [faucet](https://wvm.dev/faucet) for testing purposes. 3. Monitor your transactions using the [Load Network explorer.](https://explorer.load.network) ### Usage Examples Please double-check the `.env` file values you start the eigenda-proxy binary with;\ environment variables may conflict with flags.
\ Start the eigenda proxy with an LN private key:

```
./bin/eigenda-proxy \
  --addr 127.0.0.1 \
  --port 3100 \
  --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
  --eigenda.signer-private-key-hex $PRIVATE_KEY \
  --eigenda.max-blob-length 8MiB \
  --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
  --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
  --weavevm.endpoint https://alphanet.load.network/ \
  --weavevm.chain_id 9496 \
  --weavevm.enabled \
  --weavevm.private_key_hex $WVM_PRIV_KEY \
  --storage.fallback-targets weavevm \
  --storage.concurrent-write-routines 2
```

POST command:

```
curl -X POST "http://127.0.0.1:3100/put?commitment_mode=simple" \
  --data-binary "some data that will successfully be written to EigenDA" \
  -H "Content-Type: application/octet-stream" \
  --output response.bin
```

\ GET command:

```
COMMITMENT=$(xxd -p response.bin | tr -d '\n' | tr -d ' ')
curl -X GET "http://127.0.0.1:3100/get/0x$COMMITMENT?commitment_mode=simple" \
  -H "Content-Type: application/octet-stream"
```

## Examples using Web3signer as a remote signer ### Web3 signer [Web3Signer](https://docs.web3signer.consensys.net/en/latest/) is a tool by Consensys which allows _remote signing_. ### Warnings Using a remote signer comes with risks, please read the web3signer docs.
However, this is the recommended way to sign transactions for enterprise users and production environments.\ Web3Signer is not maintained by the Load Network team.\ \ Example of the simplest local web3signer deployment (for testing purposes): [https://github.com/allnil/web3signer\_test\_deploy](https://github.com/allnil/web3signer_test_deploy) Start the eigenda proxy with the remote signer:

```
./bin/eigenda-proxy \
  --addr 127.0.0.1 \
  --port 3100 \
  --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
  --eigenda.signer-private-key-hex $PRIVATE_KEY \
  --eigenda.max-blob-length 8MiB \
  --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
  --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
  --weavevm.endpoint https://alphanet.load.network/ \
  --weavevm.chain_id 9496 \
  --weavevm.enabled \
  --weavevm.web3_signer_endpoint http://localhost:9000 \
  --storage.fallback-targets weavevm \
  --storage.concurrent-write-routines 2
```

Start the eigenda proxy with web3signer TLS enabled:

```
./bin/eigenda-proxy \
  --addr 127.0.0.1 \
  --port 3100 \
  --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
  --eigenda.signer-private-key-hex $PRIVATE_KEY \
  --eigenda.max-blob-length 8MiB \
  --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
  --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
  --weavevm.endpoint https://testnet-rpc.wvm.dev/ \
  --weavevm.chain_id 9496 \
  --weavevm.enabled \
  --weavevm.web3_signer_endpoint https://localhost:9000 \
  --storage.fallback-targets weavevm \
  --storage.concurrent-write-routines 2 \
  --weavevm.web3_signer_tls_cert_file $SOME_PATH_TO_CERT \
  --weavevm.web3_signer_tls_key_file $SOME_PATH_TO_KEY \
  --weavevm.web3_signer_tls_ca_cert_file $SOME_PATH_TO_CA_CERT
```

File: kem-1.0-device.md (4.66 KB) ----------------------- --- description: The Kernel Execution Machine device --- # \~kem@1.0 device ## About The `kernel-em` NIF (kernel execution machine - `kem@1.0` device) is a HyperBEAM Rust device built on top of [wgpu](https://github.com/gfx-rs/wgpu)
to offer a general GPU-instruction compute execution machine for `.wgsl` functions (shaders, kernels). With `wgpu` being a cross-platform GPU graphics API, hyperbeam node operators can add the KEM device to offer a compute platform for KEM functions. And with the ability to be called from within an ao process through `ao.resolve` (`kem@1.0` device), KEM functions offer great flexibility to run as GPU compute sidecars alongside ao processes. {% hint style="warning" %} _**This device is experimental, in PoC stage**_ {% endhint %} ### KEM Technical Architecture KEM function source code is deployed on Arweave (example, double integer: [btSvNclyu2me\_zGh4X9ULVRZqwze9l2DpkcVHcLw9Eg](https://arweave.net/btSvNclyu2me_zGh4X9ULVRZqwze9l2DpkcVHcLw9Eg)), and the source code TXID is used as the KEM function ID.

```rust
fn execute_kernel(
    kernel_id: String,
    input_data: rustler::Binary,
    output_size_hint: u64,
) -> NifResult<Vec<u8>> {
    let kernel_src = retrieve_kernel_src(&kernel_id).unwrap();
    let kem = pollster::block_on(KernelExecutor::new());
    let result =
        kem.execute_kernel_default(&kernel_src, input_data.as_slice(), Some(output_size_hint));
    Ok(result)
}
```

A KEM function execution takes 3 parameters: the function ID, binary input data, and an output size hint ratio (e.g., `2` means the output is expected to be no more than 2x the size of the input). The KEM takes the input, retrieves the kernel source code from Arweave, executes the GPU instructions on the hyperbeam node operator's hardware against the given input, then returns the byte results.
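To make the parameter semantics concrete, here is a hedged TypeScript sketch. The `KemCall` shape and `maxOutputSize` helper are illustrative only (they are not part of the device API); they model how the output size hint ratio bounds the result buffer:

```typescript
// Illustrative shape of a KEM invocation: the kernel's Arweave TXID,
// raw input bytes, and an output size hint ratio. Not the device API.
interface KemCall {
  kernelId: string;       // Arweave TXID of the .wgsl source
  input: Uint8Array;      // binary input data
  outputSizeHint: number; // e.g. 2 => output is at most 2x the input size
}

// The hint ratio bounds the output buffer the executor allocates.
function maxOutputSize(inputLen: number, hint: number): number {
  return inputLen * hint;
}

const call: KemCall = {
  kernelId: "btSvNclyu2me_zGh4X9ULVRZqwze9l2DpkcVHcLw9Eg", // double-integer example kernel
  input: new Uint8Array(1024),
  outputSizeHint: 2,
};

// A 1 KiB input with a hint of 2 bounds the output at 2 KiB.
console.log(maxOutputSize(call.input.length, call.outputSizeHint)); // 2048
```

The hint keeps GPU buffer allocation predictable without the executor having to inspect the kernel itself.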

Technical Architecture Diagram

### On Writing Kernel Functions As the kernel execution machine (KEM) is designed to have I/O as bytes, with the shader entrypoint standardized as `main`, a kernel function should name its entrypoint `main`, declare the shader as `@compute`, and take its input/output as bytes; here is an example skeleton function:

```wgsl
// SPDX-License-Identifier: GPL-3.0

// input as u32 array
@group(0) @binding(0) var<storage, read> input_bytes: array<u32>;
// output as u32 array
@group(0) @binding(1) var<storage, read_write> output_bytes: array<u32>;

// a work group of 256 threads
@compute @workgroup_size(256)
// main compute kernel entry point
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
}
```

### Uniform Parameters Uniform parameters have been introduced as well, allowing you to pass configuration data and constants to your compute shaders. Uniforms are read-only data that remain constant across all invocations of the shader. Here is an example skeleton function with uniform parameters:

```wgsl
// SPDX-License-Identifier: GPL-3.0

// input as u32 array
@group(0) @binding(0) var<storage, read> input_bytes: array<u32>;
// output as u32 array
@group(0) @binding(1) var<storage, read_write> output_bytes: array<u32>;
// uniform parameters for configuration
@group(0) @binding(2) var<uniform> params: vec2<u32>; // example: param1, param2

// a work group of 256 threads
@compute @workgroup_size(256)
// main compute kernel entry point
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    // Access uniform parameters
    let param1 = i32(params.x);
    let param2 = i32(params.y);

    // your kernel logic here
}
```

### Example: Image Glitcher Using the image glitcher kernel function - [source code](https://github.com/loadnetwork/load_hb/blob/main/native/kernel_em_nif/src/kernels/glitch-berlin.wgsl)

original image

glitched via the kernel function - minted as AO NFT on Bazar https://bazar.arweave.net/#/asset/0z8MNwaRpkXhEgIxUv8ESNhtHxVGNfFkmGkoPtu0amY

### References * device source code: [native/kernel\_em\_nif](https://github.com/loadnetwork/load_hb/tree/main/native/kernel_em_nif) * hb device interface: [dev\_kem.erl](https://github.com/loadnetwork/load_hb/blob/main/src/dev_kem.erl) * nif tests: [kem\_nif\_test.erl](https://github.com/loadnetwork/load_hb/blob/main/src/kem_nif_test.erl) * ao process example: [kem-device.lua](https://github.com/loadnetwork/load_hb/blob/main/test/kem-device.lua) Directory: load-hyperbeam File: load-hyperbeam/about-load-hyperbeam.md (1.54 KB) -------------------------------------------- --- description: Load Network custom HyperBEAM devices --- # About Load HyperBEAM

hb.load.rs

### About HyperBEAM HyperBEAM is a client implementation of the AO-Core protocol, written in Erlang. It can be seen as the 'node' software for the decentralized operating system that AO enables, abstracting hardware provisioning and details from the execution of individual programs. HyperBEAM node operators can offer the services of their machine to others inside the network by electing to execute any number of different `devices`, charging users for their computation as necessary. Each HyperBEAM node is configured using the `~meta@1.0` device, which provides an interface for specifying the node's hardware, supported devices, metering and payments information, amongst other configuration options. For more details, check out the HyperBEAM codebase: [https://github.com/permaweb/HyperBEAM](https://github.com/permaweb/HyperBEAM) ### load\_hb: Load Network HyperBEAM node with custom devices The [load\_hb](https://github.com/loadnetwork/load_hb) repository is our HyperBEAM fork with custom devices such as [\~evm@1.0](evm-1.0-device.md), [\~kem@1.0](../kem-1.0-device.md), and [\~riscv-em@1.0](../riscv-em-1.0-device.md). Our development is guided by the [Hyperbeam Accelerationism (hb/acc)](https://blog.decent.land/hb-acc/) manifesto initiated during Arweave Day Berlin 2025. Our main hyperbeam development is hosted on [hb.load.rs](https://hb.load.rs/) File: load-hyperbeam/evm-1.0-device.md (2.06 KB) -------------------------------------- --- description: The first Revm EVM device on HyperBEAM --- # \~evm@1.0 device ## About The `~evm@1.0` device: an EVM bytecode emulator built on top of Revm (version [v22.0.1](https://github.com/bluealloy/revm/releases/tag/v69)).
The device not only allows evaluation of bytecode (signed raw transactions) against a given state db, but also supports appchain creation, statefulness, EVM context customization (gas limit, chain id, contract size limit, etc.), and the elimination of the block gas limit by substituting it with a transaction-level gas limit. {% hint style="warning" %} _**This device is experimental, in PoC stage**_ Live demo at [ultraviolet.load.network](https://github.com/loadnetwork/load_hb/blob/main/native/load_revm_nif/ultraviolet.load.network) {% endhint %} ## Technical Architecture `eval_bytecode()` takes 3 inputs: a signed raw transaction (N.B.: chain id matters), a JSON-stringified state db, and the output state path (in this device, [./appchains](https://github.com/loadnetwork/load_hb/blob/main/native/load_revm_nif/appchains)).

```rust
#[rustler::nif]
fn eval_bytecode(
    signed_raw_tx: String,
    state: String,
    cout_state_path: String,
) -> NifResult<String> {
    let state_option = if state.is_empty() { None } else { Some(state) };
    let evaluated_state: (String, String) = eval(signed_raw_tx, state_option, cout_state_path)?;
    Ok(evaluated_state.0)
}

#[rustler::nif]
fn get_appchain_state(chain_id: &str) -> NifResult<String> {
    let state = get_state(chain_id);
    Ok(state)
}
```
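For intuition on the second argument, here is a hedged TypeScript sketch of what a JSON-stringified state db passed to `eval_bytecode()` might look like. The account-map shape below is purely hypothetical for illustration; the real schema is defined by the device's Revm state types:

```typescript
// Hypothetical account map, serialized as the `state` string argument.
// The actual state db schema is defined by the device and Revm.
const stateDb = {
  accounts: {
    "0x000000000000000000000000000000000000dead": {
      balance: "0xde0b6b3a7640000", // 1 ETH in wei, hex-encoded
      nonce: 0,
      code: "0x", // no bytecode: an externally owned account
      storage: {} as Record<string, string>,
    },
  },
};

// The device receives the state as a JSON string plus an output state path.
const stateArg = JSON.stringify(stateDb);
const coutStatePath = "./appchains"; // output path used by this device

// An empty `state` string means "no prior state" (state_option = None in the NIF).
console.log(stateArg.length > 0, coutStatePath);
```

Passing state as a plain JSON string keeps the Erlang/Rust NIF boundary simple: no shared binary schema, just a string in and the evaluated state string out.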
### References * device source code: [native/load\_revm\_nif](https://github.com/loadnetwork/load_hb/tree/main/native/load_revm_nif) * hb device interface: [dev\_evm.erl](https://github.com/loadnetwork/load_hb/blob/main/src/dev_evm.erl) * nif tests: [load\_revm\_nif\_test.erl](https://github.com/loadnetwork/load_hb/blob/main/src/load_revm_nif_test.erl) * ao process example: [evm-device.lua](https://github.com/loadnetwork/load_hb/blob/main/test/evm-device.lua) Directory: load-network-arweave-data-protocols File: load-network-arweave-data-protocols/ln-exex-data-protocol.md (2.75 KB) ------------------------------------------------------------------ --- description: About LN-ExEx Data Protocol on Arweave --- # LN-ExEx Data Protocol ### About The `LN-ExEx` data protocol on Arweave is responsible for archiving Load Network's full block data, which is posted to Arweave using the [Arweave Data Uploader Execution Extension (ExEx).](../load-network-exex/load-network-exexes/arweave-data-uploader.md) {% hint style="warning" %} After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "\*WeaveVM\*" onchain term (Arweave tag) to "\*LN\*" {% endhint %} ### Protocol Specifications The data protocol transactions follow the ANS-104 data item specifications. 
Each Load Network block is posted on Arweave, after borsh-brotli encoding, with the following tags:

| Tag Name | Tag Value | Description |
| -------- | --------- | ----------- |
| `Protocol` | `LN-ExEx` | Data protocol identifier |
| `ExEx-Type` | `Arweave-Data-Uploader` | The Load Network ExEx type |
| `Content-Type` | `application/octet-stream` | Arweave data transaction MIME type |
| `LN:Encoding` | `Borsh-Brotli` | Transaction's data encoding algorithms |
| `Block-Number` | `$value` | Load Network block number |
| `Block-Hash` | `$value` | Load Network block hash |
| `Client-Version` | `$value` | Load Network Reth client version |
| `Network` | `Alphanet vx.x.x` | Load Network Alphanet semver |
| `LN:Backfill` | `$value` | Boolean: true if the data was posted by a backfiller, false (or tag absent) if by the archiver |

### LN-ExEx Data Items Uploaders * Reth ExEx Archiver Address: [5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z](https://viewblock.io/arweave/address/5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z?tab=items) * Arweave-ExEx-Backfill Address: [F8XVrMQzsHiWfn1CaKtUPxAgUkATXQjXULWw3oVXCiFV](https://viewblock.io/arweave/address/F8XVrMQzsHiWfn1CaKtUPxAgUkATXQjXULWw3oVXCiFV?tab=items) File: load-network-arweave-data-protocols/load-network-precompiles-data-protocol.md (1.92 KB) ----------------------------------------------------------------------------------- --- description: About the Data Protocol of Load Network Precompile Contracts --- # Load Network Precompiles Data Protocol ### About Load Network has precompiled contracts that push data directly to Arweave as ANS-104 data items. One such precompile is the [`0x17`](https://docs.wvm.dev/using-weavevm/weavevm-precompiles#id-1-precompile-0x17-upload-data-from-solidity-to-arweave) precompile (`arweave_upload`).
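The precompile number doubles as its 20-byte EVM address (0x17 is decimal 23). A small TypeScript sketch of that relationship; the helper name is illustrative, not part of any Load Network API:

```typescript
// The arweave_upload precompile number.
const PRECOMPILE_NUMBER = 0x17; // decimal 23

// EVM precompiles live at low addresses: the number hex-encoded and
// left-padded with zeros to 20 bytes (40 hex chars).
function precompileAddress(n: number): string {
  return "0x" + n.toString(16).padStart(40, "0");
}

console.log(PRECOMPILE_NUMBER); // 23
console.log(precompileAddress(PRECOMPILE_NUMBER));
// 0x0000000000000000000000000000000000000017
```

This is why the onchain tag records the decimal value (23) while contracts call the precompile at the hex address.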
{% hint style="warning" %} After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "\*WeaveVM\*" onchain term (Arweave tag) to "\*LN\*" {% endhint %} #### Protocol Specifications The data protocol transactions follow the ANS-104 data item specifications. Each LN precompile transaction is posted on Arweave, after brotli compression, with the following tags:

| Tag Name | Tag Value | Description |
| -------- | --------- | ----------- |
| `LN:Precompile` | `true` | Data protocol identifier |
| `Content-Type` | `application/octet-stream` | Arweave data transaction MIME type |
| `LN:Encoding` | `Brotli` | Transaction's data encoding algorithms |
| `LN:Precompile-Address` | `$value` | The decimal precompile number (e.g. 0x17 has the Tag Value of 23) |

#### Load Network Precompile Data Items Uploaders * Load Network Reth Precompiles Address: [5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z](https://viewblock.io/arweave/address/5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z?tab=items) Directory: load-network-cloud-platform File: load-network-cloud-platform/cloud-platform-lncp.md (2.96 KB) -------------------------------------------------------- --- description: About Load Network Cloud Platform --- # Cloud Platform (LNCP)
Uploading data onchain shouldn’t be any more difficult than using Google Drive. The reason tools like Google Drive are popular is that they just work and are cheap/free. Their hidden downsides? You don’t own your data, it’s not permanent, and – especially for blockchain projects – it’s not useful for application developers. Users just want to put their data somewhere and forget about the upkeep. Developers just want a permanent reference to their data that resolves in any environment. Whichever you are, we built [cloud.load.network](http://cloud.load.network/) for you. The Load Network Cloud is an all-in-one tool to interact with various Load Network storage interfaces and pipelines: one UI, one API key, various integrations, with web2 UX. {% hint style="info" %} Using the API keys generated on cloud.load.network, you can access other features such as load0 and Load S3 storage. {% endhint %} ### The Rationale Since we started Load Network, we’ve had the vision of an onchain data center – a decentralized network offering high performance, cost effectiveness, high liveness, fault tolerance, low latency, fast finalization, data-centric features, and availability. Building a cloud platform, similar to Google Cloud Platform, means abstracting the robust infrastructure of the (onchain) data center into a single UI, providing a smooth experience – as straightforward as using Google Cloud Platform to interface with its many services, which are built on top of its robust infrastructure of data centers. In today’s web3 world, too many teams rely on third-party hosted IPFS pinning services (e.g. Pinata, nft.storage), AWS S3 object storage and its alternatives (Google Cloud Bucket, etc.), and other centralized data storage solutions – compromising the decentralization and liveness needed for permanent apps in favor of ephemeral, unsustainable short-term solutions.
Other teams are already using battle-tested web3-native solutions such as Arweave and Filecoin; however, these protocols lack the unification of a single cloud platform that lets developers use them like they’d use AWS S3. This creates engineering overhead for teams integrating with web3-native solutions, keeping web3 devs in the web2 trap. We’re solving this with the Load Cloud. ### Introducing Load Network Cloud Platform: Going Onchain
As a response to the lack of web3 storage solution abstraction and interoperability with web2 standard interfaces, we have built the Load Cloud: a one-stop solution to use existing data storage standards without compromising the core features of web3 data storage provided by Load Network. [Start using Load Network Cloud Platform today](https://cloud.load.network) File: load-network-cloud-platform/load-s3-protocol.md (2.79 KB) ----------------------------------------------------- --- description: Migrate to a permanent S3-compatible object storage in a single line change --- # Load S3 Endpoint ### About Load.Network provides an S3 implementation which enables developers to store files permanently, in a decentralized manner, using the common AWS S3 patterns with minimal change. ### Installation Load.Network is compatible with the S3 SDKs; because of this, you can use existing libraries. #### NodeJS To install the official S3 library in NodeJS, run the following command:

```shell
$ yarn add @aws-sdk/client-s3
```

**Initialization** In order to initialize the S3 client connected to Load Network, you can do the following:

```typescript
import { S3Client } from "@aws-sdk/client-s3";

const accessKeyId = process.env.LOAD_ACCESS_KEY;
const secretAccessKey = ""; // It's meant to be empty

const s3Client = new S3Client({
  region: "eu-west-2", // Required -- current supported region
  endpoint: "https://s3.load.rs", // Load.Network S3 endpoint
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  forcePathStyle: true, // Required
});
```

* `process.env.LOAD_ACCESS_KEY`: Contains your private service key in [cloud.load.network](https://cloud.load.network). * It looks similar to `load_acc_*******` * `https://s3.load.rs` is the endpoint for the S3 interface provided by Load -- [codebase](https://github.com/weaveVM/wvm-aws-sdk-s3) * `forcePathStyle` set to `true` is _always_ necessary.
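With the client initialized, uploads follow the standard AWS SDK pattern. The sketch below shapes a `PutObjectCommand` input (bucket and key names are placeholders); the `send` call itself is shown as a comment since it requires live credentials:

```typescript
// Input shape for PutObjectCommand from @aws-sdk/client-s3.
// Bucket and key names here are placeholders.
const putParams = {
  Bucket: "my-load-bucket",
  Key: "hello.txt",
  Body: "Hello, Load Network!",
  ContentType: "text/plain",
};

// With the s3Client initialized above:
//   import { PutObjectCommand } from "@aws-sdk/client-s3";
//   await s3Client.send(new PutObjectCommand(putParams));
// The object is then stored permanently through the Load S3 endpoint.

console.log(putParams.Bucket); // my-load-bucket
```

Because the endpoint speaks the S3 wire protocol, the rest of the SDK (GetObjectCommand, ListObjectsV2Command, etc.) works the same way against `https://s3.load.rs`.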
### Rust Examples

```rust
use aws_sdk_s3::error::SdkError;
use aws_sdk_s3::operation::create_bucket::CreateBucketError;
use aws_sdk_s3::Client;

pub async fn create_client() -> Client {
    let config = aws_config::from_env()
        .endpoint_url("https://s3.load.rs")
        .region("eu-west-2")
        .load()
        .await;

    let s3_config = aws_sdk_s3::config::Builder::from(&config)
        .force_path_style(true)
        .build();

    Client::from_conf(s3_config)
}

pub async fn s3_create_bucket() -> Result<(), SdkError<CreateBucketError>> {
    let client = create_client().await;
    match client
        .create_bucket()
        .bucket("LoadNetworkBucketTest")
        .send()
        .await
    {
        Ok(output) => {
            println!("✅ Bucket created: {}", output.location().unwrap_or("(no location)"));
            Ok(())
        }
        Err(err) => {
            println!("❌ Error creating bucket: {}", err);
            Err(err)
        }
    }
}
```

For more examples, check out the [rust-examples](https://github.com/loadnetwork/s3-examples/tree/main/rust-examples). Github repo: [https://github.com/weaveVM/wvm-aws-sdk-s3](https://github.com/weaveVM/wvm-aws-sdk-s3) For more code examples, check out this repository: [https://github.com/loadnetwork/s3-examples](https://github.com/loadnetwork/s3-examples) File: load-network-cloud-platform/load0-data-layer.md (2.50 KB) ----------------------------------------------------- --- description: About Load Network optimistic & high performance data layer --- # load0 data layer `load0` is Bundler's [Large Bundle](https://github.com/weaveVM/bundler?tab=readme-ov-file#large-bundle) on steroids -- a cloud-like experience to upload and download data from [Load Network](https://docs.load.network) using Bundler's `0xbabe2` transaction format, powered by [SuperAccount](https://github.com/weaveVM/bundler?tab=readme-ov-file#superaccount) & S3 under the hood.
{% hint style="info" %} To obtain an API key and unlock higher limits, create one on [cloud.load.network](https://cloud.load.network) {% endhint %} ### Technical Architecture First, the user sends data to the load0 REST API `/upload` endpoint -- the data is pushed to load0's S3 bucket, and the user receives an optimistic hash (a keccak hash) which allows them to instantly retrieve the object data from load0. After being added to the load0 bucket, the object is added to the orchestrator queue, which uploads the optimistically cached objects to Load Network. Using Large Bundle & SuperAccount, the S3 bucket objects get sequentially uploaded to Load Network and are therefore permanently stored, while maintaining very fast uploads and downloads. _Object size limit: 1 byte -> 2GB_.
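Since uploads outside that size window will be rejected, a client can validate object size before calling `/upload`. A minimal TypeScript sketch; this helper is not part of load0, and it assumes "2GB" means 2 GiB:

```typescript
// load0 accepts objects from 1 byte up to 2 GB.
// Assumption: 2 GB is interpreted here as 2 GiB (2 * 1024^3 bytes).
const MAX_OBJECT_BYTES = 2 * 1024 ** 3;

// Client-side pre-check before POSTing to https://load0.network/upload.
function withinLoad0Limit(sizeBytes: number): boolean {
  return sizeBytes >= 1 && sizeBytes <= MAX_OBJECT_BYTES;
}

console.log(withinLoad0Limit(0));                    // false: empty object
console.log(withinLoad0Limit(1024 * 1024));          // true: 1 MiB
console.log(withinLoad0Limit(MAX_OBJECT_BYTES + 1)); // false: over the limit
```

Checking locally avoids a wasted round trip for oversized uploads before the orchestrator queue ever sees the object.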

tx lifecycle

### REST API #### 1- Upload object

```bash
curl -X POST "https://load0.network/upload" \
  --data-binary "@./video.mp4" \
  -H "Content-Type: video/mp4" \
  -H "X-Load-Authorization: $YOUR_LNCP_AUTH_TOKEN"
```

#### 2- Download object (browser)

```bash
GET https://load0.network/download/{optimistic_hash}
```

Also, for endpoint similarity with `bundler.load.rs`, you can do:

```bash
GET https://load0.network/resolve/{optimistic_hash}
```

#### 3- Retrieve Bundle metadata using optimistic hash or bundle txid (once settled)

```bash
GET https://load0.network/bundle/optimistic/{op_hash}
```

```bash
GET https://load0.network/bundle/load/{bundle_txid}
```

Returns:

```rust
pub struct Bundle {
    pub id: u32,
    pub optimistic_hash: String,
    pub bundle_txid: String,
    pub data_size: u32,
    pub is_settled: bool,
    pub content_type: String,
}
```

Object data can be accessed via: * optimistic caching: `https://load0.network/resolve/{Bundle.optimistic_hash}` * from Load Network (once settled): `https://bundler.load.rs/v2/resolve/{Bundle.bundle_txid}` Source code: [https://github.com/loadnetwork/load0/](https://github.com/loadnetwork/load0/) Directory: load-network-exex File: load-network-exex/about-exexes.md (523 B) --------------------------------------- --- description: About Reth Execution Extensions (ExEx) --- # About ExExes ExEx is a framework for building performant and complex off-chain infrastructure as post-execution hooks. Reth ExExes can be used to implement rollups, indexers, MEV bots, and more with >10x less code than existing methods. Check out the Reth ExEx announcement by Paradigm: [https://www.paradigm.xyz/2024/05/reth-exex](https://www.paradigm.xyz/2024/05/reth-exex) In the following pages we will list the ExExes developed and used by Load Network.
File: load-network-exex/exex.rs.md (424 B) ---------------------------------- --- description: An open source directory of Reth ExExes --- # ExEx.rs ### About [ExEx.rs](https://exex.rs) is an open source directory for Reth's ExExes. You can think of it as a "chainlist of ExExes". We believe that curating ExExes will accelerate their development by making examples and templates easily discoverable. [Add your ExEx today!](https://github.com/weaveVM/exex.rs?tab=readme-ov-file#add-an-exex-object) Directory: load-network-exex/load-network-exexes File: load-network-exex/load-network-exexes/README.md (195 B) ----------------------------------------------------- --- description: Explore Load Network developed ExExes --- # Load Network ExExes In the following section you will explore the Execution Extensions developed by our team to power Load Network File: load-network-exex/load-network-exexes/arweave-data-uploader.md (425 B) -------------------------------------------------------------------- --- description: Reth -> Arweave data pipeline --- # Arweave Data Uploader ### About This ExEx is the first data upload pipeline between an Ethereum client (reth) and Arweave, the permanent data storage network. The ExEx uses [AR.IO Turbo Bundler](https://ardrive.io/turbo-bundler/) to bundle data and send it to Arweave. [Get the ExEx code](https://github.com/weaveVM/wvm-reth/tree/main/wvm-apps/wvm-exexed/crates/irys). File: load-network-exex/load-network-exexes/borsh-serializer.md (649 B) --------------------------------------------------------------- --- description: Borsh binary serializer ExEx --- # Borsh Serializer ### About [Borsh](https://github.com/near/borsh) stands for Binary Object Representation Serializer for Hashing and is a binary serializer developed by the [NEAR](https://near.org) team. It is designed for security-critical projects, prioritizing consistency, safety, and speed, and comes with a strict specification.
The ExEx utilizes Borsh to serialize and deserialize block objects, ensuring a bijective mapping between objects and their binary representations.

[Get the ExEx code](https://github.com/weaveVM/wvm-reth/tree/main/wvm-apps/wvm-exexed/crates/wevm-borsh)

File: load-network-exex/load-network-exexes/google-bigquery-etl.md (291 B)
------------------------------------------------------------------

---
description: Load Network GBQ ETL ExEx
---

# Google BigQuery ETL

### About

This ExEx implements an Extract-Transform-Load (ETL) pipeline that loads JSON-serialized blocks into Google BigQuery.

[Get the ExEx code](https://github.com/weaveVM/wvm-reth/tree/main/wvm-apps/wvm-exexed/crates/bigquery)

File: load-network-exex/load-network-exexes/load-network-da-exex.md (551 B)
-------------------------------------------------------------------

---
description: LN-DA plugin ExEx
---

# Load Network DA ExEx

### About

This ExEx introduces a new DA interface for EVM rollups that doesn't require changes to the sequencer or network architecture. It can be added to any Reth client with just 80 lines of code by importing the DA ExEx code into the ExExes directory, making integration simple and seamless.
[Get the code here](https://github.com/weaveVM/wvm-reth/tree/dev/wvm-apps/wvm-exexed/crates/exex-wvm-da) & the [installation setup guide here](../../load-network-for-evm-chains/da-exex-reth-only.md)

File: load-network-exex/load-network-exexes/load-network-weavedrive-exex.md (436 B)
---------------------------------------------------------------------------

---
description: Load Network AO's WeaveDrive ExEx
---

# Load Network WeaveDrive ExEx

Load Network has created the first Reth ExEx that attests data to the AO network following the WeaveDrive data protocol specification — check the [code integration](https://github.com/weaveVM/wvm-reth/blob/main/wvm-apps/wvm-exexed/crates/reth-exexed/src/exex/ar_actor.rs#L299) & learn more about [WeaveDrive (AOP-5)](https://hackmd.io/@ao-docs/H1JK_WezR)

Directory: load-network-for-evm-chains
File: load-network-for-evm-chains/da-exex-reth-only.md (1.61 KB)
------------------------------------------------------

---
description: Plug Load Network high-throughput DA into any Reth node
---

# DA ExEx (Reth-only)

### About

Adding a DA layer usually requires base-level changes to a network's architecture. Typically, DA data is posted either by sending calldata to the L1 or through blobs, with the posting done at the sequencer level or by modifying the rollup node's code.

This ExEx introduces an emerging, non-traditional DA interface for EVM rollups. No changes are required at the sequencer level; it's all handled via the ExEx, which is easy to add to any Reth client in just 80 lines of code.

### Integration Tutorial

First, you'll need to add the following environment variables to your Reth instance:

.env
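The embedded `.env` file is not shown here. As an illustrative sketch only (the values and exact formatting are assumptions; the wvm-archiver setup guide linked below is authoritative), it would look something like:

```bash
# Illustrative .env sketch; values are placeholders, not real defaults
archiver_pk="<private key of a funded LN wallet>"
network="./networks/your-network.json"
```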

The `archiver_pk` variable holds the private key of the LN wallet used to pay gas fees on LN for data posting. The `network` variable points to the path of the network configuration file used by the ExEx. A typical network configuration file looks like this:

network.json
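The embedded `network.json` file is likewise not shown. The field names below are assumptions for illustration only; the real schema is defined in the wvm-archiver repository:

```json
{
  "network_name": "yournetwork",
  "network_chain_id": 1234,
  "network_rpc": "https://rpc.yournetwork.io",
  "start_block": 0
}
```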

For a more detailed setup guide for your network, check out this [guide](https://github.com/weaveVM/wvm-archiver?tab=readme-ov-file#add-your-network). Finally, to implement the Load Network DA ExEx in your Reth client, simply import the DA ExEx code into your ExExes directory and it will work out of the box with your Reth setup. [Get the code here](https://github.com/weaveVM/wvm-reth/tree/dev/wvm-apps/wvm-exexed/crates/exex-wvm-da).

File: load-network-for-evm-chains/deploying-op-stack-rollups.md (2.28 KB)
---------------------------------------------------------------

---
description: Guidance on How To Deploy OP-Stack Rollups on Load Network
---

# Deploying OP-Stack Rollups

### About the OP Stack

The [OP Stack](https://docs.optimism.io/stack/getting-started) is a generalizable framework spawned out of Optimism's efforts to scale the Ethereum L1. It provides the tools for launching a production-quality Optimistic Rollup blockchain with a focus on modularity. Layers like the sequencer, data availability, and execution environment can be swapped out to create novel L2 setups.

The goal of optimistic rollups is to increase L1 transaction throughput while reducing transaction costs. For example, when Optimism users sign a transaction and pay the gas fee in ETH, the transaction is first stored in a private mempool before being executed by the sequencer. The sequencer generates blocks of executed transactions every two seconds and periodically batches them as calldata submitted to Ethereum. The "optimistic" part comes from assuming transactions are valid unless proven otherwise.

In the case of Load Network, we have modified OP Stack components to use LN as the data availability and settlement layer for L2s deployed using this architecture.

### OP Stack Rollups on Load Network

We've built on top of the [Optimism Monorepo](https://github.com/ethereum-optimism/optimism) to enable the deployment of optimistic rollups using LN as the L1.
The key difference between deploying OP rollups on Load Network versus Ethereum is that when you send data batches to LN, your rollup data is also permanently archived on Arweave via [LN's Execution Extensions (ExExes)](../load-network-exex/exex.rs.md). As a result, OP Stack rollups using LN for data settlement and data availability (DA) benefit from the cost-effective, permanent data storage offered by Load Network and Arweave. Rollups deployed on LN use the native network gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum.

**We've released a detailed technical guide on GitHub for developers looking to deploy OP rollups on Load Network. Check it out** [**here**](https://github.com/weaveVM/developers/blob/main/guides/op-rollup-deployment.md) **and the** [**LN's fork of the Optimism Monorepo here**.](https://github.com/weaveVM/optimism/tree/deploy-op-stack-rollup-on-wvm-l1)

File: load-network-for-evm-chains/ledger-archiver-any-chain.md (3.18 KB)
--------------------------------------------------------------

---
description: Connect any EVM network to Load Network
---

# Ledger Archiver (any chain)

### About

Load Network Archiver is an ETL archive pipeline for EVM networks. It's the simplest way to interface with LN's permanent data feature without smart contract redeployments.
### Load Network Archiver Usage

LN Archiver is the ideal choice if you want to:

* Interface with LN's permanent data settlement and high-throughput DA
* Maintain your current data settlement or DA architecture
* Have an interface with LN without rollup smart contract redeployments
* Avoid codebase refactoring

### Run An Instance

To run your own node instance of the `load-archiver` tool, check out the detailed setup guide on GitHub: [https://github.com/WeaveVM/wvm-archiver](https://github.com/WeaveVM/wvm-archiver)

### Networks Using LN Archiver

| Network | Archiver Repo | Archiver Endpoint |
| --- | --- | --- |
| [Metis](https://metis.io) | [https://github.com/WeaveVM/wvm-archiver](https://github.com/WeaveVM/wvm-archiver) | [https://metis.load.rs/v1/info](https://metis.load.rs/v1/info) |
| [RSS3](https://rss3.io) | [https://github.com/WeaveVM/rss3-wvm-archiver](https://github.com/WeaveVM/rss3-wvm-archiver) | [https://rss3.load.rs/v1/info](https://rss3.load.rs/v1/info) |
| [GOAT Network](https://goat.network) | [https://github.com/WeaveVM/goat-wvm-archiver](https://github.com/WeaveVM/goat-wvm-archiver) | [https://goat.load.rs/v1/info](https://goat.load.rs/v1/info) |
| [Avalanche c-chain](https://subnets.avax.network/c-chain) | [https://github.com/WeaveVM/avalanche-wvm-archiver](https://github.com/WeaveVM/avalanche-wvm-archiver) | [https://avalanche.load.rs/v1/info](https://avalanche.load.rs/v1/info) |
| [Dymension L1 Hub](https://dymension.xyz) | [https://github.com/WeaveVM/dymension-wvm-archiver](https://github.com/WeaveVM/dymension-wvm-archiver) | [https://dymension.load.rs/v1/info](https://dymension.load.rs/v1/info) |
| [Humanode EVM](https://humanode.io/) | [https://github.com/weaveVM/humanode-wvm-archiver](https://github.com/weaveVM/humanode-wvm-archiver) | [https://humanode.load.rs/v1/info](https://humanode.load.rs/v1/info) |
| [Scroll Mainnet](https://scroll.io/) | [https://github.com/weaveVM/scroll-wvm-archiver](https://github.com/weaveVM/scroll-wvm-archiver) | [https://scroll.load.rs/v1/info](https://scroll.load.rs/v1/info) |
| [phala-mainnet-0](https://hub.conduit.xyz/phala-mainnet-0) | [https://github.com/weaveVM/phala-wvm-archiver](https://github.com/weaveVM/phala-wvm-archiver) | [https://phala.load.rs/v1/info](https://phala.load.rs/v1/info) |

File: load-network-for-evm-chains/ledger-archivers-state-reconstruction.md (4.13 KB)
--------------------------------------------------------------------------

---
description: Reconstructing an EVM network using its load-archiver node instance
---

# Ledger Archivers: State Reconstruction

### **Understanding the World State Trie**

The World State Trie, also known as the Global State Trie, serves as a cornerstone data structure in Ethereum and other EVM networks. Think of it as a dynamic snapshot that captures the current state of the entire network at any given moment. This sophisticated structure maintains a crucial mapping between account addresses (both externally owned accounts and smart contracts) and their corresponding states.

Each account state in the World State Trie contains several essential pieces of information:

* Current balance of the account
* Transaction nonce (tracking the number of transactions sent from this account)
* Smart contract code (for contract accounts)
* Hash of the associated storage trie (linking to the account's persistent storage)

This structure effectively represents the current status of all assets and relevant information on the EVM network. Each new block contains a reference to the current global state, enabling network nodes to efficiently verify information and validate transactions.
![EVM Tries](https://gateway.wvm.network/bundle/0x675c3ee485cdc7e8a87b7cf3b109eb0b7558785855f503b42fc7c9ac46093cbb/0)

#### **The Dynamic Nature of State Management**

An important distinction exists between the World State Trie database and the Account Storage Trie database. While the World State Trie database maintains immutability and reflects the network's global state, the Account Storage Trie database remains mutable with each block. This mutability is necessary because transaction execution within each block can modify the values stored in accounts, reflecting changes in account states as the blockchain progresses.

### **Reconstructing the World State with Load Network Archivers**

The core focus of this article is demonstrating how Load Network Archivers' data lakes can be leveraged to reconstruct an EVM network's World State. We've developed a proof-of-concept library in Rust that showcases this capability using a customized Revm wrapper. This library abstracts the complexity of state reconstruction into a simple interface that requires just 10 lines of code to implement.

Here's how to reconstruct a network's state using our library:

```rust
use evm_state_reconstructing::utils::core::evm_exec::StateReconstructor;
use evm_state_reconstructing::utils::core::networks::Networks;
use evm_state_reconstructing::utils::core::reconstruct::reconstruct_network;
use anyhow::Error;

async fn reconstruct_state() -> Result<StateReconstructor, Error> {
    let network: Networks = Networks::metis();
    let state: StateReconstructor = reconstruct_network(network).await?;
    Ok(state)
}
```

The reconstruction process follows a straightforward workflow:

1. The library connects to the specified Load Network Archiver network
2. Historical ledger data is retrieved from the Load Network Archiver data lakes
3. Retrieved blocks are processed through our custom minimal EVM execution machine
4. The EVM StateManager applies the blocks sequentially, updating the state accordingly
5.
The final result is a complete reconstruction of the network's World State.

This proof-of-concept implementation is available on GitHub: [https://github.com/weaveVM/evm-state-reconstructing](https://github.com/weaveVM/evm-state-reconstructing)

![LN State Reconstruction Flow](https://gateway.wvm.network/bundle/0x675c3ee485cdc7e8a87b7cf3b109eb0b7558785855f503b42fc7c9ac46093cbb/1)

[Load Network Archivers](ledger-archiver-any-chain.md) has evolved beyond its foundation as a decentralized archive node. This proof of concept demonstrates how our comprehensive data storage enables full EVM network state reconstruction - a capability that opens new possibilities for network analysis, debugging, and state verification.

We built this PoC to showcase what's possible when you combine permanent storage with proper EVM state handling. Whether you're analyzing historical network states, debugging complex transactions, or building new tools for chain analysis, the groundwork is now laid.

File: quickstart.md (5.86 KB)
-------------------

---
description: Get set up with the onchain data center
icon: bolt
---

# Quickstart

{% hint style="info" %}
To easily feed Load Network docs to your favourite LLM, access the compressed knowledge (aka LLM.txt) file from Load Network: [https://gateway.load.rs/bundle/0x5eef8d0f9a71bbee9a566430e6b093f916900b7d6d91d34e5641768db4ee3ef7/0](https://gateway.load.rs/bundle/0x5eef8d0f9a71bbee9a566430e6b093f916900b7d6d91d34e5641768db4ee3ef7/0)
{% endhint %}

Let's make it easy to get going with Load Network. In this doc, we'll go through the simplest ways to use Load across the most common use cases:

* [Uploading data](quickstart.md#upload-data)
* [Integrating ledger storage](quickstart.md#integrating-ledger-storage)
* [Using Load DA](quickstart.md#using-load-da)
* [Migrate from another storage layer](quickstart.md#migrate-from-another-storage-layer)

### Upload data

The easiest way to upload data to Load Network is to use a bundling service.
Bundling services cover upload costs on your behalf, and feel just like using a web2 API. The recommended testnet bundling service endpoints are:

* [upload.onchain.rs](https://upload.onchain.rs) (upload)
* [resolver.bot](https://resolver.bot) (retrieve)

Instantiate an uploader in the [bundler-upload-sdk](https://github.com/weaveVM/bundler-upload-sdk) using this endpoint and the public testnet API key:

```bash
API_KEY=d025e132382aea412f4256049c13d0e92d5c64095d1c88e1f5de7652966b69af
```

{% hint style="warning" %}
Limits are in place for the public testnet bundler. For production use at scale, we recommend running your own bundling service as explained [here](https://github.com/weaveVM/bundler), or [get in touch](https://calendly.com/decentlandlabs/founders-chat)
{% endhint %}

#### Full upload example

```javascript
import { BundlerSDK } from 'bundler-upload-sdk';
import { readFile } from 'fs/promises';
import 'dotenv/config';

const bundler = new BundlerSDK('https://upload.onchain.rs/', process.env.API_KEY);

async function main() {
  try {
    const fileBuffer = await readFile('files/hearts.gif');
    const txHash = await bundler.upload([
      {
        file: fileBuffer,
        tags: {
          'content-type': 'image/gif',
        }
      }
    ]);
    console.log(`https://resolver.bot/bundle/${txHash}/0`);
  } catch (error) {
    console.error('Upload failed:', error.message);
    process.exit(1);
  }
}

main().catch(error => {
  console.error('Unhandled error:', error);
  process.exit(1);
});
```

...Or [clone this example repo](https://github.com/weaveVM/bundler-upload-example) to avoid copy-pasting.

#### Need to upload a huge amount of data?

The above example demonstrates posting data in a single Load Network base layer tx. This is limited by Load's blocksize, so it tops out at about 8 MB. For practically unlimited upload sizes, you can use the large bundles spec to submit data in chunks. Chunks can even be uploaded in parallel, making large bundles a performant way to handle big uploads.
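To make the chunking idea concrete, here is a minimal sketch of splitting a payload into chunks for parallel upload. The 4 MB chunk size is an assumption borrowed from the per-chunker figure quoted elsewhere in these docs; the Bundler SDK defines the real value:

```javascript
// Sketch: split a large payload into ~4 MiB chunks for parallel upload.
// CHUNK_SIZE is an assumption; the Bundler SDK defines the real value.
const CHUNK_SIZE = 4 * 1024 * 1024;

function chunkBuffer(buffer, chunkSize = CHUNK_SIZE) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Example: a 10 MiB payload becomes three chunks (4 MiB + 4 MiB + 2 MiB).
const payload = Buffer.alloc(10 * 1024 * 1024);
const chunks = chunkBuffer(payload);
console.log(chunks.length); // 3
```

Each chunk can then be handed to a separate upload call; the SDK handles the actual propagation and finalization.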
The [Rust Bundler SDK](https://github.com/weaveVM/bundler?tab=readme-ov-file#0xbabe2-large-bundle) makes it possible for developers to spin up their own bundling services with support for large bundles.

### Integrating ledger storage

Chains like Avalanche, Metis and RSS3 use Load Network as a decentralized archive node. This works by feeding all new and historical blocks to an archiving service you can run yourself, pointed to your network's RPC.

[Clone the archiver repo here](https://github.com/WeaveVM/wvm-archiver)

As well as storing all real-time and historical data, Load Network can be used to reconstruct full chain state, effectively replicating exactly what archive nodes do, but with a decentralized storage layer underneath. Read [here](https://blog.load.network/state-reconstruction/) to learn how.

### Using Load DA

With 125mb/s data throughput and long-term data guarantees, Load Network can handle DA for every known L2, with 99.8% room to spare.

Right now there are 4 ways you can integrate Load Network for DA:

1. [As a blob storage layer for EigenDA](da-integrations/ln-eigenda-proxy-server.md)
2. [As a DA layer for Dymension RollApps](da-integrations/ln-dymension-da-client-for-rollap.md)
3. [As an OP-Stack rollup](load-network-for-evm-chains/deploying-op-stack-rollups.md)
4. DIY

DIY docs are a work in progress, but the [commit](https://github.com/dymensionxyz/dymint/commit/0140460c75bce6dc1cdcaf15527792734a0f7501) to add support for Load Network in Dymension can be used as a guide to implement Load DA elsewhere.

{% hint style="info" %}
Work with us to use Load DA for your chain - get onboarded [here](https://calendly.com/decentlandlabs/founders-chat).
{% endhint %}

### Migrate from another storage layer

If your data is already on another storage layer like IPFS, Filecoin, Swarm or AWS S3, you can use specialized importer tools to migrate.
#### AWS S3

The [Load S3 SDK](https://github.com/weaveVM/wvm-aws-sdk-s3) provides a 1:1 compatible development interface for applications using AWS S3 for storage, keeping method names and parameters intact so the only change should be one line: the `import`.

#### Filecoin / IPFS

The load-lassie import tool is the recommended way to easily migrate data stored via Filecoin or IPFS. Just provide the CID you want to import to the API, e.g.: `https://lassie.load.rs/import/`

The importer is also self-hostable and further documented [here](https://github.com/weaveVM/wvm-lassie).

#### Swarm

Switching from Swarm to Load is as simple as changing the gateway you already use to resolve content from Swarm.

* before: [https://api.gateway.ethswarm.org/bzz/](https://api.gateway.ethswarm.org/bzz/)
* after: [https://swarm.load.rs/bzz/](https://swarm.wvm.network/bzz/)

The first time Load's Swarm gateway sees a new hash, it uploads it to Load Network and serves it directly for subsequent calls. This effectively makes your Swarm data permanent on Load while maintaining the same hash.

File: riscv-em-1.0-device.md (1.05 KB)
----------------------------

---
description: The RISC-V Execution Machine device
---

# \~riscv-em@1.0 device

{% hint style="danger" %}
This device is in a very Proof Of Concept stage
{% endhint %}

## About

We have developed a custom fork of [R55](https://github.com/loadnetwork/r55) (an Ethereum Execution Environment that seamlessly integrates RISC-V smart contracts alongside traditional EVM smart contracts) to [handle signed raw transaction](https://github.com/loadnetwork/r55/blob/main/r55/src/exec.rs#L27) input and return the resulting computed EVM db.
After getting R55 to work with the OOTB interpretation of signed raw transactions, we built on top of it a hyperbeam device offering RISC-V compatible Ethereum appchains.\
For example, this erc20.rs Rust smart contract was deployed on a hb risc-v appchain: [github.com/loadnetwork/r55](https://github.com/loadnetwork/r55/blob/main/examples/erc20/src/lib.rs)

RISC-V custom device source code: [https://github.com/loadnetwork/load\_hb/tree/main/native/riscv\_em\_nif](https://github.com/loadnetwork/load_hb/tree/main/native/riscv_em_nif)

Directory: using-load-network
File: using-load-network/0xbabe2-large-data-uploads.md (9.28 KB)
------------------------------------------------------

---
description: >-
  Using Load Network's 0xbabe2 transaction format for large data uploads - the
  largest EVM transaction in history
---

# 0xbabe2: Large Data Uploads

### About 0xbabe2 Transaction Format

0xbabe2 is the newest data transaction format from the Bundler data protocol. Also called a "Large Bundle," it's a bundle under version `0xbabe2` (address: [0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84](https://explorer.wvm.dev/address/0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84)) that exceeds the Load Network L1 and `0xbabe1` transaction input size limits, introducing incredibly high size efficiency to data storage on Load Network.

For example, with Alphanet v0.4.0 metrics running at 500 mgas/s, a Large Bundle has a max size of 246 GB. However, to ensure a smooth DevX and an optimal finalization period (aka "safe mode"), we have limited the 0xbabe2 transaction input limit to 2 GB at the [Bundler SDK](load-network-bundler.md) level. If you want higher limits, you can achieve this by changing a simple constant!
{% hint style="success" %}
If you have 10 hours to spare, make several teas and watch this 1 GB video streamed to you onchain from the Load Network!\
\
0xbabe2 txid: [https://bundler.load.rs/v2/resolve/0x45cfaff6c3a507b1b1e88ef502ce32f93e7f515d9580ea66c340dc69e9d47608](https://bundler.load.rs/v2/resolve/0x45cfaff6c3a507b1b1e88ef502ce32f93e7f515d9580ea66c340dc69e9d47608)
{% endhint %}

### Architecture design TLDR

In simple terms, a Large Bundle consists of `n` smaller chunks (standalone bundles) that are sequentially connected tail-to-head. The Large Bundle itself is then a reference to all of these sequentially related chunks, packing all of the chunk IDs into a single 0xbabe2 bundle and sending it to Load Network. To dive deeper into the architecture design behind 0xbabe2 and how it works, check out the 0xbabe2 section in the [Bundler documentation](https://github.com/weaveVM/bundler?tab=readme-ov-file#architecture-design).

{% hint style="info" %}
With the upcoming Load Network release (Alphanet v0.5.0) reaching 1 gigagas/s, the 0xbabe2 data size limit will double to 492 GB, almost a 0.5 TB EVM transaction.
{% endhint %}

### 0xbabe2 Broadcasting

Broadcasting a 0xbabe2 transaction to Load Network can be done via the Bundler Rust SDK in two ways: normal 0xbabe2 broadcasting (single-wallet, single-threaded) or the multi-wallet, multi-threaded method (using SuperAccount).

#### **Single-Threaded Broadcasting**

Uploading data via the single-threaded method is efficient when the data isn't very large; otherwise, it incurs very high latency to finish all data chunking and then bundle finalization:

```rust
use bundler::utils::core::large_bundle::LargeBundle;
use anyhow::Error;

async fn send_large_bundle_single_thread() -> Result<String, Error> {
    let private_key = String::from("");
    let content_type = "text/plain".to_string();
    let data = "~UwU~".repeat(4_000_000).as_bytes().to_vec();

    let large_bundle_hash = LargeBundle::new()
        .data(data)
        .private_key(private_key)
        .content_type(content_type)
        .chunk()
        .build()?
        .propagate_chunks()
        .await?
        .finalize()
        .await?;

    Ok(large_bundle_hash)
}
```

**Multi-Threaded Broadcasting**

Multi-threaded 0xbabe2 broadcasting uses a multi-wallet architecture that ensures parallel chunk settlement on Load Network, maximizing the usage of the network's data throughput. To broadcast a bundle using the multi-threaded method, you need to instantiate a `SuperAccount` and fund the chunkers:

```rust
use bundler::utils::core::super_account::SuperAccount;

// init SuperAccount instance
let super_account = SuperAccount::new()
    .keystore_path(".bundler_keystores".to_string())
    .pwd("weak-password".to_string()) // keystore pwd
    .funder("private-key".to_string()) // the pk that will fund the chunkers
    .build();

// create chunkers
let _chunkers = super_account.create_chunkers(Some(256)).await.unwrap(); // Some(amount) of chunkers

// fund chunkers (1 tWVM each)
let _fund = super_account.fund_chunkers().await.unwrap(); // will fund each chunker by 1 tWVM

// retrieve chunkers
let loaded_chunkers = super_account.load_chunkers(None).await.unwrap(); // None to load all chunkers
```

A SuperAccount is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading a Large Bundle's chunks to LN by allocating each chunk to a chunker (\~4MB per chunker), moving from a single-wallet single-threaded design in data uploads to a multi-wallet multi-threaded design.
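Given the roughly 4 MB-per-chunker allocation and 1 tWVM funding per chunker described above, you can estimate how large a SuperAccount you need for a given payload. This is illustrative arithmetic only, not an SDK API:

```javascript
// Estimate chunkers (and tWVM funding) for a payload, assuming ~4 MiB
// per chunker and 1 tWVM per chunker, per the figures quoted above.
const BYTES_PER_CHUNKER = 4 * 1024 * 1024;

function estimateChunkers(dataSizeBytes) {
  const chunkers = Math.ceil(dataSizeBytes / BYTES_PER_CHUNKER);
  return { chunkers, fundingTWVM: chunkers };
}

// A 1 GiB payload maps to 256 chunkers, matching create_chunkers(Some(256)) above.
console.log(estimateChunkers(1024 * 1024 * 1024)); // { chunkers: 256, fundingTWVM: 256 }
```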
```rust
use bundler::utils::core::large_bundle::LargeBundle;
use bundler::utils::core::super_account::SuperAccount;
use anyhow::Error;

async fn send_large_bundle_multi_thread() -> Result<LargeBundle, Error> {
    // will fail until a tLOAD-funded EOA (pk) is provided; take care with the nonce
    // if the same wallet is used as in test_send_bundle_with_target
    let private_key = String::from("6f142508b4eea641e33cb2a0161221105086a84584c74245ca463a49effea30b");
    let content_type = "text/plain".to_string();
    let data = "~UwU~".repeat(8_000_000).as_bytes().to_vec();

    let super_account = SuperAccount::new()
        .keystore_path(".bundler_keystores".to_string())
        .pwd("test".to_string());

    let large_bundle = LargeBundle::new()
        .data(data)
        .private_key(private_key)
        .content_type(content_type)
        .super_account(super_account)
        .chunk()
        .build()
        .unwrap()
        .super_propagate_chunks()
        .await
        .unwrap()
        .finalize()
        .await
        .unwrap();

    println!("{:?}", large_bundle);

    Ok(large_bundle)
}
```

#### 0xbabe2 Data Retrieval

0xbabe2 transaction data retrieval can be done using either the Rust SDK or the REST API. Using the REST API to resolve a Large Bundle (chunk reconstruction until reaching the final data) is faster for user-facing usage as it streams chunks, resulting in near-instant data usability (e.g., rendering in the browser).\
**Rust SDK**

```rust
async fn retrieve_large_bundle() -> Result<Vec<u8>, Error> {
    let large_bundle = LargeBundle::retrieve_chunks_receipts(
        "0xb58684c24828f8a80205345897afa7aba478c23005e128e4cda037de6b9ca6fd".to_string(),
    )
    .await?
.reconstruct_large_bundle() .await?; Ok(large_bundle) } ``` **REST API** ```bash curl -X GET https://bundler.load.rs/v2/resolve/$0xBABE2_TXID ``` ### What you can fit in a 492GB 0xbabe2 transaction #### Modern LLMs | Model | What Can Fit in one 0xbabe2 transaction | | -------------------------------- | -------------------------------------------- | | Claude 3 Haiku (70B params) | 3.51 models (16-bit) or 14.06 models (4-bit) | | Claude 3 Sonnet (175B params) | 1.41 models (16-bit) or 5.62 models (4-bit) | | Claude 3 Opus (350B params) | 0.70 models (16-bit) or 2.81 models (4-bit) | | Claude 3.5 Sonnet (250B params) | 0.98 models (16-bit) or 3.94 models (4-bit) | | Claude 3.7 Sonnet (300B params) | 0.82 models (16-bit) or 3.28 models (4-bit) | | GPT-4o (1500B params est.) | 0.16 models (16-bit) or 0.66 models (4-bit) | | GPT-4 Turbo (1100B params est.) | 0.22 models (16-bit) or 0.89 models (4-bit) | | Llama 3 70B | 3.51 models (16-bit) or 14.06 models (4-bit) | | Llama 3 405B | 0.61 models (16-bit) or 2.43 models (4-bit) | | Gemini Pro (220B params est.) | 1.12 models (16-bit) or 4.47 models (4-bit) | | Gemini Ultra (750B params est.) | 0.33 models (16-bit) or 1.31 models (4-bit) | | Mistral Large (123B params est.) 
| 2.00 models (16-bit) or 8.00 models (4-bit) | #### Blockchain Data | Data Type | What Can Fit in one 0xbabe2 transaction | | -------------------------------------------- | --------------------------------------- | | Solana's State Snapshot (\~70GB) | \~7 instances | | Bitcoin Full Ledger (\~625 GB) | \~78% of the ledger | | Ethereum Full Ledger (\~1250 GB) | \~40% of the ledger | | Ethereum blobs (\~2.64 GB per day) | \~186 days worth of blob data | | Celestia's max throughput per day (112.5 GB) | 4.37× capacity | #### Media Files | File Type | What Can Fit in one 0xbabe2 transaction | | ---------------------------------- | --------------------------------------- | | MP3 Songs (4MB each) | 123,000 songs | | Full HD Movies (5GB each) | 98 movies | | 4K Video Footage (2GB per hour) | 246 hours | | High-Resolution Photos (3MB each) | 164,000 photos | | Ebooks (5MB each) | 100,000 books | | Documents/Presentations (1MB each) | 492,000 files | #### Other Data | Data Type | What Can Fit in one 0xbabe2 transaction | | ------------------------------------ | --------------------------------------- | | Database Records (5KB per record) | 98 billion records | | Virtual Machine Images (8GB each) | 61 VMs | | Docker container images (500MB each) | 1,007 containers | | Genome sequences (4GB each) | 123 genomes | Directory: using-load-network/code-and-integrations-examples File: using-load-network/code-and-integrations-examples/README.md (74 B) ----------------------------------------------------------------- --- description: Basic code examples --- # Code & Integrations Examples File: using-load-network/code-and-integrations-examples/deploying-an-erc20-token.md (1.28 KB) ----------------------------------------------------------------------------------- --- description: Deploy an ERC20 token on Load Network --- # Deploying an ERC20 Token ### **Add Load Network Alphanet to MetaMask** Before deploying, make sure the Load Network network is configured in your MetaMask wallet. 
[Check the Network Configurations](../network-configurations.md).

### ERC20 Contract

For this example, we will use the ERC20 token template provided by [OpenZeppelin's](https://docs.openzeppelin.com/contracts/4.x/erc20) smart contract library.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

/// @title Useless Testing Token
/// @notice Just a testing shitcoin
/// @dev SupLoad gmgm
/// @author pepe frog
contract WeaveGM is ERC20 {
    constructor(uint256 initialSupply) ERC20("supLoad", "LOAD") {
        _mint(msg.sender, initialSupply);
    }
}
```

### Deployment

Now that you have your contract source code ready, compile the contract and hit deploy with an initial supply.
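Note that `initialSupply` is denominated in the token's smallest unit; OpenZeppelin's `ERC20` defaults to 18 decimals. So minting 69,420 whole tokens means passing the scaled value, which can be computed like this:

```javascript
// ERC20 amounts are integers in the smallest unit (18 decimals by default
// for OpenZeppelin's ERC20), so scale whole tokens by 10^18.
const DECIMALS = 18n;
const wholeTokens = 69420n;
const initialSupply = wholeTokens * 10n ** DECIMALS;

console.log(initialSupply.toString()); // "69420000000000000000000"
```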

69420 LOADs because why not

After deploying the contract successfully, check your EOA balance!

Success!

File: using-load-network/code-and-integrations-examples/ethers-etherjs.md (637 B)
-------------------------------------------------------------------------

---
description: Use Load Network with ethers.js
---

# ethers (ethers.js)

In this example we will use the [ethers npm package](https://docs.ethers.org/). First of all, install the package:

```bash
npm i ethers
```

### Code Example: Retrieve Address Balance

```javascript
import { ethers } from "ethers";

// ethers v6 API; on ethers v5 use ethers.providers.JsonRpcProvider
// and ethers.utils.formatEther instead
const provider = new ethers.JsonRpcProvider("https://alphanet.load.network");
const address = "0x544836c1d127B0d5ed6586EAb297947dE7e38a78";

async function getBalance() {
  const balance = await provider.getBalance(address);
  console.log(`Balance: ${ethers.formatEther(balance)} tLOAD`);
}

getBalance();
```

File: using-load-network/compatibility-and-performance.md (685 B)
---------------------------------------------------------

---
description: Load Network Compatibility with the standards
---

# Compatibility & Performance

### EVM compatibility

Load Network EVM is built on top of Reth, making it compatible as a network with existing EVM-based applications. This means you can run your current Ethereum-based projects on LN without significant modifications, leveraging the full potential of the EVM ecosystem.
Load Network EVM doesn't introduce new opcodes or breaking changes to the EVM itself, but it uses ExExes and adds custom precompiles:

#### Alphanet V0.5.3

* **gas per non-zero byte:** 8
* **gas limit:** 500\_000\_000
* **block time:** 1s
* **gas/s:** 500 mgas/s
* **data throughput:** \~62 MBps

File: using-load-network/ln-native-json-rpc-methods.md (900 B)
------------------------------------------------------

---
description: About Load Network Native JSON-RPC methods
---

# LN-Native JSON-RPC Methods

### The `eth_getArweaveStorageProof` JSON-RPC method

This JSON-RPC method lets you retrieve the Arweave storage proof for a given Load Network block number:

```bash
curl -X POST https://alphanet.load.network \
  -H "Content-Type: application/json" \
  --data '{
    "jsonrpc":"2.0",
    "method":"eth_getArweaveStorageProof",
    "params":["8038800"],
    "id":1
  }'
```

### The `eth_getWvmTransactionByTag` JSON-RPC method

For Load Network L1 tagged transactions, the `eth_getWvmTransactionByTag` method lets you retrieve a transaction hash for a given name-value tag pair.

```bash
curl https://alphanet.load.network \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getWvmTransactionByTag",
    "params": [{ "tag": ["name", "value"] }]
  }'
```

File: using-load-network/load-data-protocol.md (2.70 KB)
----------------------------------------------

---
description: About load:// data retrieving protocol
---

# load:// Data Protocol

### About load://

Load Network Data Retriever (load://) is a protocol for retrieving data from the Load Network (EVM). It leverages the LN DA layer and Arweave's permanent storage to provide trustless access to LN transaction data through both networks, whether that's data which came from LN itself, or L2 data that was settled to LN.

Many chains solve this problem by providing query interfaces to archival nodes or centralized indexers. For Load Network, Arweave _is_ the archival node, and can be queried without special tooling.
However, the data LN stores on Arweave is also encoded, serialized and compressed, making it cumbersome to access. The load:// protocol solves this problem by providing an out-of-the-box way to grab and decode Load Network data while also checking it has been DA-verified. ### How it works The data retrieval pipeline ensures that when you request data associated with a Load Network transaction, it passes through at least one DA check (currently through LN's self-DA). It then retrieves the transaction block from Arweave, published by LN ExExes, decodes the block (decompresses Brotli and deserializes Borsh), and scans the archived sealed block transactions within LN to locate the requested transaction ID, ultimately returning the calldata (input) associated with it.

*Figure: load:// retrieval workflow*

### Try it out Currently, the load:// gateway server provides two methods: one for general data retrieval and another specifically for transaction data posted by the load-archiver nodes. To retrieve calldata for any transaction on Load Network, you can use the following command: ```bash curl -X GET https://gateway.load.network/calldata/$LN_TXID ``` The second method is specific to `load-archiver` nodes because it decompresses the calldata and then deserializes its Borsh encoding according to a predefined structure. This is possible because the data encoding of load-archiver data is known to include an additional layer of Borsh-Brotli encoding before the data is settled on LN. ```bash curl -X GET https://gateway.load.network/war-calldata/$LN_TXID ``` ### Benchmarks #### Latency for /calldata The latency includes the time spent fetching data from LN EVM RPC and the Arweave gateway, as well as the processing time for Brotli decompression, Borsh deserialization, and data validity verification.

*Figure: /calldata endpoint benchmark*

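For app integration, the curl calls above translate directly into a small client helper. A minimal sketch, assuming the gateway base URL and endpoint path documented on this page; the `calldataUrl` helper name is ours:

```javascript
// Minimal load:// gateway client sketch (helper name is ours; the
// /calldata/:txid endpoint path is from the gateway docs above).
const GATEWAY = "https://gateway.load.network";

function calldataUrl(txid, base = GATEWAY) {
  // LN transaction IDs are 32-byte hashes: "0x" + 64 hex chars
  if (!/^0x[0-9a-fA-F]{64}$/.test(txid)) {
    throw new Error(`invalid LN txid: ${txid}`);
  }
  return `${base}/calldata/${txid}`;
}

// Usage (network call, not run here):
// const res = await fetch(calldataUrl(txid));
// const calldata = await res.text();
```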
#### Check out the load:// data protocol [here](https://github.com/weavevM/wvm-data-retriever)

File: using-load-network/load-network-bundler-gateways.md (3.84 KB)
---------------------------------------------------------

---
description: >-
  The Load Network Gateway Stack: Fast, Reliable Access to Load Network Data
  (to be decentralized with LOAD1)
---

# Load Network Bundler Gateways

All storage chains have the same issue: even if the data storage is decentralized, retrieval is handled by a centralized gateway. A solution to this problem is to provide a way for anyone to easily run their own gateway – and if you're an application building on Load Network, that's a great way to ensure content is rapidly retrievable from the blockchain. When [relic.bot](https://relic.bot) – a photo sharing dApp that uses LN bundles for storage – started getting traction, the default LN gateway became a bottleneck for the Relic team. The way data is stored inside bundles (hex-encoded, serialized, compressed) can make it resource-intensive to decode and present media on demand, especially when thousands of users are doing so in parallel. In response, we developed two new open source gateways: one [JavaScript-based cache-enabled gateway](https://github.com/weaveVM/resolver.bot), and [one written in Rust](https://github.com/weaveVM/rusty-gateway/tree/main).

The LN Gateway Stack introduces a powerful new way to access data from Load Network bundles, combining high performance with network resilience. At its core, it's designed to make bundle data instantly accessible while contributing to the overall health and decentralization of the network.

### **Why we built the Load Network gateway stack**

The gateway stack solves several critical needs in the LN ecosystem:

**Rapid data retrieval**

Through local caching with SQLite, the gateway dramatically reduces load times (4-5x) for frequently accessed bundled data.
No more waiting for remote data fetches – popular content is served instantly from the gateway node. For [relic.bot](http://relic.bot/), this slashed feed loading times from 6-8 seconds to near-instant. **Network health** By making it easy to run your own gateway, the stack promotes a more decentralized network. Each gateway instance contributes to network redundancy, ensuring data remains accessible even if some nodes go offline. ### **Running a Load Network gateway** Running your own LN gateway is pretty straightforward. The gateway stack is designed for easy deployment, directly to your server or inside a Docker container. With Docker, you can have a gateway up and running in minutes: ```bash git clone https://github.com/weavevm/bundles-gateway.git cd bundles-gateway docker compose up -d ``` For rustaceans, rusty-gateway is deployable on a Rust host like [shuttle.dev](http://shuttle.dev/) – get the repo [here](https://github.com/WeaveVM/rusty-gateway) and Shuttle deployment docs [here](https://docs.shuttle.dev/introduction/docs). ### **The technical side** Under the hood, the gateway stack features: * SQLite-backed persistent cache * Content-aware caching with automatic MIME type detection * Configurable cache sizes and retention policies * Application-specific cache management * Automatic cache cleanup based on age and size limits * Health monitoring and statistics The gateway exposes a simple API for accessing bundle data: `GET /bundle/:txHash/:index` This endpoint handles the job of data retrieval, caching, and content-type detection behind the scenes. ### **Towards scalability & decentralization** The Load Network gateway stack was built in response to problems of scale – great problems to have as a new network gaining traction. LN bundle data is now more accessible, resilient and performant. By running a gateway, you’re not just improving your own access to LN data – you’re contributing to a more robust, decentralized network. 
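Because anyone can run a gateway, clients can also fail over between several instances instead of depending on one. A sketch of that fallback logic, using the `GET /bundle/:txHash/:index` endpoint documented above; the gateway hostnames and the helper itself are illustrative, and the fetch function is injected so it can be swapped or faked:

```javascript
// Sketch: try several gateways in order until one serves the bundle.
// Endpoint shape (GET /bundle/:txHash/:index) is from the gateway API above;
// the fetchFn parameter lets callers inject their own HTTP client.
async function fetchFromBundleGateways(txHash, index, gateways, fetchFn = fetch) {
  for (const base of gateways) {
    try {
      const res = await fetchFn(`${base}/bundle/${txHash}/${index}`);
      if (res.ok) return res; // first healthy gateway wins
    } catch {
      // network error: fall through to the next gateway
    }
  }
  throw new Error("all gateways failed");
}
```

Running your own gateway and putting it first in the list gives you local-cache speed with public gateways as a safety net.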
Test the gateways:

* [gateway.wvm.rs](http://gateway.wvm.rs/) - [gateway.load.rs](https://gateway.load.rs)
* [gateway.wvm.network](https://gateway.wvm.network)
* [resolver.bot](http://resolver.bot/)

File: using-load-network/load-network-bundler.md (24.80 KB)
------------------------------------------------

---
description: >-
  The LN Bundler is the fastest, cheapest and most scalable way to store EVM
  data onchain
---

# Load Network Bundler

### :zap: Quickstart

To upload data to Load Network with the alphanet bundling service, see [here](https://docs.load.network/quickstart#upload-data) in the quickstart docs for the [upload SDK](https://github.com/weaveVM/bundler-upload-sdk) and [example repository](https://github.com/weaveVM/bundler-upload-example).

### About

Load Network Bundler is a data protocol specification and library that introduces the first bundled EVM transactions format. This protocol draws inspiration from Arweave's [ANS-102](https://github.com/ArweaveTeam/arweave-standards/blob/master/ans/ANS-102.md) specification.

_**Bundler as data protocol and library is still in PoC (Proof of Concept) phase - not recommended for production usage, testing purposes only.**_

For the JS/TS version of LN bundles, [click here](https://github.com/weavevm/weavevm-bundles-js).

#### Advantages of Load Network bundled transactions

* Reduces transaction overhead fees from multiple fees (`n`) per `n` transactions to a single fee per bundle of envelopes (`n` transactions)
* Enables third-party services to handle bundle settlement on LN (will be decentralized with LOAD1)
* Maximizes the TPS capacity of LN without requiring additional protocol changes or constraints
* Supports relational data grouping by combining multiple related transactions into a single bundle

### Protocol Specification

#### Nomenclature

* **Bundler**: Refers to the data protocol specification of the EVM bundled transactions on Load Network.
* **Envelope**: A legacy EVM transaction that serves as the fundamental building block and composition unit of a Bundle.
* **Bundle**: An EIP-1559 transaction that groups multiple envelopes (`n > 0`), enabling efficient transaction batching and processing.
* **Large Bundle**: A transaction that carries multiple bundles.
* **Bundler Lib**: Refers to the Bundler Rust library that facilitates composing and propagating Bundler's bundles.

#### 1. Bundle Format

A bundle is a group of envelopes organized through the following process:

1. Envelopes MUST be grouped in a vector
2. The bundle is Borsh serialized according to the `BundleData` type
3. The resulting serialization vector is compressed using Brotli compression
4. The Borsh-Brotli serialized-compressed vector is added as `input` (calldata) to an EIP-1559 transaction
5. The resulting bundle is broadcast on Load Network with `target` set to a `0xbabe` address based on the bundle version

```rust
pub struct BundleData {
    pub envelopes: Vec<TxEnvelopeWrapper>,
}
```

*Figure: envelope lifecycle*

#### Bundles Versioning

Bundles versioning is based on the bundle's target address:

| Bundle Version | Bundler Target Acronym | Bundler Target Address |
| :------------: | :--------------------: | :--------------------: |
| v0.1.0 | `0xbabe1` | [0xbabe1d25501157043c7b4ea7CBC877B9B4D8A057](https://explorer.wvm.dev/address/0xbabe1d25501157043c7b4ea7CBC877B9B4D8A057) |
| v0.2.0 | `0xbabe2` | [0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84](https://explorer.wvm.dev/address/0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84) |

#### 2. Envelope Format

An envelope is a signed Legacy EVM transaction with the following MUSTs and restrictions.

```rust
pub struct Tag {
    pub name: String,
    pub value: String,
}

pub struct EnvelopeSignature {
    pub y_parity: bool,
    pub r: String,
    pub s: String,
}

pub struct TxEnvelopeWrapper {
    pub chain_id: u64,
    pub nonce: u64,
    pub gas_price: u128,
    pub gas_limit: u64,
    pub to: String,
    pub value: String,
    pub input: String,
    pub hash: String,
    pub signature: EnvelopeSignature,
    pub tags: Option<Vec<Tag>>,
}
```

1. **Transaction Fields**
   * `nonce`: MUST be 0
   * `gas_limit`: MUST be 0
   * `gas_price`: MUST be 0
   * `value`: MUST be 0
2. **Size Restrictions**
   * Total Borsh-Brotli compressed envelopes (Bundle data) MUST be under 9 MB
   * Total tags byte size MUST be <= 2048 bytes before compression
3. **Signature Requirements**
   * Each envelope MUST have a valid signature
4. **Usage Constraints**
   * MUST be used strictly for data settling on Load Network
   * MUST only contain the envelope's calldata, with optional `target` setting (default fallback to the ZERO address)
   * CANNOT be used for:
     * tLOAD transfers
     * Contract interactions
     * Any purpose other than data settling

#### 3. Transaction Type Choice

The selection of transaction types follows clear efficiency principles.
Legacy transactions were chosen for envelopes due to their minimal size (144 bytes), making them the most space-efficient option for data storage. EIP-1559 transactions were adopted for bundles as the widely accepted standard for transaction processing.

*Figure: EVM transaction types - size in bytes*

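The envelope MUSTs from the format spec above can be checked mechanically before bundling. A sketch, with field names mirroring `TxEnvelopeWrapper`; the `validateEnvelope` helper itself is ours, not part of the Bundler lib:

```javascript
// Sketch: validate the envelope restrictions listed in the spec above.
// Field names follow TxEnvelopeWrapper; returns a list of violations.
function validateEnvelope(env) {
  const errors = [];
  if (env.nonce !== 0) errors.push("nonce MUST be 0");
  if (env.gas_limit !== 0) errors.push("gas_limit MUST be 0");
  if (env.gas_price !== 0) errors.push("gas_price MUST be 0");
  if (env.value !== "0") errors.push("value MUST be 0");
  // tags MUST total <= 2048 bytes before compression
  const tagBytes = Buffer.byteLength(JSON.stringify(env.tags ?? []));
  if (tagBytes > 2048) errors.push("tags MUST be <= 2048 bytes");
  return errors;
}
```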
#### 4. Notes

* Envelopes exist as signed Legacy transactions within bundles but operate under distinct processing rules - they are not individually processed by the Load Network as transactions, despite having the structure of a Legacy transaction (signed data with a Transaction type). Instead, they are bundled together and processed as a single onchain transaction (hence the advantage of Bundler).
* Multiple instances of the same envelope within a bundle are permissible and do not invalidate either the bundle or the envelopes themselves. Duplicates within a single bundle are treated as copies sharing the same timestamp; duplicates across different bundles are treated as distinct instances, each with its own bundle's timestamp.
* Since envelopes are implemented as signed Legacy transactions, they are strictly reserved for data settling purposes. Their use for any other purpose is explicitly prohibited, for the security of the envelope's signer.

### Large Bundle

#### About

A Large Bundle is a bundle under version 0xbabe2 that exceeds the Load Network L1 and `0xbabe1` transaction size limits, introducing incredibly high size efficiency to data settling on LN. For example, with [Alphanet v0.4.0](https://blog.wvm.dev/alphanet-v4) running @ 500 mgas/s, a Large Bundle has a max size of 246 GB. For the sake of DevX and simplicity of the current 0xbabe2 stack, Large Bundles in the Bundler SDK have been limited to 2GB, while on the network level, the limit is 246GB.

#### SuperAccount

A Super Account is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading LB chunks to LN by splitting each chunk to a chunker (\~4MB per chunker), moving from a single-wallet single-threaded design in data uploads to a multi-wallet multi-threaded design.
```rust use bundler::utils::core::super_account::SuperAccount; // init SuperAccount instance let super_account = SuperAccount::new() .keystore_path(".bundler_keystores".to_string()) .pwd("weak-password".to_string()) // keystore pwd .funder("private-key".to_string()) // the pk that will fund the chunkers .build(); // create chunkers let _chunkers = super_account.create_chunkers(Some(256)).await.unwrap(); // Some(amount) of chunkers // fund chunkers (1 tWVM each) let _fund = super_account.fund_chunkers().await.unwrap(); // will fund each chunker by 1 tWVM // retrieve chunkers let loaded_chunkers = super_account.load_chunkers(None).await.unwrap(); // None to load all chunkers ``` #### Architecture design Large Bundles are built on top of the Bundler data specification. In simple terms, a Large Bundle consists of `n` smaller chunks (standalone bundles) that are sequentially connected tail-to-head and then at the end the Large Bundle is a reference to all the sequentially related chunks, packing all of the chunks IDs in a single `0xbabe2` bundle and sending it to Load Network.

*Figure: 0xbabe2 transaction lifecycle*

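The chunk bookkeeping described above reduces to simple arithmetic. A sketch using the constants from this page's size-calculation spec (4 MB max L1 calldata per chunk, 68 bytes per chunk-hash entry):

```javascript
// Large Bundle chunk arithmetic (constants from this page).
const CHUNK_BYTES = 4 * 1024 * 1024; // C_tx = 4_194_304 bytes
const HASH_ENTRY_BYTES = 68;         // per chunk-hash entry in the receipt list

// N = ceil(S / C), with N = 1 when S < C
function chunkCount(sizeBytes, chunkBytes = CHUNK_BYTES) {
  return Math.max(1, Math.ceil(sizeBytes / chunkBytes));
}

// Max chunks whose hash list still fits in one 4 MB L1 transaction
const MAX_CHUNKS = Math.floor(CHUNK_BYTES / HASH_ENTRY_BYTES); // 61_680
const MAX_LARGE_BUNDLE_MB = (MAX_CHUNKS * CHUNK_BYTES) / (1024 * 1024); // 246_720 MB
```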
#### Large Bundle Size Calculation

**Determining Number of Chunks**

To store a file of size S (in MB) with a chunk size C, the number of chunks (N) is calculated as:

**N = ⌊S/C⌋ + \[(S mod C) > 0]**

Special case: **if S < C then N = 1**

**Maximum Theoretical Size**

The bundling actor collects all hash receipts of the chunks, orders them in a list, and uploads this list as an LN L1 transaction. The size components of a Large Bundle are:

* 2 brackets \[ ] = 2 bytes
* EVM transaction hash without the "0x" prefix = 64 bytes per hash
* 2 bytes for the quotation marks around each hash
* 2 bytes for comma and space (one less comma at the end, so subtract 2 from the total)
* **Size per chunk's hash = 68 bytes**

Therefore: **Total hashes size = 2 + (N × 68) - 2 = 68N bytes**

**Maximum Capacity Calculation**

* Maximum L1 transaction input size (`C_tx`) = 4 MB = 4\_194\_304 bytes
* Maximum number of chunks (`Σn`) = `C_tx` ÷ 68 = 4\_194\_304 ÷ 68 = 61\_680 chunks
* **Maximum theoretical Large Bundle size (`C_max`) = `Σn` × `C_tx` = 61\_680 × 4 MB = 246,720 MB ≈ 246.72 GB**

#### Load Network Bundles Limitation

| Network gaslimit | L1 tx input size | 0xbabe1 size | 0xbabe2 size |
| :---: | :---: | :---: | :---: |
| 500 mgas/s (current) | 4MB | 4MB | 246 GB |
| 1 gigagas/s (upcoming) | 8MB | 8MB | 492 GB |

### Bundler Library

#### Import Bundler in your project

```toml
bundler = { git = "https://github.com/weaveVM/bundler", branch = "main" }
```

#### 0xbabe1 Bundles

**Build an envelope, build a bundle**

```rust
use bundler::utils::core::envelope::Envelope;
use bundler::utils::core::bundle::Bundle;
use bundler::utils::core::tags::Tag;

// Envelope
let envelope = Envelope::new()
    .data(byte_vec)
    .target(address)
    .tags(tags)
    .build()?;

// Bundle
let bundle_tx = Bundle::new()
    .private_key(private_key)
    .envelopes(envelopes)
    .build()
    .propagate()
    .await?;
```

**Example: Build a bundle packed with envelopes**

```rust
async fn send_bundle_without_target() -> eyre::Result<String> {
    // will fail until a tLOAD funded EOA (pk) is provided
    let private_key = String::from("");
    let mut envelopes: Vec<Envelope> = vec![];

    for _ in 0..10 {
        let random_calldata: String = generate_random_calldata(128_000); // 128 KB of random calldata
        let envelope_data = serde_json::to_vec(&random_calldata).unwrap();
        let envelope = Envelope::new()
            .data(Some(envelope_data))
            .target(None)
            .build()?;
        envelopes.push(envelope);
    }

    let bundle_tx = Bundle::new()
        .private_key(private_key)
        .envelopes(envelopes)
        .build()
        .propagate()
        .await?;
    Ok(bundle_tx)
}
```

**Example: Send tagged envelopes**

```rust
async fn send_envelope_with_tags() -> eyre::Result<String> {
    // will fail until a tLOAD funded EOA (pk) is provided
    let private_key = String::from("");
    let mut envelopes: Vec<Envelope> = vec![];

    // add your tags to a vector
    let tags = vec![Tag::new(
        "Content-Type".to_string(),
        "text/plain".to_string(),
    )];

    for _ in 0..1 {
        let random_calldata: String = generate_random_calldata(128_000); // 128 KB of random calldata
        let envelope_data = serde_json::to_vec(&random_calldata).unwrap();
        let envelope = Envelope::new()
            .data(Some(envelope_data))
            .target(None)
            .tags(Some(tags.clone())) // add your tags
            .build()
            .unwrap();
        envelopes.push(envelope);
    }

    let bundle_tx = Bundle::new()
        .private_key(private_key)
        .envelopes(envelopes)
        .build()
        .expect("REASON")
        .propagate()
        .await
        .unwrap();
    Ok(bundle_tx)
}
```

#### 0xbabe2 Large Bundle

**Example: construct and disperse a Large Bundle single-threaded**

```rust
use bundler::utils::core::large_bundle::LargeBundle;

async fn send_large_bundle_without_super_account() -> eyre::Result<String> {
    let private_key = String::from("");
    let content_type = "text/plain".to_string();
    let data = "~UwU~".repeat(4_000_000).as_bytes().to_vec();

    let large_bundle_hash = LargeBundle::new()
        .data(data)
        .private_key(private_key)
        .content_type(content_type)
        .chunk()
        .build()?
        .propagate_chunks()
        .await?
        .finalize()
        .await?;
    Ok(large_bundle_hash)
}
```

**Example: construct and disperse a Large Bundle multi-threaded**

```rust
async fn send_large_bundle_with_super_account() {
    // will fail until a tLOAD funded EOA (pk) is provided; take care with the
    // nonce if the same wallet is used as in test_send_bundle_with_target
    let private_key = String::from("");
    let content_type = "text/plain".to_string();
    let data = "~UwU~".repeat(8_000_000).as_bytes().to_vec();

    let super_account = SuperAccount::new()
        .keystore_path(".bundler_keystores".to_string())
        .pwd("test".to_string());

    let large_bundle = LargeBundle::new()
        .data(data)
        .private_key(private_key)
        .content_type(content_type)
        .super_account(super_account)
        .chunk()
        .build()
        .unwrap()
        .super_propagate_chunks()
        .await
        .unwrap()
        .finalize()
        .await
        .unwrap();

    println!("{:?}", large_bundle);
}
```

**Example: Retrieve Large Bundle data**

```rust
async fn retrieve_large_bundle() -> eyre::Result<Vec<u8>> {
    let large_bundle = LargeBundle::retrieve_chunks_receipts(
        "0xb58684c24828f8a80205345897afa7aba478c23005e128e4cda037de6b9ca6fd".to_string(),
    )
    .await?
    .reconstruct_large_bundle()
    .await?;
    Ok(large_bundle)
}
```

For more examples, check the tests in [lib.rs](https://github.com/weaveVM/bundler/blob/main/src/lib.rs).

### HTTP API

* Base endpoint: [https://bundler.load.rs/](https://bundler.load.rs/)

#### Retrieve full envelopes data of a given bundle

```bash
GET /v1/envelopes/:bundle_txid
```

#### Retrieve full envelopes data of a given bundle (with `from`'s envelope property derived from sig)

```bash
GET /v1/envelopes-full/:bundle_txid
```

#### Retrieve envelopes ids of a given bundle

```bash
GET /v1/envelopes/ids/:bundle_txid
```

> **N.B: All of the `/v1` methods (`0xbabe1`) are available under `/v2` for `0xbabe2` Large Bundles.**

#### Resolve the content of a Large Bundle (not efficient, experimental)

```bash
GET /v2/resolve/:large_bundle_txid
```

### Cost Efficiency: some comparisons

#### SSTORE2 VS LN L1 calldata
In the comparison below, we tested data settling of 1MB of non-zero bytes. LN's pricing of non-zero bytes (8 gas) and large transaction data size limit (8MB) allows us to fit the whole MB in a single transaction, paying a single overhead fee.

| Chain | File Size (bytes) | Number of Contracts/Tx | Gas Used | Gas Price (Gwei) | Cost in Native | Native Price (USD) | Total (USD) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN L1 Calldata | 1,000,000 | 1 | 8,500,000 (8M for calldata & 500k as base gas fee) | 1 Gwei | - | - | \~$0.05 |
| Ethereum L1 | 1,000,000 | 41 | 202,835,200 gas | 20 Gwei | 4.056704 | $3641.98 | $14774.43 |
| Polygon Sidechain | 1,000,000 | 41 | 202,835,200 gas | 40 Gwei (L1: 20 Gwei) | 8.113408 | $0.52 | $4.21 |
| BSC L1 | 1,000,000 | 41 | 202,835,200 gas | 5 Gwei | 1.014176 | $717.59 | $727.76 |
| Arbitrum (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.020284 (+0.128168 L1 fee) | $3641.98 | $540.66 |
| Avalanche L1 | 1,000,000 | 41 | 202,835,200 gas | 25 Gwei | 5.070880 | $43.90 | $222.61 |
| Base (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Optimism (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Blast (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Linea (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Scroll (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Moonbeam (Polkadot) | 1,000,000 | 41 | 202,835,200 gas | 100 Gwei | 20.283520 | $0.27 | $5.40 |
| Polygon zkEVM (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Solana L1 | 1,000,000 | 98 | 490,000 imports | N/A | 0.000495 (0.000005 deposit) | $217.67 | $0.11 |
#### SSTORE2 VS LN L1 Calldata VS LN Bundler 0xbabe1
Now let's take the data even higher, but for simplicity, let's not fit the whole data in a single LN L1 calldata transaction. Instead, we'll split it into 1MB transactions (creating multiple data settlement overhead fees): 5MB, 5 txs of 1 MB each:

| Chain | File Size (bytes) | Number of Contracts/Tx | Gas Used | Gas Price (Gwei) | Cost in Native | Native Price (USD) | Total (USD) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN Bundler 0xbabe1 | 5,000,000 | 1 | 40,500,000 (40M for calldata & 500k as base gas fee) | 1 Gwei | - | - | \~$0.25-$0.27 |
| LN L1 Calldata | 5,000,000 | 5 | 42,500,000 (40M for calldata & 2.5M as base gas fee) | 1 Gwei | - | - | \~$0.22 |
| Ethereum L1 | 5,000,000 | 204 | 1,009,228,800 gas | 20 Gwei | 20.184576 | $3650.62 | $73686.22 |
| Polygon Sidechain | 5,000,000 | 204 | 1,009,228,800 gas | 40 Gwei (L1: 20 Gwei) | 40.369152 | $0.52 | $20.95 |
| BSC L1 | 5,000,000 | 204 | 1,009,228,800 gas | 5 Gwei | 5.046144 | $717.75 | $3621.87 |
| Arbitrum (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.100923 (+0.640836 L1 fee) | $3650.62 | $2707.88 |
| Avalanche L1 | 5,000,000 | 204 | 1,009,228,800 gas | 25 Gwei | 25.230720 | $44.01 | $1110.40 |
| Base (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Optimism (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Blast (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Linea (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Scroll (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Moonbeam (Polkadot) | 5,000,000 | 204 | 1,009,228,800 gas | 100 Gwei | 100.922880 | $0.27 | $26.94 |
| Polygon zkEVM (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Solana L1 | 5,000,000 | 489 tx | 2445.00k imports | N/A | 0.002468 (0.000023 deposit) | $218.44 | $0.54 |
#### LN L1 Calldata VS LN Bundler 0xbabe1
Let's compare storing 40 MB of data (40 x 1 MB transactions) using two different methods, considering the 8 MB bundle size limit:

| Metric | LN L1 Calldata | LN Bundler |
| --- | --- | --- |
| Total Data Size | 40 MB | 40 MB |
| Transaction Format | 40 separate EIP-1559 transactions | 5 bundle transactions (8MB each, 40 \* 1MB envelopes) |
| Transactions per Bundle | 1 MB each | 8 x 1MB per bundle |
| Gas Cost per Tx | 8.5M gas (8M calldata + 500k base) | 64.5M gas (64M + 500k base) per bundle |
| Number of Base Fees | 40 | 5 |
| Total Gas Used | 340M gas (40 x 8.5M) | 322.5M gas (5 x 64.5M) |
| Gas Price | 1 Gwei | 1 Gwei |
| Total Cost | \~$1.5-1.7 | \~$1.3 |
| Cost Savings | - | \~15% cheaper |
#### Table data sources

* [Load Network price calculator](https://load.network/calculator)
* [EVM storage calculator](https://swader.github.io/soroban/#calculator)

### Source Code

[https://github.com/weaveVM/bundler](https://github.com/weaveVM/bundler)

File: using-load-network/load-network-precompiles.md (12.87 KB)
----------------------------------------------------

---
description: About Load Network precompiled contracts
---

# Load Network Precompiles

### What Are Precompiled Contracts?

Ethereum uses precompiles to efficiently implement cryptographic primitives within the EVM instead of re-implementing these primitives in Solidity. The following precompiles are currently included: ecrecover, sha256, blake2f, ripemd-160, Bn256Add, Bn256Mul, Bn256Pairing, the identity function, modular exponentiation, and point evaluation. Ethereum precompiles behave like smart contracts built into the Ethereum protocol. The ten precompiles live at addresses 0x01 to 0x0A. Load Network supports all 10 standard precompiles and adds new custom precompiles starting at address 0x17 (decimal 23) - a nod to "W" being the 23rd letter of the alphabet.
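The address derivation above can be checked in a couple of lines:

```javascript
// "W" is the 23rd letter of the alphabet, and 23 in hex is 0x17 -
// the address of the first Load Network custom precompile.
const position = "W".charCodeAt(0) - "A".charCodeAt(0) + 1; // 23
const firstPrecompile = "0x" + position.toString(16);       // "0x17"
```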
### Load Network Precompiles List

| Address | Name | Minimum Gas | Input | Output | Description |
| --- | --- | --- | --- | --- | --- |
| 0x01 (`0x0000000000000000000000000000000000000001`) | ecRecover | 3000 | hash, v, r, s | publicAddress | Elliptic curve digital signature algorithm (ECDSA) public key recovery function |
| 0x02 (`0x0000000000000000000000000000000000000002`) | SHA2-256 | 60 | data | hash | Hash function |
| 0x03 (`0x0000000000000000000000000000000000000003`) | RIPEMD-160 | 600 | data | hash | Hash function |
| 0x04 (`0x0000000000000000000000000000000000000004`) | identity | 15 | data | data | Returns the input |
| 0x05 (`0x0000000000000000000000000000000000000005`) | modexp | 200 | Bsize, Esize, Msize, B, E, M | value | Arbitrary-precision exponentiation under modulo |
| 0x06 (`0x0000000000000000000000000000000000000006`) | ecAdd | 150 | x1, y1, x2, y2 | x, y | Point addition (ADD) on the elliptic curve alt\_bn128 |
| 0x07 (`0x0000000000000000000000000000000000000007`) | ecMul | 6000 | x1, y1, s | x, y | Scalar multiplication (MUL) on the elliptic curve alt\_bn128 |
| 0x08 (`0x0000000000000000000000000000000000000008`) | ecPairing | 45000 | x1, y1, x2, y2, ..., xk, yk | success | Bilinear function on groups on the elliptic curve alt\_bn128 |
| 0x09 (`0x0000000000000000000000000000000000000009`) | blake2f | 0 | rounds, h, m, t, f | h | Compression function F used in the BLAKE2 cryptographic hashing algorithm |
| 0x0A (`0x000000000000000000000000000000000000000A`) | point evaluation | 50000 | bytes | bytes | Verify p(z) = y given a commitment that corresponds to the polynomial p(x) and a KZG proof. Also verify that the provided commitment matches the provided versioned\_hash. |
| 0x17 (`0x0000000000000000000000000000000000000017`) | arweave\_upload | 10003 | bytes | bytes | Upload a bytes array to Arweave and get back the upload TXID in bytes |
| 0x18 (`0x0000000000000000000000000000000000000018`) | arweave\_read | 10003 | bytes | bytes | Retrieve an Arweave TXID's data in bytes |
| 0x20 (`0x0000000000000000000000000000000000000020`) | read\_block | 10003 | bytes | bytes | Retrieve an LN block's data (from genesis), pulling it from Arweave |
| 0x21 (`0x0000000000000000000000000000000000000021`) | kyve\_trustless\_api\_blob | 10003 | bytes | bytes | Retrieve historical Ethereum blob data from LN's smart contract layer |

### Outlining Load Network New Precompiles

#### 1- Precompile 0x17: upload data from Solidity to Arweave

The LN precompile at address 0x17 (`0x0000000000000000000000000000000000000017`) enables data upload (in byte format) from Solidity to Arweave, and returns the data TXID (in byte format). In Alphanet V4, data uploads are limited to 100KB. Future network updates will remove this limitation and introduce a higher data cap.

**Solidity code example:**

```solidity
pragma solidity ^0.8.0;

contract ArweaveUploader {
    function upload_to_arweave(string memory dataString) public view returns (bytes memory) {
        // Convert the string parameter to bytes
        bytes memory data = abi.encodePacked(dataString);
        // pc address: 0x0000000000000000000000000000000000000017
        (bool success, bytes memory result) = address(0x17).staticcall(data);
        return result;
    }
}
```

#### 2- Precompile 0x18: read Arweave data from Solidity

This precompile, at address 0x18 (`0x0000000000000000000000000000000000000018`), completes the data pipeline between LN and Arweave, making it bidirectional. It enables retrieving data from Arweave in bytes for a given Arweave TXID. The 0x18 precompile lets the user choose the Arweave gateway used to resolve a TXID. If no gateway URL is provided, the precompile defaults to `arweave.net`.
The precompile's bytes input (string representation) should follow this format: `gateway_url;arweave_txid`

**Solidity code example:**

```solidity
pragma solidity ^0.8.0;

contract ArweaveReader {
    function read_from_arweave(string memory txIdOrGatewayAndTxId) public view returns (bytes memory) {
        // Convert the string parameter to bytes
        bytes memory data = abi.encodePacked(txIdOrGatewayAndTxId);
        // pc address: 0x0000000000000000000000000000000000000018
        (bool success, bytes memory result) = address(0x18).staticcall(data);
        require(success, "0x18 staticcall failed");
        return result;
    }
}
```

#### 3- Precompile 0x20: Access to LN's historical blocks

This precompile, at address 0x20 (`0x0000000000000000000000000000000000000020`), lets smart contract developers access not just the most recent 256 blocks, but any block's data, all the way back to genesis. Here is a code example showing how to request block data using the 0x20 precompile:

```solidity
pragma solidity ^0.8.0;

contract LnBlockReader {
    function read_block() public view returns (bytes memory) {
        string memory blockIdAndField = "141550;hash";
        // Convert the string parameter to bytes
        bytes memory data = abi.encodePacked(blockIdAndField);
        (bool success, bytes memory result) = address(0x20).staticcall(data);
        require(success, "0x20 staticcall failed");
        return result;
    }
}
```

The query string consists of up to three parameters separated by a semicolon ";" (`gateway;load_block_id;block_field` format):

* An Arweave gateway (optional; falls back to arweave.net if not provided): [https://ar-io.dev](https://ar-io.dev/)
* The Load Network block number to fetch, target block: 141550
* The field of the block struct to access, in this case: hash

Only the gateway is optional. As for the block fields you can access, here is the `Block` struct that the 0x20 precompile uses:

```rust
#[serde(rename_all = "camelCase")]
pub struct Block {
    pub base_fee_per_gas: Option<String>,         // "baseFeePerGas"
    pub blob_gas_used: Option<String>,            // "blobGasUsed"
    pub difficulty: Option<String>,               // "difficulty"
    pub excess_blob_gas: Option<String>,          // "excessBlobGas"
    pub extra_data: Option<String>,               // "extraData"
    pub gas_limit: Option<String>,                // "gasLimit"
    pub gas_used: Option<String>,                 // "gasUsed"
    pub hash: Option<String>,                     // "hash"
    pub logs_bloom: Option<String>,               // "logsBloom"
    pub miner: Option<String>,                    // "miner"
    pub mix_hash: Option<String>,                 // "mixHash"
    pub nonce: Option<String>,                    // "nonce"
    pub number: Option<String>,                   // "number"
    pub parent_beacon_block_root: Option<String>, // "parentBeaconBlockRoot"
    pub parent_hash: Option<String>,              // "parentHash"
    pub receipts_root: Option<String>,            // "receiptsRoot"
    pub seal_fields: Vec<String>,                 // "sealFields" as an array of strings
    pub sha3_uncles: Option<String>,              // "sha3Uncles"
    pub size: Option<String>,                     // "size"
    pub state_root: Option<String>,               // "stateRoot"
    pub timestamp: Option<String>,                // "timestamp"
    pub total_difficulty: Option<String>,         // "totalDifficulty"
    pub transactions: Vec<String>,                // "transactions" as an array of strings
}
```

[Check out the 0x20 source code here](https://github.com/weaveVM/wvm-reth/pull/36/files)

#### 4- Precompile 0x21: Native access to archived Ethereum blobs

This precompile, at address 0x21 (`0x0000000000000000000000000000000000000021`), is a unique solution for native access to blob data (not just commitments) from the smart contract layer. It fetches the blob data that KYVE archives for its supported networks from the [KYVE Trustless API](https://docs.kyve.network/access-data-sets/trustless-api/overview). With 0x21, KYVE clients can, for the first time, fetch their archived blobs from an EVM smart contract layer instead of wrapping the Trustless API in oracles and making expensive calls.

0x21 lets you fetch KYVE's Ethereum blob data starting at Ethereum block [19426589](https://etherscan.io/block/19426589), the first block with a recorded EIP-4844 transaction.
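Both 0x18 and 0x20 interpret their calldata as a plain UTF-8, semicolon-delimited string, so the query can just as well be composed off-chain and passed to the precompile through a raw `eth_call`. Below is a minimal JavaScript sketch; the helper names (`arweaveQuery`, `blockQuery`) are hypothetical and not part of any official SDK:

```javascript
// Hypothetical off-chain helpers that compose the semicolon-delimited
// query strings the 0x18 and 0x20 precompiles expect as calldata.

// 0x18 input: "gateway_url;arweave_txid" (gateway optional)
function arweaveQuery(txId, gateway) {
  return gateway ? `${gateway};${txId}` : txId;
}

// 0x20 input: "gateway;load_block_id;block_field" (gateway optional)
function blockQuery(blockNumber, field, gateway) {
  const parts = [String(blockNumber), field];
  if (gateway) parts.unshift(gateway);
  return parts.join(";");
}

console.log(blockQuery(141550, "hash")); // "141550;hash", as in the Solidity example
```

The resulting string is exactly what the Solidity examples pass through `abi.encodePacked` before the staticcall.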
To retrieve a blob from the Trustless API, in the 0x21 staticcall you need to specify the Ethereum block number, the blob's index in the transaction, and the blob field you want to retrieve, in this format: `block_number;blob_index.field`

_**N.B.: blob\_index represents the blob's index in the KYVE Trustless API JSON response.**_

```solidity
pragma solidity ^0.8.0;

contract KyveBlobsTrustlessApi {
    function getBlob() public view returns (bytes memory) {
        string memory query = "20033081;0.blob";
        // Convert the string parameter to bytes
        bytes memory data = abi.encodePacked(query);
        (bool success, bytes memory result) = address(0x21).staticcall(data);
        require(success, "0x21 staticcall failed");
        return result;
    }
}
```

The EIP-4844 transaction fields that you can access from the 0x21 query are:

* blob (raw blob data, the body)
* kzg\_commitment
* kzg\_proof
* slot

**Advantages of 0x21 (use cases)**

* Native access to blob data from the smart contract layer
* Access to permanently archived blobs
* Opens up longer verification windows for rollups using KYVE for archived blobs and Load Network as a settlement layer
* Enables using blobs for purposes beyond rollup DA, opening doors for data-intensive blob-based applications with permanent blob access

Check out the 0x21 precompile source code [here](https://github.com/weaveVM/wvm-reth/pull/41/files).

File: using-load-network/network-configurations.md (676 B)
--------------------------------------------------

---
description: Load Network Configurations
---

# Network configurations

### Alphanet V5

* RPC URL: [https://alphanet.load.network](https://alphanet.load.network)
* Chain ID: 9496
* Alphanet Faucet: [https://load.network/faucet](https://load.network/faucet)
* Testnet Currency Symbol: tLOAD
* Explorer: [https://explorer.load.network](https://explorer.load.network)
* Chainlist: [https://chainlist.org/chain/9496](https://chainlist.org/chain/9496)

### Add to MetaMask

_Adding Load Alphanet in MetaMask_
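The manual steps below add the network through MetaMask's UI; a dapp can achieve the same with the standard `wallet_addEthereumChain` RPC request. A sketch built from the Alphanet V5 values above (the chain name label and the 18-decimal native currency are assumptions, not taken from this page):

```javascript
// Parameters for MetaMask's wallet_addEthereumChain request,
// assembled from the Alphanet V5 values (0x2518 === 9496).
// "Load Network Alphanet" and decimals: 18 are assumed values.
const loadAlphanet = {
  chainId: "0x2518", // 9496 in hex
  chainName: "Load Network Alphanet",
  rpcUrls: ["https://alphanet.load.network"],
  nativeCurrency: { name: "tLOAD", symbol: "tLOAD", decimals: 18 },
  blockExplorerUrls: ["https://explorer.load.network"],
};

// In a browser with MetaMask injected:
// await window.ethereum.request({
//   method: "wallet_addEthereumChain",
//   params: [loadAlphanet],
// });
```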

Click on `Networks` > `Add a network` > `Add a network manually`

Directory: using-load-network/self-hosted-rpc-proxies

File: using-load-network/self-hosted-rpc-proxies/README.md (73 B)
----------------------------------------------------------

---
description: Host your own RPC Proxy
---

# Self-Hosted RPC Proxies

File: using-load-network/self-hosted-rpc-proxies/javascript-proxy.md (498 B)
--------------------------------------------------------------------

---
description: Run a JavaScript RPC Proxy locally or in the cloud
---

# JavaScript Proxy

### Run Locally

```bash
git clone https://github.com/weavevm/proxy-rpc.git
cd proxy-rpc
npm install && npm run start
```

### Try it!

```bash
curl -X POST http://localhost:3000 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```

You can find the proxy server codebase here: [https://github.com/weaveVM/proxy-rpc](https://github.com/weaveVM/proxy-rpc)

File: using-load-network/self-hosted-rpc-proxies/rust-proxy.md (518 B)
--------------------------------------------------------------

---
description: Run a Rust RPC Proxy locally or in the cloud
---

# Rust Proxy

### Run Locally

```bash
git clone https://github.com/weavevm/wvm-proxy-rpc.git
cd wvm-proxy-rpc
cargo build && cargo shuttle run --port 3000
```

### Try it!

```bash
curl -X POST http://localhost:3000 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```

You can find the proxy server codebase here: [https://github.com/weaveVM/wvm-rpc-proxy](https://github.com/weaveVM/wvm-rpc-proxy)
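The curl calls above send a minimal JSON-RPC 2.0 envelope. For clients that prefer JavaScript over the shell, the sketch below builds the same payload with a hypothetical `buildRpcRequest` helper; it can then be POSTed to the proxy with any HTTP client:

```javascript
// Hypothetical helper: build the JSON-RPC 2.0 payload the curl examples send.
function buildRpcRequest(method, params = [], id = 1) {
  return JSON.stringify({ jsonrpc: "2.0", method, params, id });
}

const body = buildRpcRequest("eth_chainId");
console.log(body); // {"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}

// With the proxy running locally, POST it with e.g. fetch:
// fetch("http://localhost:3000", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// }).then((res) => res.json()).then(console.log);
```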