The Azle Book (Beta)
Welcome to The Azle Book! This is a guide for building secure decentralized/replicated servers in TypeScript or JavaScript on ICP. The current replication factor is 13-40 times.
Please remember that Azle stable mode is continuously subjected to intense scrutiny and testing; however, it does not yet have multiple independent security reviews/audits.
The Azle Book is subject to the following license and Azle's License Extension:
MIT License
Copyright (c) 2025 AZLE token holders (nlhft-2iaaa-aaaae-qaaua-cai)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Candid RPC or HTTP Server
Azle applications (canisters) can be developed using two main methodologies: Candid RPC and HTTP Server.
Candid RPC embraces ICP's Candid language, exposing canister methods directly to Candid-speaking clients, and using Candid for serialization and deserialization purposes.
HTTP Server embraces traditional web server techniques, allowing you to write HTTP servers using popular libraries such as Express, and using JSON for simple serialization and deserialization purposes.
Candid RPC is heading towards 1.0 and production-readiness in 2025.
HTTP Server will remain experimental for an unknown length of time.
Candid RPC
This section documents the Candid RPC methodology for developing Azle applications. This methodology embraces ICP's Candid language, exposing canister methods directly to Candid-speaking clients, and using Candid for serialization and deserialization purposes.
Candid RPC is heading towards 1.0 and production-readiness in 2025.
Get Started
Azle helps you to build secure decentralized/replicated servers in TypeScript or JavaScript on ICP. The current replication factor is 13-40 times.
Please remember that Azle stable mode is continuously subjected to intense scrutiny and testing; however, it does not yet have multiple independent security reviews/audits.
Azle runs in stable mode by default.
This mode is intended for production use after Azle's 1.0 release. Its focus is on API and runtime stability, security, performance, TypeScript and JavaScript language support, the ICP APIs, and Candid remote procedure calls (RPC). There is minimal support for the Node.js standard library, npm ecosystem, and HTTP server functionality.
Installation
Windows is only supported through a Linux virtual environment of some kind, such as WSL.
You will need Node.js 22 and dfx to develop ICP applications with Azle:
Node.js 22
It's recommended to use nvm to install Node.js 22:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
Restart your terminal and then run:
nvm install 22
Check that the installation went smoothly by looking for clean output from the following command:
node --version
dfx
Install the dfx command line tools for managing ICP applications:
DFX_VERSION=0.24.3 sh -ci "$(curl -fsSL https://internetcomputer.org/install.sh)"
Check that the installation went smoothly by looking for clean output from the following command:
dfx --version
Deployment
To create and deploy a simple sample application called hello_world:
# create a new default project called hello_world
npx azle new hello_world
cd hello_world
# install all npm dependencies including azle
npm install
# start up a local ICP replica
dfx start --clean
In a separate terminal in the hello_world directory:
# deploy your canister
dfx deploy
Examples
Some of the best documentation for creating Candid RPC canisters is currently in the examples directory.
Canister Class
Your canister's functionality must be encapsulated in a class exported using the default export:
import { IDL, query } from 'azle';

export default class {
    @query([], IDL.Text)
    hello(): string {
        return 'world!';
    }
}
You must use the @query, @update, @init, @postUpgrade, @preUpgrade, @inspectMessage, and @heartbeat decorators to expose your canister's methods. Adding TypeScript types is optional.
@dfinity/candid IDL
For each of your canister's methods, deserialization of incoming arguments and serialization of return values is handled with a combination of the @query, @update, @init, and @postUpgrade decorators and the IDL object from the @dfinity/candid library.
IDL is re-exported by Azle, and has properties that correspond to Candid's supported types. You must use IDL to instruct the method decorators on how to deserialize arguments and serialize the return value. Here's an example of accessing the Candid types from IDL:
import { IDL } from 'azle';
IDL.Text;
IDL.Vec(IDL.Nat8); // Candid blob
IDL.Nat;
IDL.Nat64;
IDL.Nat32;
IDL.Nat16;
IDL.Nat8;
IDL.Int;
IDL.Int64;
IDL.Int32;
IDL.Int16;
IDL.Int8;
IDL.Float64;
IDL.Float32;
IDL.Bool;
IDL.Null;
IDL.Vec(IDL.Int);
IDL.Opt(IDL.Text);
IDL.Record({
    prop1: IDL.Text,
    prop2: IDL.Bool
});
IDL.Variant({
    Tag1: IDL.Null,
    Tag2: IDL.Nat
});
IDL.Func([], [], ['query']);
IDL.Service({
    myQueryMethod: IDL.Func([IDL.Text, IDL.Text], [IDL.Bool])
});
IDL.Principal;
IDL.Reserved;
IDL.Empty;
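At runtime these IDL types correspond to plain JavaScript values following the @dfinity/candid conventions. Here's a rough, illustrative sketch of that correspondence (check your @dfinity/candid version for the exact mapping; the variable names are only for illustration):

```typescript
// Illustrative sketch: how some Candid (IDL) types typically surface
// as JavaScript/TypeScript values via @dfinity/candid conventions.
const text: string = 'hello';                 // IDL.Text
const nat: bigint = 123n;                     // IDL.Nat (arbitrary precision)
const nat32: number = 42;                     // IDL.Nat32 (fits in a JS number)
const optSome: [string] = ['present'];        // IDL.Opt(IDL.Text) with a value
const optNone: [] = [];                       // IDL.Opt(IDL.Text) without a value
const record = { prop1: 'abc', prop2: true }; // IDL.Record({ prop1, prop2 })
const variant = { Tag2: 7n };                 // IDL.Variant: exactly one tag set

console.log(typeof nat); // 'bigint'
```

Note in particular that the unbounded integer types map to bigint, and optional values are carried as zero- or one-element arrays.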
Decorators
@query
Exposes the decorated method as a read-only canister_query method.
The first parameter to this decorator accepts IDL Candid type objects that will deserialize incoming Candid arguments. The second parameter accepts an IDL Candid type object that will serialize the outgoing return value to Candid.
@update
Exposes the decorated method as a read-write canister_update method.
The first parameter to this decorator accepts IDL Candid type objects that will deserialize incoming Candid arguments. The second parameter accepts an IDL Candid type object that will serialize the outgoing return value to Candid.
@init
Exposes the decorated method as the canister_init method, called only once during canister initialization.
The first parameter to this decorator accepts IDL Candid type objects that will deserialize incoming Candid arguments.
@postUpgrade
Exposes the decorated method as the canister_post_upgrade method, called during every canister upgrade.
The first parameter to this decorator accepts IDL Candid type objects that will deserialize incoming Candid arguments.
@preUpgrade
Exposes the decorated method as the canister_pre_upgrade method, called before every canister upgrade.
@inspectMessage
Exposes the decorated method as the canister_inspect_message method, called before every update call.
@heartbeat
Exposes the decorated method as the canister_heartbeat method, called on a regular interval (every second or so).
IC API
The IC API is exposed as functions exported from azle. You can see the available functions in the source code.
Some of the best documentation for using the IC API is currently in the examples directory, especially the ic_api property tests.
Here's an example of getting the caller's principal using the caller function:
import { caller, IDL, update } from 'azle';

export default class {
    @update([], IDL.Bool)
    isUserAnonymous(): boolean {
        if (caller().toText() === '2vxsx-fae') {
            return true;
        } else {
            return false;
        }
    }
}
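The anonymous principal always has the textual form 2vxsx-fae, so the check above reduces to a string comparison. A minimal Azle-free sketch of the same logic (the helper name is hypothetical):

```typescript
// Hypothetical pure helper mirroring isUserAnonymous above.
// On ICP the anonymous principal's textual form is '2vxsx-fae'.
const ANONYMOUS_PRINCIPAL_TEXT = '2vxsx-fae';

function isAnonymousPrincipal(principalText: string): boolean {
    return principalText === ANONYMOUS_PRINCIPAL_TEXT;
}

console.log(isAnonymousPrincipal('2vxsx-fae')); // true
```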
HTTP Server (Experimental)
This section documents the HTTP Server methodology for developing Azle applications. This methodology embraces traditional web server techniques, allowing you to write HTTP servers using popular libraries such as Express, and using JSON for simple serialization and deserialization purposes.
HTTP Server functionality will remain experimental for an unknown length of time.
Get Started
Azle helps you to build secure decentralized/replicated servers in TypeScript or JavaScript on ICP. The current replication factor is 13-40 times.
Please remember that the HTTP Server functionality is only accessible in Azle's experimental mode.
Azle runs in experimental mode through explicitly enabling a flag in dfx.json or certain CLI commands.
This mode is intended for developers who are willing to accept the risk of using an alpha or beta project. Its focus is on quickly enabling new features and functionality without requiring the time and other resources necessary to advance them to the stable mode. The Node.js standard library, npm ecosystem, and HTTP server functionality are also major areas of focus.
NOTE: Keep clearly in mind that the experimental mode fundamentally changes the Azle Wasm binary. It is not guaranteed to be secure or stable in API changes or runtime behavior. If you enable the experimental mode, even if you only use APIs from the stable mode, you are accepting a higher risk of bugs, errors, crashes, security exploits, breaking API changes, etc.
Installation
Windows is only supported through a Linux virtual environment of some kind, such as WSL.
You will need Node.js 22 and dfx to develop ICP applications with Azle:
Node.js 22
It's recommended to use nvm to install Node.js 22:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
Restart your terminal and then run:
nvm install 22
Check that the installation went smoothly by looking for clean output from the following command:
node --version
dfx
Install the dfx command line tools for managing ICP applications:
DFX_VERSION=0.24.3 sh -ci "$(curl -fsSL https://internetcomputer.org/install.sh)"
Check that the installation went smoothly by looking for clean output from the following command:
dfx --version
Deployment
To create and deploy a simple sample application called hello_world:
# create a new default project called hello_world
npx azle new hello_world --http-server --experimental
cd hello_world
# install all npm dependencies including azle
npm install
# start up a local ICP replica
dfx start --clean
In a separate terminal in the hello_world directory:
# deploy your canister
dfx deploy
If you would like your canister to autoreload on file changes:
AZLE_AUTORELOAD=true dfx deploy
View your frontend in a web browser at http://[canisterId].raw.localhost:8000.
To obtain your application's [canisterId]:
dfx canister id backend
Communicate with your canister using any HTTP client library, for example using curl:
curl http://[canisterId].raw.localhost:8000/db
curl -X POST -H "Content-Type: application/json" -d "{ \"hello\": \"world\" }" http://[canisterId].raw.localhost:8000/db/update
Examples
There are many Azle examples in the examples directory. We recommend starting with the following:
- apollo_server
- audio_and_video
- autoreload
- ethers
- ethers_base
- express
- fetch_ic
- file_protocol
- fs
- hello_world_http_server
- http_outcall_fetch
- hybrid_canister
- ic_evm_rpc
- internet_identity
- large_files
- sqlite
- tfjs
- web_assembly
Deployment
- Starting the local replica
- Deploying to the local replica
- Interacting with your canister
- Deploying to mainnet
There are two main ICP environments that you will generally interact with: the local replica and mainnet.
We recommend using the dfx command line tools to deploy to these environments.
Please note that not all dfx commands are shown here. See the dfx CLI reference for more information.
Starting the local replica
We recommend running your local replica in its own terminal and on a port of your choosing:
dfx start --host 127.0.0.1:8000
Alternatively you can start the local replica as a background process:
dfx start --background --host 127.0.0.1:8000
If you want to stop a local replica running in the background:
dfx stop
If you ever see this kind of error after dfx stop:
Error: Failed to kill all processes. Remaining: 627221 626923 627260
Then try this:
dfx killall
If your replica starts behaving strangely, we recommend starting the replica clean, which will clean the dfx state of your project:
dfx start --clean --host 127.0.0.1:8000
Deploying to the local replica
To deploy all canisters defined in your dfx.json:
dfx deploy
If you would like your canister to autoreload on file changes:
AZLE_AUTORELOAD=true dfx deploy
To deploy an individual canister:
dfx deploy [canisterName]
Interacting with your canister
You will generally interact with your canister through an HTTP client such as curl, fetch, or a web browser. The URL of your canister locally will look like this: http://[canisterId].raw.localhost:[replicaPort]. Azle will print your canister's URL in the terminal after a successful deploy.
# You can obtain the canisterId like this
dfx canister id [canisterName]
# You can obtain the replicaPort like this
dfx info webserver-port
# An example of performing a GET request to a canister
curl http://a3shf-5eaaa-aaaaa-qaafa-cai.raw.localhost:8000
# An example of performing a POST request to a canister
curl -X POST -H "Content-Type: application/json" -d "{ \"hello\": \"world\" }" http://a3shf-5eaaa-aaaaa-qaafa-cai.raw.localhost:8000
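The two commands above give you everything needed to assemble the local URL. A small sketch of that assembly (the helper name is hypothetical):

```typescript
// Hypothetical helper assembling the local canister URL from the
// canisterId (`dfx canister id ...`) and replicaPort (`dfx info webserver-port`).
function localCanisterUrl(canisterId: string, replicaPort: number): string {
    return `http://${canisterId}.raw.localhost:${replicaPort}`;
}

console.log(localCanisterUrl('a3shf-5eaaa-aaaaa-qaafa-cai', 8000));
// http://a3shf-5eaaa-aaaaa-qaafa-cai.raw.localhost:8000
```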
Deploying to mainnet
Assuming you are set up with a cycles wallet, you are ready to deploy to mainnet.
To deploy all canisters defined in your dfx.json:
dfx deploy --network ic
To deploy an individual canister:
dfx deploy --network ic [canisterName]
The URL of your canister on mainnet will look like this: https://[canisterId].raw.icp0.io.
Project Structure TL;DR
Your project is just a directory with a dfx.json file that points to your .ts or .js entrypoint.
Here's what your directory structure might look like:
hello_world/
|
├── dfx.json
|
└── src/
└── api.ts
For an HTTP Server canister this would be the simplest corresponding dfx.json file:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http"
            }
        }
    }
}
For a Candid RPC canister this would be the simplest corresponding dfx.json file:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts"
        }
    }
}
Once you have created this directory structure you can deploy to mainnet or a locally running replica by running the dfx deploy command in the same directory as your dfx.json file.
dfx.json
The dfx.json file is the main ICP-specific configuration file for your canisters. The following are various examples of dfx.json files.
Automatic Candid File Generation
The dfx command-line tools require a Candid file to deploy your canister. Candid RPC canisters will automatically have their Candid files generated and stored in the .azle directory without any extra property in the dfx.json file. HTTP Server canisters must specify "candid_gen": "http" for their Candid files to be generated automatically in the .azle directory:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http"
            }
        }
    }
}
Custom Candid File
If you would like to provide your own custom Candid file you can specify "candid": "[path to your candid file]" and "candid_gen": "custom":
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "candid": "src/api.did",
            "custom": {
                "experimental": true,
                "candid_gen": "custom"
            }
        }
    }
}
Environment Variables
You can provide environment variables to Azle canisters by specifying their names in your dfx.json file and then accessing them through the process.env object in Azle.
You must provide the environment variables that you want included in the same process as your dfx deploy command.
Be aware that the environment variables that you specify in your dfx.json file will be included in plain text in your canister's Wasm binary.
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http",
                "env": ["MY_ENVIRONMENT_VARIABLE"]
            }
        }
    }
}
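Inside the canister, a variable listed in the env array is then read like any other Node.js environment variable. A minimal sketch (MY_ENVIRONMENT_VARIABLE comes from the dfx.json example above; the fallback handling is illustrative, not required by Azle):

```typescript
// Sketch: reading an environment variable declared in dfx.json's "env" array.
function readEnvVar(name: string, fallback: string): string {
    const value = process.env[name];
    return value !== undefined ? value : fallback;
}

const myVariable = readEnvVar('MY_ENVIRONMENT_VARIABLE', 'default-value');
```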
Assets
See the Assets chapter for more information:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http",
                "assets": [
                    ["src/frontend/dist", "dist"],
                    ["src/backend/media/audio.ogg", "media/audio.ogg"],
                    ["src/backend/media/video.ogv", "media/video.ogv"]
                ]
            }
        }
    }
}
Build Assets
See the Assets chapter for more information:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http",
                "assets": [
                    ["src/frontend/dist", "dist"],
                    ["src/backend/media/audio.ogg", "media/audio.ogg"],
                    ["src/backend/media/video.ogv", "media/video.ogv"]
                ],
                "build_assets": "npm run build"
            }
        }
    }
}
ESM Externals
This will instruct Azle's TypeScript/JavaScript build process to ignore bundling the provided named packages.
Sometimes the build process is overly eager to include packages that won't actually be used at runtime. This can be a problem if those packages wouldn't even work at runtime due to limitations in ICP or Azle. It is thus useful to be able to exclude them:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http",
                "esm_externals": ["@nestjs/microservices", "@nestjs/websockets"]
            }
        }
    }
}
ESM Aliases
This will instruct Azle's TypeScript/JavaScript build process to alias a package name to another package name.
This can be useful if you need to polyfill certain packages that might not exist in Azle:
{
    "canisters": {
        "api": {
            "type": "azle",
            "main": "src/api.ts",
            "custom": {
                "experimental": true,
                "candid_gen": "http",
                "esm_aliases": {
                    "crypto": "crypto-browserify"
                }
            }
        }
    }
}
Servers TL;DR
Just write Node.js servers like this:
import { createServer } from 'http';

const server = createServer((req, res) => {
    res.write('Hello World!');
    res.end();
});

server.listen();
or write Express servers like this:
import express, { Request } from 'express';

let db = {
    hello: ''
};

const app = express();

app.use(express.json());

app.get('/db', (req, res) => {
    res.json(db);
});

app.post('/db/update', (req: Request<any, any, typeof db>, res) => {
    db = req.body;

    res.json(db);
});

app.use(express.static('/dist'));

app.listen();
or NestJS servers like this:
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { AppModule } from './app.module';

async function bootstrap() {
    const app = await NestFactory.create<NestExpressApplication>(AppModule);
    await app.listen(3000);
}

bootstrap();
Servers
Azle supports building HTTP servers on ICP using the Node.js http.Server class as the foundation. These servers can serve static files or act as API backends, or both.
Azle currently has good but not comprehensive support for Node.js http.Server and Express. Support for other libraries like Nest is a work in progress.
Once deployed you can access your server at a URL like this locally http://bkyz2-fmaaa-aaaaa-qaaaq-cai.raw.localhost:8000 or like this on mainnet https://bkyz2-fmaaa-aaaaa-qaaaq-cai.raw.icp0.io.
You can use any HTTP client to interact with your server, such as curl, fetch, or a web browser. See the Interacting with your canister section of the deployment chapter for help in constructing your canister URL.
Node.js http.Server
Azle supports instances of Node.js http.Server. listen() must be called on the server instance for Azle to use it to handle HTTP requests. Azle does not respect a port being passed into listen(). The port is set by the ICP replica (e.g. dfx start --host 127.0.0.1:8000), not by Azle.
Here's an example of a very simple Node.js http.Server:
import { createServer } from 'http';

const server = createServer((req, res) => {
    res.write('Hello World!');
    res.end();
});

server.listen();
Express
Express is one of the most popular backend JavaScript web frameworks, and it's the recommended way to get started building servers in Azle. Here's the main code from the hello_world_http_server example:
import express, { Request } from 'express';

let db = {
    hello: ''
};

const app = express();

app.use(express.json());

app.get('/db', (req, res) => {
    res.json(db);
});

app.post('/db/update', (req: Request<any, any, typeof db>, res) => {
    db = req.body;

    res.json(db);
});

app.use(express.static('/dist'));

app.listen();
jsonStringify
When working with res.json you may run into errors because of attempting to send back JavaScript objects that are not strictly JSON. This can happen when trying to send back an object with a BigInt for example.
Azle has created a special function called jsonStringify that will serialize many ICP-specific data structures to JSON for you:
import { jsonStringify } from 'azle/experimental';
import express, { Request } from 'express';

let db = {
    bigInt: 0n
};

const app = express();

app.use(express.json());

app.get('/db', (req, res) => {
    res.send(jsonStringify(db));
});

app.post('/db/update', (req: Request<any, any, typeof db>, res) => {
    db = req.body;

    res.send(jsonStringify(db));
});

app.use(express.static('/dist'));

app.listen();
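The underlying issue is that plain JSON.stringify throws on BigInt values. This sketch demonstrates the failure and one common workaround (a replacer that stringifies BigInts); it illustrates the problem jsonStringify solves, not Azle's actual implementation:

```typescript
const data = { bigInt: 0n };

// Plain JSON.stringify cannot serialize a BigInt:
let threw = false;
try {
    JSON.stringify(data);
} catch {
    threw = true; // TypeError: Do not know how to serialize a BigInt
}

// One common workaround: a replacer that converts BigInt to string.
const json = JSON.stringify(data, (_key, value) =>
    typeof value === 'bigint' ? value.toString() : value
);
console.log(threw, json); // true {"bigInt":"0"}
```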
Server
If you need to add canister methods to your HTTP server, the Server function imported from azle/experimental allows you to do so.
Here's an example of a very simple HTTP server:
import { Server } from 'azle/experimental';
import express from 'express';

export default Server(() => {
    const app = express();

    app.get('/http-query', (_req, res) => {
        res.send('http-query-server');
    });

    app.post('/http-update', (_req, res) => {
        res.send('http-update-server');
    });

    return app.listen();
});
You can add canister methods like this:
import { query, Server, text, update } from 'azle/experimental';
import express from 'express';

export default Server(
    () => {
        const app = express();

        app.get('/http-query', (_req, res) => {
            res.send('http-query-server');
        });

        app.post('/http-update', (_req, res) => {
            res.send('http-update-server');
        });

        return app.listen();
    },
    {
        candidQuery: query([], text, () => {
            return 'candidQueryServer';
        }),
        candidUpdate: update([], text, () => {
            return 'candidUpdateServer';
        })
    }
);
The default export of your main module must be the result of calling Server, and the callback argument to Server must return a Node.js http.Server. The main module is specified by the main property of your project's dfx.json file. The dfx.json file must be at the root directory of your project.
The callback argument to Server can be asynchronous:
import { Server } from 'azle/experimental';
import { createServer } from 'http';

export default Server(async () => {
    const message = await asynchronousHelloWorld();

    return createServer((req, res) => {
        res.write(message);
        res.end();
    });
});

async function asynchronousHelloWorld() {
    // do some asynchronous task
    return 'Hello World Asynchronous!';
}
Limitations
For a deeper understanding of possible limitations you may want to refer to The HTTP Gateway Protocol Specification.
- The top-level route /api is currently reserved by the replica locally
- The Transfer-Encoding header is not supported
- gzip responses most likely do not work
- HTTP requests are generally limited to ~2 MiB
- HTTP responses are generally limited to ~3 MiB
- You cannot set HTTP status codes in the 1xx range
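Given the ~3 MiB response limit, it can be worth checking payload sizes before sending. A rough guard sketch (the limit constant mirrors the approximate figure above and should be treated as a soft bound, not an exact API guarantee):

```typescript
// Approximate HTTP response limit from the list above (~3 MiB).
const MAX_RESPONSE_BYTES = 3 * 1024 * 1024;

function fitsInResponse(body: string): boolean {
    // Count UTF-8 bytes, not string length: multi-byte characters count more.
    return new TextEncoder().encode(body).length <= MAX_RESPONSE_BYTES;
}

console.log(fitsInResponse('hello')); // true
```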
Assets TL;DR
You can automatically copy static assets (essentially files and folders) into your canister's filesystem during deploy by using the assets and build_assets properties of the canister object in your project's dfx.json file.
Here's an example that copies the src/frontend/dist directory on the deploying machine into the dist directory of the canister, using the assets and build_assets properties:
{
    "canisters": {
        "backend": {
            "type": "azle",
            "main": "src/backend/index.ts",
            "custom": {
                "experimental": true,
                "assets": [["src/frontend/dist", "dist"]],
                "build_assets": "npm run build"
            }
        }
    }
}
The assets property is an array of tuples, where the first element of the tuple is the source directory on the deploying machine, and the second element of the tuple is the destination directory in the canister. Use assets for total assets up to ~2 GiB in size. We are working on increasing this limit further.
The build_assets property allows you to specify custom terminal commands that will run before Azle copies the assets into the canister. You can use build_assets to build your frontend code for example. In this case we are running npm run build, which refers to an npm script that we have specified in our package.json file.
Once you have loaded assets into your canister, they are accessible from that canister's filesystem. Here's an example of using the Express static middleware to serve a frontend from the canister's filesystem:
import express from 'express';

const app = express();

app.use(express.static('/dist'));

app.listen();
Assuming the /dist directory in the canister has an appropriate index.html file, this canister would serve a frontend at its URL when loaded in a web browser.
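The relationship between the assets tuples and the paths you pass to express.static can be sketched as a small mapping (a hypothetical helper; it assumes destinations are rooted at / in the canister filesystem, as the express.static('/dist') example suggests):

```typescript
// An asset entry is a [sourceOnDeployingMachine, destinationInCanister] tuple.
type AssetTuple = [string, string];

const assets: AssetTuple[] = [
    ['src/frontend/dist', 'dist'],
    ['src/backend/media/audio.ogg', 'media/audio.ogg']
];

// Hypothetical helper: the absolute canister path an asset would land at,
// assuming destinations are rooted at '/'.
function canisterDestination([_source, destination]: AssetTuple): string {
    return `/${destination}`;
}

console.log(assets.map(canisterDestination)); // [ '/dist', '/media/audio.ogg' ]
```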
Authentication TL;DR
Azle canisters can import caller from azle and use it to get the principal (public-key linked identifier) of the initiator of an HTTP request. HTTP requests are anonymous (principal 2vxsx-fae) by default, but authentication with web browsers (and maybe Node.js) can be done using a JWT-like API from azle/experimental/http_client.
First you import toJwt from azle/experimental/http_client:
import { toJwt } from 'azle/experimental/http_client';
Then you use fetch and construct an Authorization header using an @dfinity/agent Identity:
const response = await fetch(
    `http://bkyz2-fmaaa-aaaaa-qaaaq-cai.raw.localhost:8000/whoami`,
    {
        method: 'GET',
        headers: [['Authorization', toJwt(this.identity)]]
    }
);
Here's an example of the frontend of a simple web application using azle/experimental/http_client and Internet Identity:
import { Identity } from '@dfinity/agent';
import { AuthClient } from '@dfinity/auth-client';
import { toJwt } from 'azle/experimental/http_client';
import { html, LitElement } from 'lit';
import { customElement, property } from 'lit/decorators.js';

@customElement('azle-app')
export class AzleApp extends LitElement {
    @property()
    identity: Identity | null = null;

    @property()
    whoami: string = '';

    connectedCallback() {
        super.connectedCallback();
        this.authenticate();
    }

    async authenticate() {
        const authClient = await AuthClient.create();
        const isAuthenticated = await authClient.isAuthenticated();

        if (isAuthenticated === true) {
            this.handleIsAuthenticated(authClient);
        } else {
            await this.handleIsNotAuthenticated(authClient);
        }
    }

    handleIsAuthenticated(authClient: AuthClient) {
        this.identity = authClient.getIdentity();
    }

    async handleIsNotAuthenticated(authClient: AuthClient) {
        await new Promise((resolve, reject) => {
            authClient.login({
                identityProvider: import.meta.env.VITE_IDENTITY_PROVIDER,
                onSuccess: resolve as () => void,
                onError: reject,
                windowOpenerFeatures: `width=500,height=500`
            });
        });

        this.identity = authClient.getIdentity();
    }

    async whoamiUnauthenticated() {
        const response = await fetch(
            `${import.meta.env.VITE_CANISTER_ORIGIN}/whoami`
        );
        const responseText = await response.text();

        this.whoami = responseText;
    }

    async whoamiAuthenticated() {
        const response = await fetch(
            `${import.meta.env.VITE_CANISTER_ORIGIN}/whoami`,
            {
                method: 'GET',
                headers: [['Authorization', toJwt(this.identity)]]
            }
        );
        const responseText = await response.text();

        this.whoami = responseText;
    }

    render() {
        return html`
            <h1>Internet Identity</h1>

            <h2>
                Whoami principal:
                <span id="whoamiPrincipal">${this.whoami}</span>
            </h2>

            <button
                id="whoamiUnauthenticated"
                @click=${this.whoamiUnauthenticated}
            >
                Whoami Unauthenticated
            </button>

            <button
                id="whoamiAuthenticated"
                @click=${this.whoamiAuthenticated}
                .disabled=${this.identity === null}
            >
                Whoami Authenticated
            </button>
        `;
    }
}
Here's an example of the backend of that same simple web application:
import { caller } from 'azle';
import express from 'express';

const app = express();

app.get('/whoami', (req, res) => {
    res.send(caller().toString());
});

app.use(express.static('/dist'));

app.listen();
Authentication
Under-the-hood
Authentication of ICP calls is done through signatures on messages. @dfinity/agent provides very nice abstractions for creating all of the required signatures in the correct formats when calling into canisters on ICP. Unfortunately this requires you to abandon traditional HTTP requests, as you must use the agent's APIs.
Azle attempts to enable you to perform traditional HTTP requests with traditional libraries. Currently Azle focuses on fetch. When importing toJwt, azle/experimental/http_client will overwrite the global fetch function and will intercept fetch requests that have Authorization headers with an Identity as a value. Once intercepted, these requests are turned into @dfinity/agent requests that call the http_request and http_request_update canister methods directly, thus performing all of the required client-side authentication work.
We are working to push for ICP to more natively understand JWTs for authentication, without the need to intercept fetch requests and convert them into agent requests.
fetch TL;DR
Azle canisters use a custom fetch implementation to perform cross-canister calls and to perform HTTPS outcalls.
Here's an example of performing a cross-canister call:
import { serialize } from 'azle/experimental';
import express from 'express';

const app = express();

app.use(express.json());

app.post('/cross-canister-call', async (req, res) => {
    const to: string = req.body.to;
    const amount: number = req.body.amount;

    const response = await fetch(`icp://dfdal-2uaaa-aaaaa-qaama-cai/transfer`, {
        body: serialize({
            candidPath: '/token.did',
            args: [to, amount]
        })
    });
    const responseJson = await response.json();

    res.json(responseJson);
});

app.listen();
Keep these important points in mind when performing a cross-canister call:
- Use the icp:// protocol in the URL
- The canister id of the canister that you are calling immediately follows icp:// in the URL
- The canister method that you are calling immediately follows the canister id in the URL
- The candidPath property of the body is the path to the Candid file defining the method signatures of the canister that you are calling. You must obtain this file and copy it into your canister. See the Assets chapter for info on copying files into your canister
- The args property of the body is an array of the arguments that will be passed to the canister method that you are calling
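The URL rules in the list above can be made concrete with a small parser sketch (a hypothetical helper for illustration, not part of Azle's API):

```typescript
// Hypothetical parser for the icp:// URL shape described above:
// icp://<canister id>/<canister method>
function parseIcpUrl(url: string): { canisterId: string; method: string } {
    const match = url.match(/^icp:\/\/([^/]+)\/(.+)$/);
    if (match === null) {
        throw new Error(`Not a valid icp:// URL: ${url}`);
    }
    return { canisterId: match[1], method: match[2] };
}

const parsed = parseIcpUrl('icp://dfdal-2uaaa-aaaaa-qaama-cai/transfer');
// parsed.canisterId === 'dfdal-2uaaa-aaaaa-qaama-cai', parsed.method === 'transfer'
```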
Here's an example of performing an HTTPS outcall:
import express from 'express';

const app = express();

app.use(express.json());

app.post('/https-outcall', async (_req, res) => {
    const response = await fetch(`https://httpbin.org/headers`, {
        headers: {
            'X-Azle-Request-Key-0': 'X-Azle-Request-Value-0',
            'X-Azle-Request-Key-1': 'X-Azle-Request-Value-1',
            'X-Azle-Request-Key-2': 'X-Azle-Request-Value-2'
        }
    });
    const responseJson = await response.json();

    res.json(responseJson);
});

app.listen();
fetch
Azle has custom fetch implementations for clients and canisters.
The client fetch is used for authentication, and you can learn more about it in the Authentication chapter.
Canister fetch is used to perform cross-canister calls and HTTPS outcalls. There are three main types of calls made with canister fetch:
Cross-canister calls to a candid canister
Examples:
- async_await
- bitcoin
- canister
- ckbtc
- composite_queries
- cross_canister_calls
- cycles
- func_types
- heartbeat
- ic_evm_rpc
- icrc
- ledger_canister
- management_canister
- threshold_ecdsa
- whoami
- recursion
- rejections
- timers
Cross-canister calls to an HTTP canister
We are working on better abstractions for these types of calls. For now you would just make a cross-canister call using icp:// to the http_request and http_request_update methods of the canister that you are calling.
HTTPS outcalls
npm TL;DR
If you want to know if an npm package will work with Azle, just try out the package.
It's extremely difficult to know generally if a package will work unless it has been tried out and tested already. This is due to the complexity of understanding and implementing all required JavaScript, web, Node.js, and OS-level APIs required for an npm package to execute correctly.

To get an idea for which npm packages are currently supported, the Azle examples are full of example code with tests.

You can also look at the wasmedge-quickjs documentation here and here, as wasmedge-quickjs is our implementation for much of the Node.js stdlib.
npm
Azle's goal is to support as many npm packages as possible.
The current reality is that not all npm packages work well with Azle. It is also very difficult to determine which npm packages might work well.
For example, when asked about a specific package, we usually cannot say whether or not a given package "works". To truly know if a package will work for your situation, the easiest thing to do is to install it, import it, and try it out.
If you do want to reason about whether or not a package is likely to work, consider the following:
- Which web or Node.js APIs does the package use?
- Does the package depend on functionality that ICP supports?
- Will the package stay within these limitations?
For example, any kind of networking outside of HTTP is unlikely to work (without modification), because ICP has very limited support for non-ICP networking.
Also, any kind of heavy computation is unlikely to work (without modification), because ICP imposes strict instruction limits per call.
We use wasmedge-quickjs as our implementation for much of the Node.js stdlib. To get a feel for which Node.js standard libraries Azle supports, see here and here.
Tokens TL;DR
Canisters can either:
- Interact with tokens that already exist
- Implement, extend, or proxy tokens
Canisters can use cross-canister calls to interact with tokens implemented using ICRC or other standards. They can also interact with non-ICP tokens through threshold ECDSA.
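As a sketch of what such a cross-canister call's arguments might look like, here is a hypothetical `icrc1_transfer` argument object. The recipient owner is a placeholder string; a real call would use a `Principal` value, and the ledger's Candid file defines the exact types:

```typescript
// Candid opt values are encoded as arrays: [] is None, [value] is Some(value)
const transferArgs = [
    {
        from_subaccount: [],
        to: {
            owner: 'aaaaa-aa', // placeholder; use a Principal instance in real code
            subaccount: []
        },
        amount: 1_000_000n, // in the token's smallest unit
        fee: [],
        memo: [],
        created_at_time: []
    }
];
```

This object would be passed as `args` when serializing the body of a `fetch` to `icp://<ledger canister id>/icrc1_transfer`.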
Canisters can implement tokens from scratch, or extend or proxy implementations already written.
Demergent Labs does not keep any token implementations up-to-date. Here are some old implementations for inspiration and learning:
Tokens
Examples:
- basic_bitcoin
- bitcoin
- bitcoinjs-lib
- bitcore-lib
- ckbtc
- ethereum_json_rpc
- ethers
- ethers_base
- extendable-token-azle
- ic_evm_rpc
- icrc
- ICRC-1
- ledger_canister
Bitcoin
Examples:
There are two main ways to interact with Bitcoin on ICP: through the management canister and through the ckBTC canister.
management canister
To sign Bitcoin transactions using threshold ECDSA and interact with the Bitcoin blockchain directly from ICP, make cross-canister calls to the following methods on the management canister: `ecdsa_public_key`, `sign_with_ecdsa`, `bitcoin_get_balance`, `bitcoin_get_balance_query`, `bitcoin_get_utxos`, `bitcoin_get_utxos_query`, `bitcoin_send_transaction`, `bitcoin_get_current_fee_percentiles`.

To construct your cross-canister calls to these methods, use canister id `aaaaa-aa` and the management canister's Candid type information to construct the arguments to send in the `body` of your `fetch` call.

Here's an example of doing a test cross-canister call to the `bitcoin_get_balance` method:
```typescript
import { serialize } from 'azle/experimental';

// ...

const response = await fetch(`icp://aaaaa-aa/bitcoin_get_balance`, {
    body: serialize({
        args: [
            {
                address: 'bc1q34aq5drpuwy3wgl9lhup9892qp6svr8ldzyy7c',
                min_confirmations: [],
                network: { regtest: null }
            }
        ],
        cycles: 100_000_000n
    })
});
const responseJson = await response.json();

// ...
```
ckBTC
ckBTC is an ICRC canister that wraps underlying bitcoin controlled with threshold ECDSA.
ICRCs are a set of standards for ICP canisters that define the method signatures and corresponding types for those canisters.
You interact with the ckBTC canister by calling its methods. You can do this from the frontend with `@dfinity/agent`, or from an Azle canister through cross-canister calls.

Here's an example of doing a test cross-canister call to the ckBTC `icrc1_balance_of` method:
```typescript
import { ic, serialize } from 'azle/experimental';

// ...

const response = await fetch(
    `icp://mc6ru-gyaaa-aaaar-qaaaq-cai/icrc1_balance_of`,
    {
        body: serialize({
            candidPath: `/candid/icp/icrc.did`,
            args: [
                {
                    owner: ic.id(),
                    subaccount: [
                        padPrincipalWithZeros(ic.caller().toUint8Array())
                    ]
                }
            ]
        })
    }
);
const responseJson = await response.json();

// ...

function padPrincipalWithZeros(principalBlob: Uint8Array): Uint8Array {
    const newUint8Array = new Uint8Array(32);
    newUint8Array.set(principalBlob);
    return newUint8Array;
}
```
Ethereum
Examples:
Databases
The eventual goal for Azle is to support as many database solutions as possible. This is difficult for a number of reasons related to ICP's decentralized computing paradigm and Wasm environment.
SQLite is the current recommended approach to databases with Azle. We plan to provide Postgres support through pglite next.
Azle has good support for SQLite through sql.js. It also has good support for ORMs like Drizzle and TypeORM using sql.js.
The following examples should be very useful as you get started using SQLite in Azle:
Examples:
sql.js
SQLite in Azle works using an asm.js build of SQLite from sql.js without modifications to the library. The database is stored entirely in memory on the heap, giving you ~2 GiB of space. Serialization across upgrades is possible using stable memory like this:
```typescript
// src/index.ts
import {
    init,
    postUpgrade,
    preUpgrade,
    Server,
    StableBTreeMap,
    stableJson
} from 'azle/experimental';
import { Database } from 'sql.js/dist/sql-asm.js';

import { initDb } from './db';
import { initServer } from './server';

export let db: Database;

let stableDbMap = StableBTreeMap<'DATABASE', Uint8Array>(0, stableJson, {
    toBytes: (data: Uint8Array) => data,
    fromBytes: (bytes: Uint8Array) => bytes
});

export default Server(initServer, {
    init: init([], async () => {
        db = await initDb();
    }),
    preUpgrade: preUpgrade(() => {
        stableDbMap.insert('DATABASE', db.export());
    }),
    postUpgrade: postUpgrade([], async () => {
        db = await initDb(stableDbMap.get('DATABASE').Some);
    })
});
```
```typescript
// src/db/index.ts
import initSqlJs, {
    Database,
    QueryExecResult,
    SqlValue
} from 'sql.js/dist/sql-asm.js';

import { migrations } from './migrations';

export async function initDb(
    bytes: Uint8Array = Uint8Array.from([])
): Promise<Database> {
    const SQL = await initSqlJs({});

    let db = new SQL.Database(bytes);

    if (bytes.length === 0) {
        for (const migration of migrations) {
            db.run(migration);
        }
    }

    return db;
}
```
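sql.js's `db.exec(sql)` returns results with parallel `columns` and `values` arrays. A small hypothetical helper like this can zip them into plain row objects, which is often more convenient in route handlers:

```typescript
// Minimal structural type matching the shape of sql.js QueryExecResult
type QueryExecResultLike = {
    columns: string[];
    values: unknown[][];
};

// Zip the parallel columns/values arrays into an array of row objects
function toRows(result: QueryExecResultLike): Record<string, unknown>[] {
    return result.values.map((row) =>
        Object.fromEntries(result.columns.map((column, i) => [column, row[i]]))
    );
}

// Example with a literal result shaped like sql.js output:
const rows = toRows({
    columns: ['id', 'name'],
    values: [
        [1, 'alice'],
        [2, 'bob']
    ]
});
// rows is [{ id: 1, name: 'alice' }, { id: 2, name: 'bob' }]
```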
Debugging TL;DR
If your terminal logs ever say `did not produce a response` or `response failed classification=Status code: 502 Bad Gateway`, it most likely means that your canister has thrown an error and halted execution for that call. Use `console.log` and `try/catch` liberally to track down problems and reveal error information. If your error logs do not have useful messages, use `try/catch` with a `console.log` of the catch error argument to reveal the underlying error message.
Debugging
- console.log and try/catch
- Canister did not produce a response
- No error message
- Final Compiled and Bundled JavaScript
Azle currently has less-than-elegant error reporting. We hope to improve this significantly in the future.
In the meantime, consider the following tips when trying to debug your application.
console.log and try/catch
At the highest level, the most important tip is this: use `console.log` and `try/catch` liberally to track down problems and reveal error information.
Canister did not produce a response
If you ever see an error that looks like this:
Replica Error: reject code CanisterError, reject message IC0506: Canister bkyz2-fmaaa-aaaaa-qaaaq-cai did not produce a response, error code Some("IC0506")
or this:
2024-04-17T15:01:39.194377Z WARN icx_proxy_dev::proxy::agent: Replica Error
2024-04-17T15:01:39.194565Z ERROR tower_http::trace::on_failure: response failed classification=Status code: 502 Bad Gateway latency=61 ms
it most likely means that your canister has thrown an error and halted execution for that call. First check the replica's logs for any error messages. If there are no useful error messages, use console.log and try/catch liberally to track down the source of the error and to reveal more information about the error.
Don't be surprised if you need to `console.log` after each of your program's statements (including dependencies found in `node_modules`) to find out where the error is coming from. And don't be surprised if you need to use `try/catch` with a `console.log` of the catch error argument to reveal useful error messaging.
No error message
You might find yourself in a situation where an error is reported without a useful message like this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre> at <anonymous> (.azle/main.js:110643)<br> at handle (.azle/main.js:73283)<br> at next (.azle/main.js:73452)<br> at dispatch (.azle/main.js:73432)<br> at handle (.azle/main.js:73283)<br> at <anonymous> (.azle/main.js:73655)<br> at process_params (.azle/main.js:73692)<br> at next (.azle/main.js:73660)<br> at expressInit (.azle/main.js:73910)<br> at handle (.azle/main.js:73283)<br> at trim_prefix (.azle/main.js:73684)<br> at <anonymous> (.azle/main.js:73657)<br> at process_params (.azle/main.js:73692)<br> at next (.azle/main.js:73660)<br> at query3 (.azle/main.js:73938)<br> at handle (.azle/main.js:73283)<br> at trim_prefix (.azle/main.js:73684)<br> at <anonymous> (.azle/main.js:73657)<br> at process_params (.azle/main.js:73692)<br> at next (.azle/main.js:73660)<br> at handle (.azle/main.js:73587)<br> at handle (.azle/main.js:76233)<br> at app2 (.azle/main.js:78091)<br> at call (native)<br> at emitTwo (.azle/main.js:9782)<br> at emit2 (.azle/main.js:10023)<br> at httpHandler (.azle/main.js:87618)<br></pre>
</body>
</html>
or like this:
2024-04-17 14:35:30.433501980 UTC: [Canister bkyz2-fmaaa-aaaaa-qaaaq-cai] " at <anonymous> (.azle/main.js:110643)\n at handle (.azle/main.js:73283)\n at next (.azle/main.js:73452)\n at dispatch (.azle/main.js:73432)\n at handle (.azle/main.js:73283)\n at <anonymous> (.azle/main.js:73655)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at expressInit (.azle/main.js:73910)\n at handle (.azle/main.js:73283)\n at trim_prefix (.azle/main.js:73684)\n at <anonymous> (.azle/main.js:73657)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at query3 (.azle/main.js:73938)\n at handle (.azle/main.js:73283)\n at trim_prefix (.azle/main.js:73684)\n at <anonymous> (.azle/main.js:73657)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at handle (.azle/main.js:73587)\n at handle (.azle/main.js:76233)\n at app2 (.azle/main.js:78091)\n at call (native)\n at emitTwo (.azle/main.js:9782)\n at emit2 (.azle/main.js:10023)\n at httpHandler (.azle/main.js:87618)\n"
2024-04-17T14:35:31.983590Z ERROR tower_http::trace::on_failure: response failed classification=Status code: 500 Internal Server Error latency=101 ms
2024-04-17 14:36:34.652587412 UTC: [Canister bkyz2-fmaaa-aaaaa-qaaaq-cai] " at <anonymous> (.azle/main.js:110643)\n at handle (.azle/main.js:73283)\n at next (.azle/main.js:73452)\n at dispatch (.azle/main.js:73432)\n at handle (.azle/main.js:73283)\n at <anonymous> (.azle/main.js:73655)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at expressInit (.azle/main.js:73910)\n at handle (.azle/main.js:73283)\n at trim_prefix (.azle/main.js:73684)\n at <anonymous> (.azle/main.js:73657)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at query3 (.azle/main.js:73938)\n at handle (.azle/main.js:73283)\n at trim_prefix (.azle/main.js:73684)\n at <anonymous> (.azle/main.js:73657)\n at process_params (.azle/main.js:73692)\n at next (.azle/main.js:73660)\n at handle (.azle/main.js:73587)\n at handle (.azle/main.js:76233)\n at app2 (.azle/main.js:78091)\n at call (native)\n at emitTwo (.azle/main.js:9782)\n at emit2 (.azle/main.js:10023)\n at httpHandler (.azle/main.js:87618)\n"
In these situations you might be able to use `try/catch` with a `console.log` of the catch error argument to reveal the underlying error message.

For example, this code without a `try/catch` will log errors without the message `This is the error text`:
```typescript
import express from 'express';

const app = express();

app.get('/hello-world', (_req, res) => {
    throw new Error('This is the error text');

    res.send('Hello World!');
});

app.listen();
```
You can get the message to print in the replica terminal like this:
```typescript
import express from 'express';

const app = express();

app.get('/hello-world', (_req, res) => {
    try {
        throw new Error('This is the error text');

        res.send('Hello World!');
    } catch (error) {
        console.log(error);
    }
});

app.listen();
```
Final Compiled and Bundled JavaScript
Azle compiles and bundles your TypeScript/JavaScript into a final JavaScript file to be included and executed inside of your canister. Inspecting this final JavaScript code may help you to debug your application.
When you see something like `(.azle/main.js:110643)` in your error stack traces, it is a reference to the final compiled and bundled JavaScript file that is actually deployed with and executed by the canister. The right-hand side of `.azle/main.js`, e.g. `:110643`, is the line number in that file.

You can find the file at `[project_name]/.azle/[canister_name]/canister/src/main.js`. If you have the `AZLE_AUTORELOAD` environment variable set to `true` then you should instead look at `[project_name]/.azle/[canister_name]/canister/src/main_reloaded.js`.
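To map a stack-trace line number back to the bundle, you can print the offending line with a little surrounding context. The canister name in this path is a hypothetical placeholder:

```shell
# Print line 110643 of the compiled bundle, with two lines of context,
# if the file exists (the canister name here is a placeholder)
LINE=110643
FILE=.azle/my_canister/canister/src/main.js

if [ -f "$FILE" ]; then
    sed -n "$((LINE - 2)),$((LINE + 2))p" "$FILE"
fi
```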
Limitations TL;DR
There are a number of limitations that you are likely to run into while you develop with Azle on ICP. These are generally the most limiting:
- 5 billion instruction limit for query calls (HTTP GET requests) (~1 second of computation)
- 40 billion instruction limit for update calls (HTTP POST/etc requests) (~10 seconds of computation)
- 2 MiB request size limit
- 3 MiB response size limit
- 4 GiB heap limit
- High request latency relative to traditional web applications (think seconds not milliseconds)
- High costs relative to traditional web applications (think ~10x traditional web costs)
- StableBTreeMap memory id `254` is reserved for the stable memory file system
Read more here for in-depth information on current ICP limitations.
Reference
Autoreload
You can turn on automatic reloading of your canister's final compiled JavaScript by using the `AZLE_AUTORELOAD` environment variable during deploy:

```shell
AZLE_AUTORELOAD=true dfx deploy
```

The autoreload feature watches all `.ts` and `.js` files recursively in the directory with your `dfx.json` file (the root directory of your project), excluding files found in `.azle`, `.dfx`, and `node_modules`.

Autoreload only works properly if you do not change the methods of your canister. HTTP-based canisters will generally work well with autoreload as the query and update methods `http_request` and `http_request_update` will not need to change often. Candid-based canisters with explicit `query` and `update` methods may require manual deploys more often.

Autoreload will not reload assets uploaded through the `assets` property of your `dfx.json`.

Setting `AZLE_AUTORELOAD=true` will create a new `dfx` identity and set it as a controller of your canister. By default it will be called `_azle_file_uploader_identity`. This name can be changed with the `AZLE_UPLOADER_IDENTITY_NAME` environment variable.
Environment Variables
- AZLE_AUTORELOAD
- AZLE_IDENTITY_STORAGE_MODE
- AZLE_INSTRUCTION_COUNT
- AZLE_PROPTEST_NUM_RUNS
- AZLE_PROPTEST_PATH
- AZLE_PROPTEST_QUIET
- AZLE_PROPTEST_SEED
- AZLE_PROPTEST_VERBOSE
- AZLE_TEST_FETCH
- AZLE_UPLOADER_IDENTITY_NAME
- AZLE_VERBOSE
AZLE_AUTORELOAD
Set this to `true` to enable autoreloading of your TypeScript/JavaScript code when making any changes to `.ts` or `.js` files in your project.
AZLE_IDENTITY_STORAGE_MODE
Used for automated testing.
AZLE_INSTRUCTION_COUNT
Set this to `true` to see rough instruction counts just before JavaScript execution completes for calls.
AZLE_PROPTEST_NUM_RUNS
Used for automated testing.
AZLE_PROPTEST_PATH
Used for automated testing.
AZLE_PROPTEST_QUIET
Used for automated testing.
AZLE_PROPTEST_SEED
Used for automated testing.
AZLE_PROPTEST_VERBOSE
Used for automated testing.
AZLE_TEST_FETCH
Used for automated testing.
AZLE_UPLOADER_IDENTITY_NAME
Change the name of the `dfx` identity added as a controller for uploading large assets and autoreload.
AZLE_VERBOSE
Set this to `true` to enable more logging output during `dfx deploy`.
Old Candid-based Documentation
This entire section of the documentation may be out of date
Azle is currently going through a transition to give higher priority to utilizing HTTP, REST, JSON, and other familiar web technologies. This is in contrast to having previously focused on ICP-specific technologies like Candid and explicitly creating `Canister` objects with `query` and `update` methods.
We are calling these two paradigms HTTP-based and Candid-based. Many concepts from the Candid-based documentation are still applicable in the HTTP-based paradigm. The HTTP-based paradigm simply focuses on changing the communication and serialization strategies to be more web-focused and less custom.
Azle (Beta)
Azle is a TypeScript and JavaScript Canister Development Kit (CDK) for the Internet Computer (IC). In other words, it's a TypeScript/JavaScript runtime for building applications (canisters) on the IC.
Disclaimer
Azle stable mode is continuously subjected to intense scrutiny and testing; however, it does not yet have multiple independent security reviews/audits.
Stable Mode
Azle runs in stable mode by default.
This mode is intended for production use after Azle's 1.0 release. Its focus is on API and runtime stability, security, performance, TypeScript and JavaScript language support, the ICP APIs, and Candid remote procedure calls (RPC). There is minimal support for the Node.js standard library, npm ecosystem, and HTTP server functionality.
Experimental Mode
Azle runs in experimental mode through explicitly enabling a flag in `dfx.json` or certain CLI commands.
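As a hypothetical sketch, the flag lives under the canister's `custom` property in `dfx.json`; other required canister properties are omitted here, and you should check Azle's current documentation for the exact shape:

```json
{
    "canisters": {
        "backend": {
            "custom": {
                "experimental": true
            }
        }
    }
}
```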
This mode is intended for developers who are willing to accept the risk of using an alpha or beta project. Its focus is on quickly enabling new features and functionality without requiring the time and other resources necessary to advance them to the stable mode. The Node.js standard library, npm ecosystem, and HTTP server functionality are also major areas of focus.
NOTE: Keep clearly in mind that experimental mode fundamentally changes the Azle Wasm binary. Its APIs and runtime behavior are not guaranteed to be secure or stable. If you enable experimental mode, even if you only use APIs from stable mode, you are accepting a higher risk of bugs, errors, crashes, security exploits, breaking API changes, etc.
Demergent Labs
Azle is currently developed by Demergent Labs, a for-profit company with a grant from DFINITY.
Demergent Labs' vision is to accelerate the adoption of Web3, the Internet Computer, and sustainable open source.
Benefits and drawbacks
Azle and the IC provide unique benefits and drawbacks, and neither is currently suitable for every application use-case.
The following information will help you to determine when Azle and the IC might be beneficial for your use-case.
Benefits
Azle intends to be a full TypeScript and JavaScript environment for the IC (a decentralized cloud platform), with support for all of the TypeScript and JavaScript language and as many relevant environment APIs as possible. These environment APIs will be similar to those available in the Node.js and web browser environments.
One of the core benefits of Azle is that it allows web developers to bring their TypeScript or JavaScript skills to the IC. For example, Azle allows the use of various npm packages and VS Code intellisense.
As for the IC, we believe its main benefits can be broken down into the following categories:
Most of these benefits stem from the decentralized nature of the IC, though the IC is best thought of as a progressively decentralizing cloud platform. As opposed to traditional cloud platforms, its goal is to be owned and controlled by many independent entities.
Ownership
- Full-stack group ownership
- Autonomous ownership
- Permanent APIs
- Credible neutrality
- Reduced platform risk
Full-stack group ownership
The IC allows you to build applications that are controlled directly and only (with some caveats) by a group of people. This is in opposition to most cloud applications written today, which must be under the control of a very limited number of people and often a single legal entity that answers directly to a cloud provider, which itself is a single legal entity.
In the blockchain world, group-owned applications are known as DAOs. As opposed to DAOs built on most blockchains, the IC allows full-stack applications to be controlled by groups. This means that the group fully controls the running instances of the frontend and the backend code.
Autonomous ownership
In addition to allowing applications to be owned by groups of people, the IC also allows applications to be owned by no one. This essentially creates autonomous applications or everlasting processes that execute indefinitely. The IC will allow such an application to run indefinitely, unless it depletes its balance of cycles or the NNS votes to shut it down, neither of which is inevitable.
Permanent APIs
Because most web APIs are owned and operated by individual entities, their fate is tied to that of their owners. If their owners go out of business, then those APIs may cease to exist. If their owners decide that they do not like or agree with certain users, they may restrict their access. In the end, they may decide to shut down or restrict access for arbitrary reasons.
Because the IC allows for group and autonomous ownership of cloud software, the IC is able to produce potentially permanent web APIs. A decentralized group of independent entities will find it difficult to censor API consumers or shut down an API. An autonomous API would take those difficulties to the extreme, as it would continue operating as long as consumers were willing to pay for it.
Credible neutrality
Group and autonomous ownership makes it possible to build neutral cloud software on the IC. This type of software would allow independent parties to coordinate with reduced trust in each other or a single third-party coordinator.
This removes the risk of the third-party coordinator acting in its own self-interest against the interests of the coordinating participants. The coordinating participants would also find it difficult to implement changes that would benefit themselves to the detriment of other participants.
Examples could include mobile app stores, ecommerce marketplaces, and podcast directories.
Reduced platform risk
Because the IC is not owned or controlled by any one entity or individual, the risk of being deplatformed is reduced. This is in opposition to most cloud platforms, where the cloud provider itself generally has the power to arbitrarily remove users from its platform. While deplatforming can still occur on the IC, the only endogenous means of forcefully taking down an application is through an NNS vote.
Security
- Built-in replication
- Built-in authentication
- Built-in firewall/port management
- Built-in sandboxing
- Threshold protocols
- Verifiable source code
- Blockchain integration
Built-in replication
Replication has many benefits that stem from reducing various central points of failure.
The IC is at its core a Byzantine Fault Tolerant replicated compute environment. Applications are deployed to subnets which are composed of nodes running replicas. Each replica is an independent replicated state machine that executes an application's state transitions (usually initiated with HTTP requests) and persists the results.
This replication provides a high level of security out-of-the-box. It is also the foundation of a number of protocols that provide threshold cryptographic operations to IC applications.
Built-in authentication
IC client tooling makes it easy to sign and send messages to the IC, and Internet Identity provides a novel approach to self-custody of private keys. The IC automatically authenticates messages with the public key of the signer, and provides a compact representation of that public key, called a principal, to the application. The principal can be used for authorization purposes. This removes many authentication concerns from the developer.
Built-in firewall/port management
The concept of ports and various other low-level network infrastructure on the IC is abstracted away from the developer. This can greatly reduce application complexity, thus minimizing the chance of introducing vulnerabilities through incorrect configurations. Canisters expose endpoints through various methods, usually query or update methods. Because authentication is also built-in, much of the remaining vulnerability surface area is minimized to implementing correct authorization rules in the canister method endpoints.
Built-in sandboxing
Canisters have at least two layers of sandboxing to protect colocated canisters from each other. All canisters are at their core Wasm modules and thus inherit the built-in Wasm sandbox. In case there is any bug in the underlying implementation of the Wasm execution environment (or a vulnerability in the imported host functionality), there is also an OS-level sandbox. Developers need not do anything to take advantage of these sandboxes.
Threshold protocols
The IC provides a number of threshold protocols that allow groups of independent nodes to perform cryptographic operations. These protocols remove central points of failure while providing familiar and useful cryptographic operations to developers. Included are ECDSA, BLS, VRF-like, and in the future threshold key derivation.
Verifiable source code
IC applications (canisters) are compiled into Wasm and deployed to the IC as Wasm modules. The IC hashes each canister's Wasm binary and stores it for public retrieval. The Wasm binary hash can be retrieved and compared with the hash of an independently compiled Wasm binary derived from available source code. If the hashes match, then one can know with a high degree of certainty that the application is executing the Wasm binary that was compiled from that source code.
Blockchain integration
When compared with web APIs built for the same purpose, the IC provides a high degree of security when integrating with various other blockchains. It has a direct client integration with Bitcoin, allowing applications to query its state with BFT guarantees. A similar integration is coming for Ethereum.
In addition to these blockchain client integrations, a threshold ECDSA protocol (tECDSA) allows the IC to create keys and sign transactions on various ECDSA chains. These chains include Bitcoin and Ethereum, and in the future the protocol may be extended to allow interaction with various EdDSA chains. These direct integrations combined with tECDSA provide a much more secure way to provide blockchain functionality to end users than creating and storing their private keys on traditional cloud infrastructure.
Developer experience
Built-in devops
The IC provides many devops benefits automatically. Though currently limited in its scalability, the protocol attempts to remove the need for developers to concern themselves with concepts such as autoscaling, load balancing, uptime, sandboxing, and firewalls/port management.
Correctly constructed canisters have a simple deploy process and automatically inherit these devops capabilities up to the current scaling limits of the IC. DFINITY engineers are constantly working to remove scalability bottlenecks.
Orthogonal persistence
The IC automatically persists its heap. This creates an extremely convenient way for developers to store application state, by simply writing into global variables in their programming language of choice. This is a great way to get started.
If a canister upgrades its code, swapping out its Wasm binary, then the heap must be cleared. To overcome this limitation, there is a special area of memory called stable memory that persists across these canister upgrades. Special stable data structures provide a familiar API that allows writing into stable memory directly.
All of this together provides the foundation for a very simple persistence experience for the developer. The persistence tools now available and coming to the IC may be simpler than their equivalents on traditional cloud infrastructure.
Drawbacks
It's important to note that both Azle and the IC are early-stage projects. The IC officially launched in May of 2021, and Azle reached beta in April of 2022.
Azle
Some of Azle's main drawbacks can be summarized as follows:
Beta
Azle reached beta in April of 2022. It's an immature project that may have unforeseen bugs and other issues. We're working constantly to improve it. We hope to get to a production-ready 1.0 in 2024. The following are the major blockers to 1.0:
- Extensive automated property test coverage
- Multiple independent security reviews/audits
- Broad npm package support
Security risks
As discussed earlier, these are some things to keep in mind:
- Azle does not yet have extensive automated property tests
- Azle does not yet have multiple independent security reviews/audits
- Azle does not yet have many live, successful, continuously operating applications deployed to the IC
Missing APIs
Azle is not Node.js nor is it V8 running in a web browser. It is using a JavaScript interpreter running in a very new and very different environment. APIs from the Node.js and web browser ecosystems may not be present in Azle. Our goal is to support as many of these APIs as possible over time.
IC
Some of the IC's main drawbacks can be summarized as follows:
- Early
- High latencies
- Limited and expensive compute resources
- Limited scalability
- Lack of privacy
- NNS risk
Early
The IC launched officially in May of 2021. As a relatively new project with an extremely ambitious vision, you can expect a small community, immature tooling, and an unproven track record. Much has been delivered, but many promises are yet to be fulfilled.
High latencies
Any requests that change state on the IC must go through consensus, thus you can expect latencies of a few seconds for these types of requests. When canisters need to communicate with each other across subnets or under heavy load, these latencies can be even longer. Under these circumstances, in the worst case latencies will build up linearly. For example, if canister A calls canister B calls canister C, and these canisters are all on different subnets or under heavy load, then you might need to multiply the latency by the total number of calls.
Limited and expensive compute resources
CPU usage, data storage, and network usage may be more expensive than the equivalent usage on traditional cloud platforms. Combining these costs with the high latencies explained above, it becomes readily apparent that the IC is currently not built for high-performance computing.
Limited scalability
The IC might not be able to scale to the needs of your application. It is constantly seeking to improve scalability bottlenecks, but it will probably not be able to onboard millions of users to your traditional web application.
Lack of privacy
You should assume that all of your application data (unless it is end-to-end encrypted) is accessible to multiple third parties with no direct relationship and limited commitment to you. Currently all canister state sits unencrypted on node operators' machines. Application-layer access controls for data are possible, but motivated node operators will have an easy time getting access to your data.
NNS risk
The NNS has the ability to uninstall any canister and can generally change anything about the IC protocol. The NNS uses a simple liquid democracy based on coin/token voting and follower relationships. At the time of this writing most of the voting power on the NNS follows DFINITY for protocol changes, effectively giving DFINITY write control to the protocol while those follower relationships remain in place. The NNS must mature and decentralize to provide practical and realistic protections to canisters and their users.
Internet Computer Overview
The Internet Computer (IC) is a decentralized cloud platform. Actually, it is better thought of as a progressively decentralizing cloud platform. Its full vision is yet to be fulfilled.
It aims to be owned and operated by many independent entities in many geographies and legal jurisdictions throughout the world. This is in opposition to most traditional cloud platforms today, which are generally owned and operated by one overarching legal entity.
The IC is composed of computer hardware nodes running the IC protocol software. Each running IC protocol software process is known as a replica.
Nodes are assigned into groups known as subnets. Each subnet attempts to maximize its decentralization of nodes according to factors such as data center location and node operator independence.
The subnets vary in size. Generally speaking, the larger the subnet, the more secure it is. Subnets currently range in size from 13 to 40 nodes, with most subnets having 13 nodes.
IC applications, known as canisters, are deployed to specific subnets. They are then accessible through Internet Protocol requests such as HTTP. Each subnet replicates all canisters across all of its replicas. A consensus protocol is run by the replicas to ensure Byzantine Fault Tolerance.
View the IC Dashboard to explore all data centers, subnets, node operators, and many other aspects of the IC.
Canisters Overview
Canisters are Internet Computer (IC) applications. They are the encapsulation of your code and state, and are essentially Wasm modules.
State can be stored on the 4 GiB heap or in a larger 96 GiB location called stable memory. You can store state on the heap using your language's native global variables. You can store state in stable memory using low-level APIs or special stable data structures that behave similarly to native language data structures.
State changes must go through a process called consensus. The consensus process ensures that state changes are Byzantine Fault Tolerant. This process takes a few seconds to complete.
Operations on canister state are exposed to users through canister methods. These methods can be invoked through HTTP requests. Query methods allow state to be read and are low-latency. Update methods allow state to be changed and are higher-latency. Update methods take a few seconds to complete because of the consensus process.
Installation
Windows is only supported through a Linux virtual environment of some kind, such as WSL
It's recommended to use nvm and Node.js 22:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
Restart your terminal and then run:
nvm install 22
Check that the installation went smoothly by looking for clean output from the following command:
node --version
Install the dfx command line tools for managing ICP applications:
DFX_VERSION=0.24.3 sh -ci "$(curl -fsSL https://sdk.dfinity.org/install.sh)"
Check that the installation went smoothly by looking for clean output from the following command:
dfx --version
If after trying to run dfx --version you encounter an error such as dfx: command not found, you might need to add $HOME/bin to your path. Here's an example of doing this in your .bashrc:
echo 'export PATH="$PATH:$HOME/bin"' >> "$HOME/.bashrc"
Hello World
Let's build your first application (canister) with Azle!
Before embarking please ensure you've followed all of the installation instructions, especially noting the build dependencies.
We'll build a simple Hello World canister that shows the basics of importing Azle, exposing a query method, exposing an update method, and storing some state in a global variable. We'll then interact with it from the command line and from our web browser.
Quick Start
We are going to use the Azle new command, which creates a simple example project. First use the new command to create a new project called azle_hello_world:
npx azle new azle_hello_world
Now let's go inside of our project:
cd azle_hello_world
We should install Azle and all of its dependencies:
npm install
Start up your local replica:
dfx start
In another terminal, deploy your canister:
dfx deploy azle_hello_world
Call the setMessage method:
dfx canister call azle_hello_world setMessage '("Hello world!")'
Call the getMessage method:
dfx canister call azle_hello_world getMessage
If you run into an error during deployment, see the common deployment issues section.
See the official azle_hello_world example for more information.
Methodical start
The project directory and file structure
Assuming you're starting completely from scratch, run these commands to setup your project's directory and file structure:
mkdir azle_hello_world
cd azle_hello_world
mkdir src
touch src/index.ts
touch tsconfig.json
touch dfx.json
Now install Azle, which will create your package.json and package-lock.json files:
npm install azle
Open up azle_hello_world in your text editor (we recommend VS Code).
index.ts
Here's the main code of the project, which you should put in the azle_hello_world/src/index.ts file of your canister:
import { Canister, query, text, update, Void } from 'azle/experimental';
// This is a global variable that is stored on the heap
let message = '';
export default Canister({
// Query calls complete quickly because they do not go through consensus
getMessage: query([], text, () => {
return message;
}),
// Update calls take a few seconds to complete
// This is because they persist state changes and go through consensus
setMessage: update([text], Void, (newMessage) => {
message = newMessage; // This change will be persisted
})
});
Let's discuss each section of the code.
import { Canister, query, text, update, Void } from 'azle/experimental';
The code starts off by importing Canister, query, text, update, and Void from azle/experimental. The azle module provides most of the Internet Computer (IC) APIs for your canister.
// This is a global variable that is stored on the heap
let message = '';
We have created a global variable to store the state of our application. This variable is in scope to all of the functions defined in this module. We have set it equal to an empty string.
export default Canister({
...
});
The Canister function allows us to export our canister's definition to the Azle IC environment.
// Query calls complete quickly because they do not go through consensus
getMessage: query([], text, () => {
return message;
}),
We are exposing a canister query method here. This method simply returns our global message variable. We use a CandidType object called text to instruct Azle to encode the return value as a Candid text value. When query methods are called they execute quickly because they do not have to go through consensus.
// Update calls take a few seconds to complete
// This is because they persist state changes and go through consensus
setMessage: update([text], Void, (newMessage) => {
message = newMessage; // This change will be persisted
});
We are exposing an update method here. This method accepts a string from the caller and will store it in our global message variable. We use a CandidType object called text to instruct Azle to decode the newMessage parameter from a Candid text value to a JavaScript string value. Azle will infer the TypeScript type for newMessage. We use a CandidType object called Void to instruct Azle to encode the return value as the absence of a Candid value.
When update methods are called they take a few seconds to complete. This is because they persist changes and go through consensus. A majority of nodes in a subnet must agree on all state changes introduced in calls to update methods.
That's it! We've created a very simple getter/setter Hello World application. But no Hello World project is complete without actually yelling Hello world! To do that, we'll need to set up the rest of our project.
tsconfig.json
Create the following in azle_hello_world/tsconfig.json:
{
"compilerOptions": {
"strict": true,
"target": "ES2020",
"moduleResolution": "node",
"allowJs": true,
"outDir": "HACK_BECAUSE_OF_ALLOW_JS"
}
}
dfx.json
Create the following in azle_hello_world/dfx.json:
{
"canisters": {
"azle_hello_world": {
"type": "custom",
"main": "src/index.ts",
"candid": "src/index.did",
"build": "node_modules/.bin/azle compile azle_hello_world",
"wasm": ".azle/azle_hello_world/azle_hello_world.wasm",
"gzip": true
}
}
}
Local deployment
Let's deploy to our local replica.
First start up the replica:
dfx start --background
Then deploy the canister:
dfx deploy
Common deployment issues
If you run into an error during deployment, see the common deployment issues section.
Interacting with your canister from the command line
Once we've deployed we can ask for our message:
dfx canister call azle_hello_world getMessage
We should see ("")
representing an
empty message.
Now let's yell Hello World!:
dfx canister call azle_hello_world setMessage '("Hello World!")'
Retrieve the message:
dfx canister call azle_hello_world getMessage
We should see ("Hello World!")
.
Interacting with your canister from the web UI
After deploying your canister, you should see output similar to the following in your terminal:
Deployed canisters.
URLs:
Backend canister via Candid interface:
azle_hello_world: http://127.0.0.1:8000/?canisterId=ryjl3-tyaaa-aaaaa-aaaba-cai&id=rrkah-fqaaa-aaaaa-aaaaq-cai
Open up http://127.0.0.1:8000/?canisterId=ryjl3-tyaaa-aaaaa-aaaba-cai&id=rrkah-fqaaa-aaaaa-aaaaq-cai or the equivalent URL from your terminal to access the web UI and interact with your canister.
Deployment
- Starting the local replica
- Deploying to the local replica
- Interacting with your canister
- Deploying to mainnet
- Common deployment issues
There are two main Internet Computer (IC) environments that you will generally interact with: the local replica and mainnet.
When developing on your local machine, our recommended flow is to start up a local replica in your project's root directory and then deploy to it for local testing.
Starting the local replica
Open a terminal and navigate to your project's root directory:
dfx start
Alternatively you can start the local replica as a background process:
dfx start --background
If you want to stop a local replica running in the background:
dfx stop
If you ever see this error after dfx stop:
Error: Failed to kill all processes. Remaining: 627221 626923 627260
Then try this:
sudo kill -9 627221
sudo kill -9 626923
sudo kill -9 627260
If your replica starts behaving strangely, we recommend starting the replica clean, which will clean the dfx state of your project:
dfx start --clean
Deploying to the local replica
To deploy all canisters defined in your dfx.json:
dfx deploy
To deploy an individual canister:
dfx deploy canister_name
Interacting with your canister
As a developer you can generally interact with your canister in three ways:
dfx command line
You can see a more complete reference here.
The commands you are likely to use most frequently are:
# assume a canister named my_canister
# builds and deploys all canisters specified in dfx.json
dfx deploy
# builds all canisters specified in dfx.json
dfx build
# builds and deploys my_canister
dfx deploy my_canister
# builds my_canister
dfx build my_canister
# removes the Wasm binary and state of my_canister
dfx uninstall-code my_canister
# calls the methodName method on my_canister with a string argument
dfx canister call my_canister methodName '("This is a Candid string argument")'
dfx web UI
After deploying your canister, you should see output similar to the following in your terminal:
Deployed canisters.
URLs:
Backend canister via Candid interface:
my_canister: http://127.0.0.1:8000/?canisterId=ryjl3-tyaaa-aaaaa-aaaba-cai&id=rrkah-fqaaa-aaaaa-aaaaq-cai
Open up http://127.0.0.1:8000/?canisterId=ryjl3-tyaaa-aaaaa-aaaba-cai&id=rrkah-fqaaa-aaaaa-aaaaq-cai to access the web UI.
@dfinity/agent
@dfinity/agent is the TypeScript/JavaScript client library for interacting with canisters on the IC. If you are building a client web application, this is probably what you'll want to use.
There are other agents for other languages as well:
Deploying to mainnet
Assuming you are set up with cycles, you are ready to deploy to mainnet.
To deploy all canisters defined in your dfx.json:
dfx deploy --network ic
To deploy an individual canister:
dfx deploy --network ic canister_name
Common deployment issues
If you run into an error during deployment, try the following:
- Ensure that you have followed the instructions correctly in the installation chapter, especially noting the build dependencies
- Start the whole deployment process from scratch by running the following commands: dfx stop (or simply terminate dfx in your terminal), dfx start --clean, npx azle clean, dfx deploy
- Look for more error output by adding the AZLE_VERBOSE=true environment variable into the same process that runs dfx deploy
- Look for errors in each of the files in ~/.config/azle/rust/[rust_version]/logs
- Reach out in the Discord channel
Examples
Azle has many example projects showing nearly all Azle APIs. They can be found in the examples directory of the Azle GitHub repository.
We'll highlight a few of them and some others here:
- Query
- Update
- Primitive Types
- Stable Structures
- Cycles
- Cross Canister Calls
- Management Canister
- Outgoing HTTP Requests
- Incoming HTTP Requests
- Pre and Post Upgrade
- Timers
- Multisig Vault
- ICRC-1
- IC Chainlink Data Feeds
- Bitcoin
- ckBTC
Query Methods
TL;DR
- Created with the query function
- Read-only
- Executed on a single node
- No consensus
- Latency on the order of ~100 milliseconds
- 5 billion Wasm instruction limit
- 4 GiB heap limit
- ~36k queries per second per canister
The most basic way to expose your canister's functionality publicly is through a query method. Here's an example of a simple query method named getString:
import { Canister, query, text } from 'azle/experimental';
export default Canister({
getString: query([], text, () => {
return 'This is a query method!';
})
});
Query methods are defined inside of a call to Canister using the query function.
The first parameter to query is an array of CandidType objects that will be used to decode the Candid bytes of the arguments sent from the client when calling your query method.
The second parameter to query is a CandidType object used to encode the return value of your function to Candid bytes to then be sent back to the client.
The third parameter to query is the function that receives the decoded arguments, performs some computation, and then returns a value to be encoded. The TypeScript signature of this function (parameter and return types) will be inferred from the CandidType arguments in the first and second parameters to query.
getString can be called from the outside world through the IC's HTTP API. You'll usually invoke this API from the dfx command line, dfx web UI, or an agent. From the dfx command line you can call it like this:
dfx canister call my_canister getString
Query methods are read-only. They do not persist any state changes. Take a look at the following example:
import { Canister, query, text, Void } from 'azle/experimental';
let db: {
[key: string]: string;
} = {};
export default Canister({
set: query([text, text], Void, (key, value) => {
db[key] = value;
})
});
Calling set will perform the operation of setting the key property on the db object to value, but after the call finishes that change will be discarded.
This is because query methods are executed on a single node machine and do not go through consensus. This results in lower latencies, perhaps on the order of 100 milliseconds.
There is a limit to how much computation can be done in a single call to a query method. The current query call limit is 5 billion Wasm instructions. Here's an example of a query method that runs the risk of reaching the limit:
import { Canister, nat32, query, text } from 'azle/experimental';
export default Canister({
pyramid: query([nat32], text, (levels) => {
return new Array(levels).fill(0).reduce((acc, _, index) => {
const asterisks = new Array(index + 1).fill('*').join('');
return `${acc}${asterisks}\n`;
}, '');
})
});
From the dfx command line you can call pyramid like this:
dfx canister call my_canister pyramid '(1_000)'
With an argument of 1_000, pyramid will fail with an error ...exceeded the instruction limit for single message execution.
Keep in mind that each query method invocation has up to 4 GiB of heap available.
In terms of query scalability, an individual canister likely has an upper bound of ~36k queries per second.
Update Methods
TL;DR
- Created with the update function
- Read-write
- Executed on many nodes
- Consensus
- Latency ~2-5 seconds
- 20 billion Wasm instruction limit
- 4 GiB heap limit
- 96 GiB stable memory limit
- ~900 updates per second per canister
Update methods are similar to query methods, but state changes can be persisted. Here's an example of a simple update method:
import { Canister, nat64, update } from 'azle/experimental';
let counter = 0n;
export default Canister({
increment: update([], nat64, () => {
return counter++;
})
});
Calling increment will return the current value of counter and then increase its value by 1. Because counter is a global variable, the change will be persisted to the heap, and subsequent query and update calls will have access to the new counter value.
Because the Internet Computer (IC) persists changes with certain fault tolerance guarantees, update calls are executed on many nodes and go through consensus. This leads to latencies of ~2-5 seconds per update call.
Due to the latency and other expenses involved with update methods, it is best to use them only when necessary. Look at the following example:
import { Canister, query, text, update, Void } from 'azle/experimental';
let message = '';
export default Canister({
getMessage: query([], text, () => {
return message;
}),
setMessage: update([text], Void, (newMessage) => {
message = newMessage;
})
});
You'll notice that we use an update method, setMessage, only to perform the change to the global message variable. We use getMessage, a query method, to read the message.
Keep in mind that the heap is limited to 4 GiB, and thus there is an upper bound to global variable storage capacity. You can imagine how a simple database like the following would eventually run out of memory with too many entries:
import {
Canister,
None,
Opt,
query,
Some,
text,
update,
Void
} from 'azle/experimental';
type Db = {
[key: string]: string;
};
let db: Db = {};
export default Canister({
get: query([text], Opt(text), (key) => {
const value = db[key];
return value !== undefined ? Some(value) : None;
}),
set: update([text, text], Void, (key, value) => {
db[key] = value;
})
});
If you need more than 4 GiB of storage, consider taking advantage of the 96 GiB of stable memory. Stable structures like StableBTreeMap give you a nice API for interacting with stable memory. These data structures will be covered in more detail later. Here's a simple example:
import {
Canister,
Opt,
query,
StableBTreeMap,
text,
update,
Void
} from 'azle/experimental';
let db = StableBTreeMap<text, text>(0);
export default Canister({
get: query([text], Opt(text), (key) => {
return db.get(key);
}),
set: update([text, text], Void, (key, value) => {
db.insert(key, value);
})
});
So far we have only seen how state changes can be persisted. State changes can also be discarded by implicit or explicit traps. A trap is an immediate stop to execution with the ability to provide a message to the execution environment.
Traps can be useful for ensuring that multiple operations are either all completed or all disregarded, or in other words atomic. Keep in mind that these guarantees do not hold once cross-canister calls are introduced, but that's a more advanced topic covered later.
Here's an example of how to trap and ensure atomic changes to your database:
import {
Canister,
ic,
Opt,
query,
Record,
StableBTreeMap,
text,
update,
Vec,
Void
} from 'azle/experimental';
const Entry = Record({
key: text,
value: text
});
let db = StableBTreeMap<text, text>(0);
export default Canister({
get: query([text], Opt(text), (key) => {
return db.get(key);
}),
set: update([text, text], Void, (key, value) => {
db.insert(key, value);
}),
setMany: update([Vec(Entry)], Void, (entries) => {
entries.forEach((entry) => {
if (entry.key === 'trap') {
ic.trap('explicit trap');
}
db.insert(entry.key, entry.value);
});
})
});
In addition to ic.trap, an explicit JavaScript throw or any unhandled exception will also trap.
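To build intuition for this rollback behavior, here's a plain TypeScript simulation (this is not an Azle API; runAtomically is a hypothetical helper that mimics how the IC discards all state changes from a call that traps):

```typescript
// Plain TypeScript sketch simulating the IC's rollback semantics:
// if an update call traps, every state change it made is discarded.
type Db = Map<string, string>;

// Hypothetical helper: run `fn` against a copy of the state and
// commit the copy only if `fn` completes without throwing.
function runAtomically(db: Db, fn: (state: Db) => void): Db {
    const snapshot = new Map(db); // work on a copy
    try {
        fn(snapshot);
        return snapshot; // no trap: commit the changes
    } catch {
        return db; // trap: discard all changes from this call
    }
}

let db: Db = new Map([['existing', 'value']]);

db = runAtomically(db, (state) => {
    state.set('a', '1');
    throw new Error('explicit trap'); // like ic.trap or an unhandled exception
});

console.log(db.has('a')); // false: the insert of 'a' was rolled back
console.log(db.get('existing')); // 'value': prior state is intact
```

On the IC itself no snapshotting code is needed; the protocol provides this all-or-nothing behavior for each update call automatically (until cross-canister calls are introduced, as noted above).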
There is a limit to how much computation can be done in a single call to an update method. The current update call limit is 20 billion Wasm instructions. If we modify our database example, we can introduce an update method that runs the risk of reaching the limit:
import {
Canister,
nat64,
Opt,
query,
StableBTreeMap,
text,
update,
Void
} from 'azle/experimental';
let db = StableBTreeMap<text, text>(0);
export default Canister({
get: query([text], Opt(text), (key) => {
return db.get(key);
}),
set: update([text, text], Void, (key, value) => {
db.insert(key, value);
}),
setMany: update([nat64], Void, (numEntries) => {
for (let i = 0; i < numEntries; i++) {
db.insert(i.toString(), i.toString());
}
})
});
From the dfx command line you can call setMany like this:
dfx canister call my_canister setMany '(10_000)'
With an argument of 10_000, setMany will fail with an error ...exceeded the instruction limit for single message execution.
In terms of update scalability, an individual canister likely has an upper bound of ~900 updates per second.
Candid
- text
- blob
- nat
- nat8
- nat16
- nat32
- nat64
- int
- int8
- int16
- int32
- int64
- float32
- float64
- bool
- null
- vec
- opt
- record
- variant
- func
- service
- principal
- reserved
- empty
Candid is an interface description language created by DFINITY. It can be used to define interfaces between services (canisters), allowing canisters and clients written in various languages to easily interact with each other. This interaction occurs through the serialization/encoding and deserialization/decoding of runtime values to and from Candid bytes.
Azle performs automatic encoding and decoding of JavaScript values to and from Candid bytes through the use of various CandidType objects. For example, CandidType objects are used when defining the parameter and return types of your query and update methods. They are also used to define the keys and values of a StableBTreeMap.
It's important to note that the CandidType objects decode Candid bytes into specific JavaScript runtime data structures that may differ in behavior from the description of the actual Candid type. For example, a float32 Candid type is a JavaScript Number, a nat64 is a JavaScript BigInt, and an int is also a JavaScript BigInt. Keep this in mind as it may result in unexpected behavior. Each CandidType object and its equivalent JavaScript runtime value is explained in more detail in The Azle Book Candid reference.
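To make the mapping concrete, here is a plain JavaScript illustration (no Azle APIs involved) of the runtime values these CandidType objects decode to:

```typescript
// Candid float32/float64 decode to JavaScript Number,
// while nat64 and int decode to JavaScript BigInt.
const float32Value: number = 3.14; // Candid float32 -> Number
const nat64Value: bigint = 18_446_744_073_709_551_615n; // Candid nat64 -> BigInt
const intValue: bigint = -42n; // Candid int -> BigInt

console.log(typeof float32Value); // 'number'
console.log(typeof nat64Value); // 'bigint'

// Mixing Number and BigInt requires explicit conversion,
// a common source of unexpected behavior:
const sum = nat64Value + BigInt(Math.trunc(float32Value));
console.log(sum); // 18446744073709551618n
```

Arithmetic that mixes a nat64 field with a float64 field, for example, will throw a TypeError unless you convert one side explicitly as shown.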
A more canonical reference of all Candid types available on the Internet Computer (IC) can be found here.
The following is a simple example showing how to import and use many of the CandidType objects available in Azle:
import {
blob,
bool,
Canister,
float32,
float64,
Func,
int,
int16,
int32,
int64,
int8,
nat,
nat16,
nat32,
nat64,
nat8,
None,
Null,
Opt,
Principal,
query,
Record,
Recursive,
text,
update,
Variant,
Vec
} from 'azle/experimental';
const MyCanister = Canister({
query: query([], bool),
update: update([], text)
});
const Candid = Record({
text: text,
blob: blob,
nat: nat,
nat64: nat64,
nat32: nat32,
nat16: nat16,
nat8: nat8,
int: int,
int64: int64,
int32: int32,
int16: int16,
int8: int8,
float64: float64,
float32: float32,
bool: bool,
null: Null,
vec: Vec(text),
opt: Opt(nat),
record: Record({
firstName: text,
lastName: text,
age: nat8
}),
variant: Variant({
Tag1: Null,
Tag2: Null,
Tag3: int
}),
func: Recursive(() => Func([], Candid, 'query')),
canister: Canister({
query: query([], bool),
update: update([], text)
}),
principal: Principal
});
export default Canister({
candidTypes: query([], Candid, () => {
return {
text: 'text',
blob: Uint8Array.from([]),
nat: 340_282_366_920_938_463_463_374_607_431_768_211_455n,
nat64: 18_446_744_073_709_551_615n,
nat32: 4_294_967_295,
nat16: 65_535,
nat8: 255,
int: 170_141_183_460_469_231_731_687_303_715_884_105_727n,
int64: 9_223_372_036_854_775_807n,
int32: 2_147_483_647,
int16: 32_767,
int8: 127,
float64: Math.E,
float32: Math.PI,
bool: true,
null: null,
vec: ['has one element'],
opt: None,
record: {
firstName: 'John',
lastName: 'Doe',
age: 35
},
variant: {
Tag1: null
},
func: [
Principal.fromText('rrkah-fqaaa-aaaaa-aaaaq-cai'),
'candidTypes'
],
canister: MyCanister(Principal.fromText('aaaaa-aa')),
principal: Principal.fromText('ryjl3-tyaaa-aaaaa-aaaba-cai')
};
})
});
Calling candidTypes with dfx will return:
(
record {
func = func "rrkah-fqaaa-aaaaa-aaaaq-cai".candidTypes;
text = "text";
nat16 = 65_535 : nat16;
nat32 = 4_294_967_295 : nat32;
nat64 = 18_446_744_073_709_551_615 : nat64;
record = record { age = 35 : nat8; lastName = "Doe"; firstName = "John" };
int = 170_141_183_460_469_231_731_687_303_715_884_105_727 : int;
nat = 340_282_366_920_938_463_463_374_607_431_768_211_455 : nat;
opt = null;
vec = vec { "has one element" };
variant = variant { Tag1 };
nat8 = 255 : nat8;
canister = service "aaaaa-aa";
int16 = 32_767 : int16;
int32 = 2_147_483_647 : int32;
int64 = 9_223_372_036_854_775_807 : int64;
null = null : null;
blob = vec {};
bool = true;
principal = principal "ryjl3-tyaaa-aaaaa-aaaba-cai";
int8 = 127 : int8;
float32 = 3.1415927 : float32;
float64 = 2.718281828459045 : float64;
},
)
Stable Structures
TL;DR
- 96 GiB of stable memory
- Persistent across upgrades
- Familiar API
- Must specify memory id
- No migrations per memory id
Stable structures are data structures with familiar APIs that allow write and read access to stable memory. Stable memory is a separate memory location from the heap that currently allows up to 96 GiB of binary storage. Stable memory persists automatically across upgrades.
Persistence on the Internet Computer (IC) is very important to understand. When a canister is upgraded (its code is changed after being initially deployed) its heap is wiped. This includes all global variables.
On the other hand, anything stored in stable memory will be preserved. Writing and reading to and from stable memory can be done with a low-level API, but it is generally easier and preferable to use stable structures.
Azle currently provides one stable structure called StableBTreeMap. It's similar to a JavaScript Map and has most of the common operations you'd expect, such as reading, inserting, and removing values.
Here's how to define a simple StableBTreeMap:
import { nat8, StableBTreeMap, text } from 'azle/experimental';
let map = StableBTreeMap<nat8, text>(0);
This is a StableBTreeMap with a key of type nat8 and a value of type text. Unless you want a default type of any for your key and value, you must explicitly type your StableBTreeMap with type arguments.
StableBTreeMap works by encoding and decoding values under-the-hood, storing and retrieving these values in bytes in stable memory. When writing to and reading from a StableBTreeMap, by default the stableJson Serializable object is used to encode JS values into bytes and to decode JS values from bytes. stableJson uses JSON.stringify and JSON.parse with a custom replacer and reviver to handle many Candid and other values that you will most likely use in your canisters.
You may use other Serializable objects besides stableJson, and you can even create your own. Simply pass in a Serializable object as the second and third parameters to your StableBTreeMap. The second parameter is the key Serializable object and the third parameter is the value Serializable object. For example, the following StableBTreeMap uses the nat8 and text CandidType objects from Azle as Serializable objects. These Serializable objects will encode and decode to and from Candid bytes:
import { nat8, StableBTreeMap, text } from 'azle/experimental';
let map = StableBTreeMap<nat8, text>(0, nat8, text);
All CandidType objects imported from azle are Serializable objects.
A Serializable object simply has a toBytes method that takes a JS value and returns a Uint8Array, and a fromBytes method that takes a Uint8Array and returns a JS value.
Here's an example of how to create your own simple JSON Serializable:
export interface Serializable {
toBytes: (data: any) => Uint8Array;
fromBytes: (bytes: Uint8Array) => any;
}
export function StableSimpleJson(): Serializable {
return {
toBytes(data: any) {
const result = JSON.stringify(data);
return Uint8Array.from(Buffer.from(result));
},
fromBytes(bytes: Uint8Array) {
return JSON.parse(Buffer.from(bytes).toString());
}
};
}
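Outside of a canister, a Serializable like the one above can be sanity-checked with a simple round trip. The definition is repeated here so the snippet is self-contained (Buffer is a Node.js global):

```typescript
// Round-trip check for the simple JSON Serializable shown above.
interface Serializable {
    toBytes: (data: any) => Uint8Array;
    fromBytes: (bytes: Uint8Array) => any;
}

function StableSimpleJson(): Serializable {
    return {
        toBytes(data: any) {
            return Uint8Array.from(Buffer.from(JSON.stringify(data)));
        },
        fromBytes(bytes: Uint8Array) {
            return JSON.parse(Buffer.from(bytes).toString());
        }
    };
}

const serializer = StableSimpleJson();
const original = { name: 'Alice', scores: [1, 2, 3] };
const restored = serializer.fromBytes(serializer.toBytes(original));

console.log(restored.name); // 'Alice'

// Caveat: plain JSON cannot represent values like BigInt or
// Principal, which is why Azle's default stableJson ships a
// custom replacer and reviver.
```

Whatever a Serializable's toBytes produces is exactly what ends up in stable memory, so the round trip above is a useful unit test before wiring a custom Serializable into a StableBTreeMap.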
This StableBTreeMap also has a memory id of 0. Each StableBTreeMap instance must have a unique memory id between 0 and 254. Once a memory id is allocated, it cannot be used with a different StableBTreeMap. This means you can't create another StableBTreeMap using the same memory id, and you can't change the key or value types of an existing StableBTreeMap. This limitation will be addressed to some extent in the future.
Here's an example showing all of the basic StableBTreeMap operations:
import {
bool,
Canister,
nat64,
nat8,
Opt,
query,
StableBTreeMap,
text,
Tuple,
update,
Vec
} from 'azle/experimental';
const Key = nat8;
type Key = typeof Key.tsType;
const Value = text;
type Value = typeof Value.tsType;
let map = StableBTreeMap<Key, Value>(0);
export default Canister({
containsKey: query([Key], bool, (key) => {
return map.containsKey(key);
}),
get: query([Key], Opt(Value), (key) => {
return map.get(key);
}),
insert: update([Key, Value], Opt(Value), (key, value) => {
return map.insert(key, value);
}),
isEmpty: query([], bool, () => {
return map.isEmpty();
}),
items: query([], Vec(Tuple(Key, Value)), () => {
return map.items();
}),
keys: query([], Vec(Key), () => {
return Uint8Array.from(map.keys());
}),
len: query([], nat64, () => {
return map.len();
}),
remove: update([Key], Opt(Value), (key) => {
return map.remove(key);
}),
values: query([], Vec(Value), () => {
return map.values();
})
});
With these basic operations you can build more complex CRUD database applications:
import {
blob,
Canister,
ic,
Err,
nat64,
Ok,
Opt,
Principal,
query,
Record,
Result,
StableBTreeMap,
text,
update,
Variant,
Vec
} from 'azle/experimental';
const User = Record({
id: Principal,
createdAt: nat64,
recordingIds: Vec(Principal),
username: text
});
type User = typeof User.tsType;
const Recording = Record({
id: Principal,
audio: blob,
createdAt: nat64,
name: text,
userId: Principal
});
type Recording = typeof Recording.tsType;
const AudioRecorderError = Variant({
RecordingDoesNotExist: Principal,
UserDoesNotExist: Principal
});
type AudioRecorderError = typeof AudioRecorderError.tsType;
let users = StableBTreeMap<Principal, User>(0);
let recordings = StableBTreeMap<Principal, Recording>(1);
export default Canister({
createUser: update([text], User, (username) => {
const id = generateId();
const user: User = {
id,
createdAt: ic.time(),
recordingIds: [],
username
};
users.insert(user.id, user);
return user;
}),
readUsers: query([], Vec(User), () => {
return users.values();
}),
readUserById: query([Principal], Opt(User), (id) => {
return users.get(id);
}),
deleteUser: update([Principal], Result(User, AudioRecorderError), (id) => {
const userOpt = users.get(id);
if ('None' in userOpt) {
return Err({
UserDoesNotExist: id
});
}
const user = userOpt.Some;
user.recordingIds.forEach((recordingId) => {
recordings.remove(recordingId);
});
users.remove(user.id);
return Ok(user);
}),
createRecording: update(
[blob, text, Principal],
Result(Recording, AudioRecorderError),
(audio, name, userId) => {
const userOpt = users.get(userId);
if ('None' in userOpt) {
return Err({
UserDoesNotExist: userId
});
}
const user = userOpt.Some;
const id = generateId();
const recording: Recording = {
id,
audio,
createdAt: ic.time(),
name,
userId
};
recordings.insert(recording.id, recording);
const updatedUser: User = {
...user,
recordingIds: [...user.recordingIds, recording.id]
};
users.insert(updatedUser.id, updatedUser);
return Ok(recording);
}
),
readRecordings: query([], Vec(Recording), () => {
return recordings.values();
}),
readRecordingById: query([Principal], Opt(Recording), (id) => {
return recordings.get(id);
}),
deleteRecording: update(
[Principal],
Result(Recording, AudioRecorderError),
(id) => {
const recordingOpt = recordings.get(id);
if ('None' in recordingOpt) {
return Err({ RecordingDoesNotExist: id });
}
const recording = recordingOpt.Some;
const userOpt = users.get(recording.userId);
if ('None' in userOpt) {
return Err({
UserDoesNotExist: recording.userId
});
}
const user = userOpt.Some;
const updatedUser: User = {
...user,
recordingIds: user.recordingIds.filter(
(recordingId) =>
recordingId.toText() !== recording.id.toText()
)
};
users.insert(updatedUser.id, updatedUser);
recordings.remove(id);
return Ok(recording);
}
)
});
function generateId(): Principal {
const randomBytes = new Array(29)
.fill(0)
.map((_) => Math.floor(Math.random() * 256));
return Principal.fromUint8Array(Uint8Array.from(randomBytes));
}
The example above shows a very basic audio recording backend application. There are two types of entities that need to be stored, User and Recording. These are represented as Candid records. Each entity gets its own StableBTreeMap:
import {
blob,
Canister,
ic,
Err,
nat64,
Ok,
Opt,
Principal,
query,
Record,
Result,
StableBTreeMap,
text,
update,
Variant,
Vec
} from 'azle/experimental';
const User = Record({
id: Principal,
createdAt: nat64,
recordingIds: Vec(Principal),
username: text
});
type User = typeof User.tsType;
const Recording = Record({
id: Principal,
audio: blob,
createdAt: nat64,
name: text,
userId: Principal
});
type Recording = typeof Recording.tsType;
const AudioRecorderError = Variant({
RecordingDoesNotExist: Principal,
UserDoesNotExist: Principal
});
type AudioRecorderError = typeof AudioRecorderError.tsType;
let users = StableBTreeMap<Principal, User>(0);
let recordings = StableBTreeMap<Principal, Recording>(1);
Notice that each StableBTreeMap has a unique memory id. You can begin to create basic database CRUD functionality by creating one StableBTreeMap per entity. It's up to you to create functionality for querying, filtering, and relations. StableBTreeMap is not a full-featured database solution, but a fundamental building block that may enable you to achieve more advanced database functionality.
Demergent Labs plans to deeply explore database solutions on the IC in the future.
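For example, a simple filter query can be layered over the array returned by a StableBTreeMap's values(). The sketch below runs standalone with plain objects standing in for stored entities; the helper name and the User shape are illustrative, not part of Azle's API:

```typescript
// Illustrative helper: StableBTreeMap has no secondary indexes,
// so filtering entities (e.g. users by username) is a linear scan
// over the array returned by values().
interface UserLike {
    username: string;
}

function filterByUsername<T extends UserLike>(
    values: T[],
    username: string
): T[] {
    return values.filter((user) => user.username === username);
}

// In a canister query method this first argument would be users.values().
const matches = filterByUsername(
    [{ username: 'alice' }, { username: 'bob' }, { username: 'alice' }],
    'alice'
);
console.log(matches.length); // 2
```

The same pattern extends to relations: resolving a User's recordingIds is a series of recordings.get(id) lookups performed by your own code.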
Caveats
float64 values
It appears that some float64 values cannot be successfully stored and retrieved with a StableBTreeMap using stableJson, because of this bug with JSON.parse: https://github.com/bellard/quickjs/issues/206
CandidType Performance
Azle's Candid encoding/decoding implementation is currently not well optimized, and Candid may not be the most optimal encoding format overall, so you may experience heavy instruction usage when performing many StableBTreeMap operations in succession. From our preliminary testing, a rough estimate of the overhead is 1-2 million instructions for a full Candid encoding and decoding of values per StableBTreeMap operation.
For these reasons we recommend using the stableJson Serializable object (the default) instead of CandidType Serializable objects.
Migrations
Migrations must be performed manually by reading the values out of one StableBTreeMap and writing them into another. Once a StableBTreeMap is initialized to a specific memory id, that memory id cannot be changed unless the canister is completely wiped and initialized again.
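In outline, a manual migration is a copy loop over the entries of the old map. The sketch below uses a minimal map-like interface so it runs standalone; in a real canister, oldUsers and newUsers would be two StableBTreeMaps initialized with different memory ids, and all names here are hypothetical:

```typescript
// Minimal stand-in for the StableBTreeMap operations a migration
// needs: items() to read every entry, insert() to write one.
interface MapLike<K, V> {
    items(): [K, V][];
    insert(key: K, value: V): void;
}

// Copy every entry from the old map into the new one, transforming
// each value along the way (e.g. adding a new field).
function migrate<K, OldV, NewV>(
    oldMap: MapLike<K, OldV>,
    newMap: MapLike<K, NewV>,
    transform: (value: OldV) => NewV
): void {
    for (const [key, value] of oldMap.items()) {
        newMap.insert(key, transform(value));
    }
}

// Plain Maps stand in for the stable structures in this sketch.
function mapLikeFrom<K, V>(
    entries: [K, V][]
): MapLike<K, V> & { store: Map<K, V> } {
    const store = new Map<K, V>(entries);
    return {
        store,
        items: () => [...store.entries()],
        insert: (key, value) => void store.set(key, value)
    };
}

const oldUsers = mapLikeFrom<string, { name: string }>([
    ['id1', { name: 'alice' }]
]);
const newUsers = mapLikeFrom<string, { name: string; version: number }>([]);
migrate(oldUsers, newUsers, (user) => ({ ...user, version: 2 }));
console.log(newUsers.store.get('id1')); // { name: 'alice', version: 2 }
```

In a canister you would typically run such a loop from postUpgrade, then stop writing to the old memory id.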
Canister
Canister values do not currently work with the default stableJson implementation. If you must persist Canisters, consider using the Canister CandidType object as your Serializable object in your StableBTreeMap, or create a custom replacer or reviver for stableJson that handles Canister.
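The replacer/reviver approach is the standard JSON round-tripping technique. The sketch below applies it to bigint as a stand-in for Canister so that it runs anywhere; the '__bigint__' tag is an arbitrary convention invented for this example, not azle's actual stableJson internals:

```typescript
// Replacer: called for each value during stringification; tag
// values the default JSON machinery cannot represent.
function replacer(_key: string, value: unknown): unknown {
    return typeof value === 'bigint'
        ? { __bigint__: value.toString() }
        : value;
}

// Reviver: called for each value during parsing; restore tagged
// values to their original type.
function reviver(_key: string, value: unknown): unknown {
    if (
        typeof value === 'object' &&
        value !== null &&
        '__bigint__' in value
    ) {
        return BigInt((value as { __bigint__: string }).__bigint__);
    }
    return value;
}

const encoded = JSON.stringify({ balance: 42n }, replacer);
const decoded = JSON.parse(encoded, reviver);
console.log(decoded.balance === 42n); // true
```

A replacer/reviver pair for Canister would follow the same shape, serializing whatever identifying information (such as the canister's principal) is needed to reconstruct the value.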
Cross-canister
Examples:
- async_await
- bitcoin
- composite_queries
- cross_canister_calls
- cycles
- ethereum_json_rpc
- func_types
- heartbeat
- inline_types
- ledger_canister
- management_canister
- outgoing_http_requests
- threshold_ecdsa
- rejections
- timers
- tuple_types
- whoami
Canisters are generally able to call the query or update methods of other canisters in any subnet. We refer to these types of calls as cross-canister calls.
A cross-canister call begins with a definition of the canister to be called.
Imagine a simple canister called
token_canister
:
import {
Canister,
ic,
nat64,
Opt,
Principal,
StableBTreeMap,
update
} from 'azle/experimental';
let accounts = StableBTreeMap<Principal, nat64>(0);
export default Canister({
transfer: update([Principal, nat64], nat64, (to, amount) => {
const from = ic.caller();
const fromBalance = getBalance(accounts.get(from));
const toBalance = getBalance(accounts.get(to));
accounts.insert(from, fromBalance - amount);
accounts.insert(to, toBalance + amount);
return amount;
})
});
function getBalance(accountOpt: Opt<nat64>): nat64 {
if ('None' in accountOpt) {
return 0n;
} else {
return accountOpt.Some;
}
}
Now that you have the canister definition, you can import and instantiate it in another canister:
import { Canister, ic, nat64, Principal, update } from 'azle/experimental';
import TokenCanister from './token_canister';
const tokenCanister = TokenCanister(
Principal.fromText('r7inp-6aaaa-aaaaa-aaabq-cai')
);
export default Canister({
payout: update([Principal, nat64], nat64, async (to, amount) => {
return await ic.call(tokenCanister.transfer, {
args: [to, amount]
});
})
});
If you don't have the actual definition of the token canister with the canister method implementations, you can always create your own canister definition without method implementations:
import { Canister, ic, nat64, Principal, update } from 'azle/experimental';
const TokenCanister = Canister({
transfer: update([Principal, nat64], nat64)
});
const tokenCanister = TokenCanister(
Principal.fromText('r7inp-6aaaa-aaaaa-aaabq-cai')
);
export default Canister({
payout: update([Principal, nat64], nat64, async (to, amount) => {
return await ic.call(tokenCanister.transfer, {
args: [to, amount]
});
})
});
The IC guarantees that cross-canister calls will return. This means that, generally speaking, you will always receive a response from ic.call. If there are errors during the call, ic.call will throw. Wrapping your cross-canister call in a try...catch allows you to handle these errors.
Let's add some practical error handling to our example code to stop people from stealing tokens.
token_canister
:
import {
Canister,
ic,
nat64,
Opt,
Principal,
StableBTreeMap,
update
} from 'azle/experimental';
let accounts = StableBTreeMap<Principal, nat64>(0);
export default Canister({
transfer: update([Principal, nat64], nat64, (to, amount) => {
const from = ic.caller();
const fromBalance = getBalance(accounts.get(from));
if (amount > fromBalance) {
throw new Error(`${from} has an insufficient balance`);
}
const toBalance = getBalance(accounts.get(to));
accounts.insert(from, fromBalance - amount);
accounts.insert(to, toBalance + amount);
return amount;
})
});
function getBalance(accountOpt: Opt<nat64>): nat64 {
if ('None' in accountOpt) {
return 0n;
} else {
return accountOpt.Some;
}
}
payout_canister
:
import { Canister, ic, nat64, Principal, update } from 'azle/experimental';
import TokenCanister from './index';
const tokenCanister = TokenCanister(
Principal.fromText('bkyz2-fmaaa-aaaaa-qaaaq-cai')
);
export default Canister({
payout: update([Principal, nat64], nat64, async (to, amount) => {
try {
return await ic.call(tokenCanister.transfer, {
args: [to, amount]
});
} catch (error) {
console.log(error);
}
return 0n;
})
});
Throwing allows you to express error conditions and halt execution, but you may find the Result variant to be a better solution for error handling because of its composability and predictability.
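The shape of that pattern can be sketched in plain TypeScript. The Ok/Err helpers below mirror azle's Result variant but are simplified stand-ins so the example runs anywhere:

```typescript
// Simplified stand-ins for azle's Result/Ok/Err: a Result is a
// plain object with either an Ok or an Err property.
type Result<T, E> = { Ok: T } | { Err: E };

const Ok = <T>(value: T): { Ok: T } => ({ Ok: value });
const Err = <E>(error: E): { Err: E } => ({ Err: error });

// A transfer that reports insufficient balance as a value instead
// of throwing, so callers can pattern-match on the outcome.
function transfer(balance: bigint, amount: bigint): Result<bigint, string> {
    if (amount > balance) {
        return Err('insufficient balance');
    }
    return Ok(balance - amount);
}

const outcome = transfer(100n, 250n);
if ('Err' in outcome) {
    console.log(outcome.Err); // 'insufficient balance'
}
```

Because the error is an ordinary return value, it composes: a caller can forward the Err, convert it to another variant, or recover, all without try...catch.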
So far we have only shown a cross-canister call from an update method. Update methods can call other update methods or query methods (but not composite query methods as discussed below). If an update method calls a query method, that query method will be called in replicated mode. Replicated mode engages the consensus process, but for queries the state will still be discarded.
Cross-canister calls can also be initiated from query methods. These are known as composite queries, and in Azle they are simply async query methods. Composite queries can call other composite query methods and regular query methods. Composite queries cannot call update methods.
Here's an example of a composite query method:
import { bool, Canister, ic, Principal, query } from 'azle/experimental';
const SomeCanister = Canister({
queryForBoolean: query([], bool)
});
const someCanister = SomeCanister(
Principal.fromText('ryjl3-tyaaa-aaaaa-aaaba-cai')
);
export default Canister({
querySomeCanister: query([], bool, async () => {
return await ic.call(someCanister.queryForBoolean);
})
});
You can expect cross-canister calls within the same subnet to take up to a few seconds to complete, and cross-canister calls across subnets take about double that time. Composite queries should be much faster, similar to query calls in latency.
If you don't need to wait for your cross-canister call to return, you can use notify:
import { Canister, ic, Principal, update, Void } from 'azle/experimental';
const SomeCanister = Canister({
receiveNotification: update([], Void)
});
const someCanister = SomeCanister(
Principal.fromText('ryjl3-tyaaa-aaaaa-aaaba-cai')
);
export default Canister({
sendNotification: update([], Void, () => {
return ic.notify(someCanister.receiveNotification);
})
});
If you need to send cycles with your cross-canister call, you can add cycles to the config object of ic.notify:
import { Canister, ic, Principal, update, Void } from 'azle/experimental';
const SomeCanister = Canister({
receiveNotification: update([], Void)
});
const someCanister = SomeCanister(
Principal.fromText('ryjl3-tyaaa-aaaaa-aaaba-cai')
);
export default Canister({
sendNotification: update([], Void, () => {
return ic.notify(someCanister.receiveNotification, {
cycles: 1_000_000n
});
})
});
HTTP
This chapter is a work in progress.
Incoming HTTP requests
Examples:
import {
blob,
bool,
Canister,
Func,
nat16,
None,
Opt,
query,
Record,
text,
Tuple,
Variant,
Vec
} from 'azle/experimental';
const Token = Record({
// add whatever fields you'd like
arbitrary_data: text
});
const StreamingCallbackHttpResponse = Record({
body: blob,
token: Opt(Token)
});
export const Callback = Func([text], StreamingCallbackHttpResponse, 'query');
const CallbackStrategy = Record({
callback: Callback,
token: Token
});
const StreamingStrategy = Variant({
Callback: CallbackStrategy
});
type HeaderField = [text, text];
const HeaderField = Tuple(text, text);
const HttpResponse = Record({
status_code: nat16,
headers: Vec(HeaderField),
body: blob,
streaming_strategy: Opt(StreamingStrategy),
upgrade: Opt(bool)
});
const HttpRequest = Record({
method: text,
url: text,
headers: Vec(HeaderField),
body: blob,
certificate_version: Opt(nat16)
});
export default Canister({
http_request: query([HttpRequest], HttpResponse, (req) => {
return {
status_code: 200,
headers: [],
body: Buffer.from('hello'),
streaming_strategy: None,
upgrade: None
};
})
});
Outgoing HTTP requests
Examples:
import {
Canister,
ic,
init,
nat32,
Principal,
query,
Some,
StableBTreeMap,
text,
update
} from 'azle/experimental';
import {
HttpResponse,
HttpTransformArgs,
managementCanister
} from 'azle/canisters/management';
let stableStorage = StableBTreeMap<text, text>(0);
export default Canister({
init: init([text], (ethereumUrl) => {
stableStorage.insert('ethereumUrl', ethereumUrl);
}),
ethGetBalance: update([text], text, async (ethereumAddress) => {
const urlOpt = stableStorage.get('ethereumUrl');
if ('None' in urlOpt) {
throw new Error('ethereumUrl is not defined');
}
const url = urlOpt.Some;
const httpResponse = await ic.call(managementCanister.http_request, {
args: [
{
url,
max_response_bytes: Some(2_000n),
method: {
post: null
},
headers: [],
body: Some(
Buffer.from(
JSON.stringify({
jsonrpc: '2.0',
method: 'eth_getBalance',
params: [ethereumAddress, 'earliest'],
id: 1
}),
'utf-8'
)
),
transform: Some({
function: [ic.id(), 'ethTransform'] as [
Principal,
string
],
context: Uint8Array.from([])
})
}
],
cycles: 50_000_000n
});
return Buffer.from(httpResponse.body.buffer).toString('utf-8');
}),
ethGetBlockByNumber: update([nat32], text, async (number) => {
const urlOpt = stableStorage.get('ethereumUrl');
if ('None' in urlOpt) {
throw new Error('ethereumUrl is not defined');
}
const url = urlOpt.Some;
const httpResponse = await ic.call(managementCanister.http_request, {
args: [
{
url,
max_response_bytes: Some(2_000n),
method: {
post: null
},
headers: [],
body: Some(
Buffer.from(
JSON.stringify({
jsonrpc: '2.0',
method: 'eth_getBlockByNumber',
params: [`0x${number.toString(16)}`, false],
id: 1
}),
'utf-8'
)
),
transform: Some({
function: [ic.id(), 'ethTransform'] as [
Principal,
string
],
context: Uint8Array.from([])
})
}
],
cycles: 50_000_000n
});
return Buffer.from(httpResponse.body.buffer).toString('utf-8');
}),
ethTransform: query([HttpTransformArgs], HttpResponse, (args) => {
return {
...args.response,
headers: []
};
})
});
Management Canister
This chapter is a work in progress.
You can access the management canister like this:
import { blob, Canister, ic, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
randomBytes: update([], blob, async () => {
return await ic.call(managementCanister.raw_rand);
})
});
See the management canister reference section for more information.
Canister Lifecycle
This chapter is a work in progress.
import { Canister, init, postUpgrade, preUpgrade } from 'azle/experimental';
export default Canister({
init: init([], () => {
console.log('runs on first canister install');
}),
preUpgrade: preUpgrade(() => {
console.log('runs before canister upgrade');
}),
postUpgrade: postUpgrade([], () => {
console.log('runs after canister upgrade');
})
});
Timers
This chapter is a work in progress.
import {
blob,
bool,
Canister,
Duration,
ic,
int8,
query,
Record,
text,
TimerId,
update,
Void
} from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
const StatusReport = Record({
single: bool,
inline: int8,
capture: text,
repeat: int8,
singleCrossCanister: blob,
repeatCrossCanister: blob
});
const TimerIds = Record({
single: TimerId,
inline: TimerId,
capture: TimerId,
repeat: TimerId,
singleCrossCanister: TimerId,
repeatCrossCanister: TimerId
});
let statusReport: typeof StatusReport.tsType = {
single: false,
inline: 0,
capture: '',
repeat: 0,
singleCrossCanister: Uint8Array.from([]),
repeatCrossCanister: Uint8Array.from([])
};
export default Canister({
clearTimer: update([TimerId], Void, (timerId) => {
ic.clearTimer(timerId);
console.log(`timer ${timerId} cancelled`);
}),
setTimers: update([Duration, Duration], TimerIds, (delay, interval) => {
const capturedValue = '🚩';
const singleId = ic.setTimer(delay, oneTimeTimerCallback);
const inlineId = ic.setTimer(delay, () => {
statusReport.inline = 1;
console.log('Inline timer called');
});
const captureId = ic.setTimer(delay, () => {
statusReport.capture = capturedValue;
console.log(`Timer captured value ${capturedValue}`);
});
const repeatId = ic.setTimerInterval(interval, () => {
statusReport.repeat++;
console.log(`Repeating timer. Call ${statusReport.repeat}`);
});
const singleCrossCanisterId = ic.setTimer(
delay,
singleCrossCanisterTimerCallback
);
const repeatCrossCanisterId = ic.setTimerInterval(
interval,
repeatCrossCanisterTimerCallback
);
return {
single: singleId,
inline: inlineId,
capture: captureId,
repeat: repeatId,
singleCrossCanister: singleCrossCanisterId,
repeatCrossCanister: repeatCrossCanisterId
};
}),
statusReport: query([], StatusReport, () => {
return statusReport;
})
});
function oneTimeTimerCallback() {
statusReport.single = true;
console.log('oneTimeTimerCallback called');
}
async function singleCrossCanisterTimerCallback() {
console.log('singleCrossCanisterTimerCallback');
statusReport.singleCrossCanister = await ic.call(
managementCanister.raw_rand
);
}
async function repeatCrossCanisterTimerCallback() {
console.log('repeatCrossCanisterTimerCallback');
statusReport.repeatCrossCanister = Uint8Array.from([
...statusReport.repeatCrossCanister,
...(await ic.call(managementCanister.raw_rand))
]);
}
Cycles
This chapter is a work in progress.
Cycles are essentially units of computational resources such as bandwidth, memory, and CPU instructions. Costs are generally metered on the Internet Computer (IC) by cycles. You can see a breakdown of all cycle costs here.
Currently queries do not have any cycle costs.
Most important to you will probably be update costs.
TODO break down some cycle scenarios maybe? Perhaps we should show some of our analyses for different types of applications. Maybe show how to send and receive cycles, exactly how to do it.
Show all of the APIs for sending or receiving cycles?
Perhaps we don't need to do that here, since each API will show this information.
Maybe here we just show the basic concept of cycles, link to the main cycles cost page, and show a few examples of how to break down these costs or estimate these costs.
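One such estimate can be sketched with simple arithmetic: cycles are pegged so that 1 XDR corresponds to 1 trillion cycles. The sample amount below is illustrative:

```typescript
// 1 XDR is pegged to 1 trillion (10^12) cycles on the IC.
const CYCLES_PER_XDR = 1_000_000_000_000n;

// Estimate the XDR cost of a given number of cycles.
function cyclesToXdr(cycles: bigint): number {
    return Number(cycles) / Number(CYCLES_PER_XDR);
}

// e.g. attaching 50 million cycles to a call:
console.log(cyclesToXdr(50_000_000n)); // 0.00005
```

Multiplying the result by the current XDR exchange rate gives an approximate fiat cost.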
Caveats
npm packages
Some npm packages will work and some will not work. It is our long-term goal to support as many npm packages as possible. There are various reasons why an npm package may not currently work, including the small Wasm binary limit of the IC and unimplemented web or Node.js APIs. Feel free to open issues if your npm package does not work in Azle.
JavaScript environment APIs
You may encounter various missing JavaScript environment APIs, such as those you would expect in the web or Node.js environments.
High Candid encoding/decoding costs
Candid encoding/decoding is currently very unoptimized. This will most likely lead to a ~1-2 million extra fixed instruction cost for all calls. Be careful using CandidType Serializable objects with StableBTreeMap, or using any other API or data structure that engages in Candid encoding/decoding.
Promises
Though promises are implemented, the underlying queue that handles asynchronous operations is very simple. This queue will not behave exactly like the queues found in the major JS engines.
JSON.parse and StableBTreeMap float64 values
It appears that some float64 values cannot be successfully stored and retrieved with a StableBTreeMap using stableJson, because of this bug with JSON.parse: https://github.com/bellard/quickjs/issues/206
This will also affect stand-alone usage of JSON.parse.
Reference
- Bitcoin
- Call APIs
- Candid
- Canister APIs
- Canister Methods
- Environment Variables
- Management Canister
- Plugins
- Stable Memory
- Timers
Bitcoin
The Internet Computer (IC) interacts with the Bitcoin blockchain through the use of tECDSA, the Bitcoin integration, and a ledger canister called ckBTC.
tECDSA
tECDSA on the IC allows canisters to request access to threshold ECDSA keypairs on the tECDSA subnet. This functionality is exposed through two management canister methods: ecdsa_public_key and sign_with_ecdsa.
The following are examples using tECDSA:
Bitcoin integration
The Bitcoin integration allows canisters on the IC to interact directly with the Bitcoin network. This functionality is exposed through the following management canister methods: bitcoin_get_balance, bitcoin_get_current_fee_percentiles, bitcoin_get_utxos, and bitcoin_send_transaction.
The following are examples using the Bitcoin integration:
ckBTC
ckBTC is a ledger canister deployed to the IC. It follows the ICRC standard, and can be accessed easily from an Azle canister using azle/canisters/ICRC if you only need the ICRC methods. For access to the full ledger methods you will need to create your own Service for now.
The following are examples using ckBTC:
Call APIs
- accept message
- arg data raw
- call
- call raw
- call raw 128
- call with payment
- call with payment 128
- caller
- method name
- msg cycles accept
- msg cycles accept 128
- msg cycles available
- msg cycles available 128
- msg cycles refunded
- msg cycles refunded 128
- notify
- notify raw
- notify with payment 128
- reject
- reject code
- reject message
- reply
- reply raw
accept message
This section is a work in progress.
Examples:
import { Canister, ic, inspectMessage } from 'azle/experimental';
export default Canister({
inspectMessage: inspectMessage(() => {
ic.acceptMessage();
})
});
arg data raw
This section is a work in progress.
Examples:
import { blob, bool, Canister, ic, int8, query, text } from 'azle/experimental';
export default Canister({
// returns the argument data as bytes.
argDataRaw: query(
[blob, int8, bool, text],
blob,
(arg1, arg2, arg3, arg4) => {
return ic.argDataRaw();
}
)
});
call
This section is a work in progress.
Examples:
- async_await
- bitcoin
- composite_queries
- cross_canister_calls
- cycles
- ethereum_json_rpc
- func_types
- heartbeat
- inline_types
- ledger_canister
- management_canister
- outgoing_http_requests
- threshold_ecdsa
- rejections
- timers
- tuple_types
- whoami
import {
Canister,
ic,
init,
nat64,
postUpgrade,
Principal,
update
} from 'azle/experimental';
const TokenCanister = Canister({
transfer: update([Principal, nat64], nat64)
});
let tokenCanister: typeof TokenCanister;
export default Canister({
init: init([], setup),
postUpgrade: postUpgrade([], setup),
payout: update([Principal, nat64], nat64, async (to, amount) => {
return await ic.call(tokenCanister.transfer, {
args: [to, amount]
});
})
});
function setup() {
tokenCanister = TokenCanister(
Principal.fromText('r7inp-6aaaa-aaaaa-aaabq-cai')
);
}
call raw
This section is a work in progress.
Examples:
import {
Canister,
ic,
nat64,
Principal,
text,
update
} from 'azle/experimental';
export default Canister({
executeCallRaw: update(
[Principal, text, text, nat64],
text,
async (canisterId, method, candidArgs, payment) => {
const candidBytes = await ic.callRaw(
canisterId,
method,
ic.candidEncode(candidArgs),
payment
);
return ic.candidDecode(candidBytes);
}
)
});
call raw 128
This section is a work in progress.
Examples:
import { Canister, ic, nat, Principal, text, update } from 'azle/experimental';
export default Canister({
executeCallRaw128: update(
[Principal, text, text, nat],
text,
async (canisterId, method, candidArgs, payment) => {
const candidBytes = await ic.callRaw128(
canisterId,
method,
ic.candidEncode(candidArgs),
payment
);
return ic.candidDecode(candidBytes);
}
)
});
call with payment
This section is a work in progress.
Examples:
import { blob, Canister, ic, Principal, update, Void } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeInstallCode: update(
[Principal, blob],
Void,
async (canisterId, wasmModule) => {
return await ic.call(managementCanister.install_code, {
args: [
{
mode: { install: null },
canister_id: canisterId,
wasm_module: wasmModule,
arg: Uint8Array.from([])
}
],
cycles: 100_000_000_000n
});
}
)
});
call with payment 128
This section is a work in progress.
Examples:
import { blob, Canister, ic, Principal, update, Void } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeInstallCode: update(
[Principal, blob],
Void,
async (canisterId, wasmModule) => {
return await ic.call128(managementCanister.install_code, {
args: [
{
mode: { install: null },
canister_id: canisterId,
wasm_module: wasmModule,
arg: Uint8Array.from([])
}
],
cycles: 100_000_000_000n
});
}
)
});
caller
This section is a work in progress.
Examples:
import { Canister, ic, Principal, update } from 'azle/experimental';
export default Canister({
// returns the principal of the identity that called this function
caller: update([], Principal, () => {
return ic.caller();
})
});
method name
This section is a work in progress.
Examples:
import { bool, Canister, ic, inspectMessage, update } from 'azle/experimental';
export default Canister({
inspectMessage: inspectMessage(() => {
console.log('inspectMessage called');
if (ic.methodName() === 'accessible') {
ic.acceptMessage();
return;
}
if (ic.methodName() === 'inaccessible') {
return;
}
throw `Method "${ic.methodName()}" not allowed`;
}),
accessible: update([], bool, () => {
return true;
}),
inaccessible: update([], bool, () => {
return false;
}),
alsoInaccessible: update([], bool, () => {
return false;
})
});
msg cycles accept
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
export default Canister({
// Accepts half of the cycles transferred with the call
receiveCycles: update([], nat64, () => {
return ic.msgCyclesAccept(ic.msgCyclesAvailable() / 2n);
})
});
msg cycles accept 128
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
export default Canister({
// Accepts half of the cycles transferred with the call
receiveCycles128: update([], nat64, () => {
return ic.msgCyclesAccept128(ic.msgCyclesAvailable128() / 2n);
})
});
msg cycles available
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
export default Canister({
// Accepts half of the cycles transferred with the call
receiveCycles: update([], nat64, () => {
return ic.msgCyclesAccept(ic.msgCyclesAvailable() / 2n);
})
});
msg cycles available 128
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
export default Canister({
// Accepts half of the cycles transferred with the call
receiveCycles128: update([], nat64, () => {
return ic.msgCyclesAccept128(ic.msgCyclesAvailable128() / 2n);
})
});
msg cycles refunded
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
import { otherCanister } from './other_canister';
export default Canister({
// Reports the number of cycles returned from the Cycles canister
sendCycles: update([], nat64, async () => {
await ic.call(otherCanister.receiveCycles, {
cycles: 1_000_000n
});
return ic.msgCyclesRefunded();
})
});
msg cycles refunded 128
This section is a work in progress.
Examples:
import { Canister, ic, nat64, update } from 'azle/experimental';
import { otherCanister } from './other_canister';
export default Canister({
// Reports the number of cycles returned from the Cycles canister
sendCycles128: update([], nat64, async () => {
await ic.call128(otherCanister.receiveCycles128, {
cycles: 1_000_000n
});
return ic.msgCyclesRefunded128();
})
});
notify
This section is a work in progress.
Examples:
import { Canister, ic, update, Void } from 'azle/experimental';
import { otherCanister } from './otherCanister';
export default Canister({
sendNotification: update([], Void, () => {
return ic.notify(otherCanister.receiveNotification, {
args: ['This is the notification']
});
})
});
notify raw
This section is a work in progress.
Examples:
import { Canister, ic, Principal, update, Void } from 'azle/experimental';
export default Canister({
sendNotification: update([], Void, () => {
return ic.notifyRaw(
Principal.fromText('ryjl3-tyaaa-aaaaa-aaaba-cai'),
'receiveNotification',
Uint8Array.from(ic.candidEncode('()')),
0n
);
})
});
notify with payment 128
This section is a work in progress.
Examples:
import { Canister, ic, update, Void } from 'azle/experimental';
import { otherCanister } from './otherCanister';
export default Canister({
sendCycles128Notify: update([], Void, () => {
return ic.notify(otherCanister.receiveCycles128, {
cycles: 1_000_000n
});
})
});
reject
This section is a work in progress.
Examples:
import { Canister, empty, ic, Manual, query, text } from 'azle/experimental';
export default Canister({
reject: query(
[text],
Manual(empty),
(message) => {
ic.reject(message);
},
{ manual: true }
)
});
reject code
This section is a work in progress.
Examples:
import { Canister, ic, RejectionCode, update } from 'azle/experimental';
import { otherCanister } from './other_canister';
export default Canister({
getRejectionCodeDestinationInvalid: update([], RejectionCode, async () => {
await ic.call(otherCanister.method);
return ic.rejectCode();
})
});
reject message
This section is a work in progress.
Examples:
import { Canister, ic, text, update } from 'azle/experimental';
import { otherCanister } from './other_canister';
export default Canister({
getRejectionMessage: update([], text, async () => {
await ic.call(otherCanister.method);
return ic.rejectMessage();
})
});
reply
This section is a work in progress.
Examples:
import { blob, Canister, ic, Manual, update } from 'azle/experimental';
export default Canister({
updateBlob: update(
[],
Manual(blob),
() => {
ic.reply(
new Uint8Array([83, 117, 114, 112, 114, 105, 115, 101, 33]),
blob
);
},
{ manual: true }
)
});
reply raw
This section is a work in progress.
Examples:
import {
blob,
bool,
Canister,
ic,
int,
Manual,
Null,
Record,
text,
update,
Variant
} from 'azle/experimental';
const Options = Variant({
High: Null,
Medium: Null,
Low: Null
});
export default Canister({
replyRaw: update(
[],
Manual(
Record({
int: int,
text: text,
bool: bool,
blob: blob,
variant: Options
})
),
() => {
ic.replyRaw(
ic.candidEncode(
'(record { "int" = 42; "text" = "text"; "bool" = true; "blob" = blob "Surprise!"; "variant" = variant { Medium } })'
)
);
},
{ manual: true }
)
});
Candid
- blob
- bool
- empty
- float32
- float64
- func
- int
- int8
- int16
- int32
- int64
- nat
- nat8
- nat16
- nat32
- nat64
- null
- opt
- principal
- record
- reserved
- service
- text
- variant
- vec
blob
The CandidType object blob corresponds to the Candid type blob, is inferred to be a TypeScript Uint8Array, and will be decoded into a JavaScript Uint8Array at runtime.
TypeScript or JavaScript:
import { blob, Canister, query } from 'azle/experimental';
export default Canister({
getBlob: query([], blob, () => {
return Uint8Array.from([68, 73, 68, 76, 0, 0]);
}),
printBlob: query([blob], blob, (blob) => {
console.log(typeof blob);
return blob;
})
});
Candid:
service : () -> {
getBlob : () -> (vec nat8) query;
printBlob : (vec nat8) -> (vec nat8) query;
}
dfx:
dfx canister call candid_canister printBlob '(vec { 68; 73; 68; 76; 0; 0; })'
(blob "DIDL\00\00")
dfx canister call candid_canister printBlob '(blob "DIDL\00\00")'
(blob "DIDL\00\00")
bool
The CandidType object bool corresponds to the Candid type bool, is inferred to be a TypeScript boolean, and will be decoded into a JavaScript Boolean at runtime.
TypeScript or JavaScript:
import { bool, Canister, query } from 'azle/experimental';
export default Canister({
getBool: query([], bool, () => {
return true;
}),
printBool: query([bool], bool, (bool) => {
console.log(typeof bool);
return bool;
})
});
Candid:
service : () -> {
getBool : () -> (bool) query;
printBool : (bool) -> (bool) query;
}
dfx:
dfx canister call candid_canister printBool '(true)'
(true)
empty
The CandidType object empty corresponds to the Candid type empty, is inferred to be a TypeScript never, and has no JavaScript value at runtime.
TypeScript or JavaScript:
import { Canister, empty, query } from 'azle/experimental';
export default Canister({
getEmpty: query([], empty, () => {
throw 'Anything you want';
}),
// Note: It is impossible to call this function because it requires an argument
// but there is no way to pass an "empty" value as an argument.
printEmpty: query([empty], empty, (empty) => {
console.log(typeof empty);
throw 'Anything you want';
})
});
Candid:
service : () -> {
getEmpty : () -> (empty) query;
printEmpty : (empty) -> (empty) query;
}
dfx:
dfx canister call candid_canister printEmpty '("You can put anything here")'
Error: Failed to create argument blob.
Caused by: Failed to create argument blob.
Invalid data: Unable to serialize Candid values: type mismatch: "You can put anything here" cannot be of type empty
float32
The CandidType object float32 corresponds to the Candid type float32, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, float32, query } from 'azle/experimental';
export default Canister({
getFloat32: query([], float32, () => {
return Math.PI;
}),
printFloat32: query([float32], float32, (float32) => {
console.log(typeof float32);
return float32;
})
});
Candid:
service : () -> {
getFloat32 : () -> (float32) query;
printFloat32 : (float32) -> (float32) query;
}
dfx:
dfx canister call candid_canister printFloat32 '(3.1415927 : float32)'
(3.1415927 : float32)
float64
The CandidType object float64 corresponds to the Candid type float64, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, float64, query } from 'azle/experimental';
export default Canister({
getFloat64: query([], float64, () => {
return Math.E;
}),
printFloat64: query([float64], float64, (float64) => {
console.log(typeof float64);
return float64;
})
});
Candid:
service : () -> {
getFloat64 : () -> (float64) query;
printFloat64 : (float64) -> (float64) query;
}
dfx:
dfx canister call candid_canister printFloat64 '(2.718281828459045 : float64)'
(2.718281828459045 : float64)
func
Values created by the CandidType function Func correspond to the Candid type func, are inferred to be TypeScript [Principal, string] tuples, and will be decoded into JavaScript arrays with two elements at runtime. The first element is an @dfinity/principal Principal and the second is a JavaScript string. The Principal represents the principal of the canister/service where the function exists, and the string represents the function's name.
A func acts as a callback, allowing the receiver of the func to know which canister instance and method must be used to call back.
TypeScript or JavaScript:
import { Canister, Func, Principal, query, text } from 'azle/experimental';
const BasicFunc = Func([text], text, 'query');
export default Canister({
getBasicFunc: query([], BasicFunc, () => {
return [
Principal.fromText('rrkah-fqaaa-aaaaa-aaaaq-cai'),
'getBasicFunc'
];
}),
printBasicFunc: query([BasicFunc], BasicFunc, (basicFunc) => {
console.log(typeof basicFunc);
return basicFunc;
})
});
Candid:
service : () -> {
getBasicFunc : () -> (func (text) -> (text) query) query;
printBasicFunc : (func (text) -> (text) query) -> (
func (text) -> (text) query,
) query;
}
dfx:
dfx canister call candid_canister printBasicFunc '(func "r7inp-6aaaa-aaaaa-aaabq-cai".getBasicFunc)'
(func "r7inp-6aaaa-aaaaa-aaabq-cai".getBasicFunc)
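Since a decoded func is just a two-element array, a receiver can destructure it to recover the canister principal and the method name. A minimal plain-TypeScript sketch (no Azle imports; the FuncValue type and the toText stub are hypothetical stand-ins for @dfinity/principal's Principal):

```typescript
// Hypothetical stand-in for the decoded func shape described above:
// a two-element array of [principal-like object, method name].
type FuncValue = [{ toText(): string }, string];

// Destructure the tuple: canister principal first, then method name
function describeFunc(func: FuncValue): string {
    const [canisterId, methodName] = func;
    return `${canisterId.toText()}.${methodName}`;
}

const example: FuncValue = [
    { toText: () => 'rrkah-fqaaa-aaaaa-aaaaq-cai' },
    'getBasicFunc'
];

console.log(describeFunc(example)); // rrkah-fqaaa-aaaaa-aaaaq-cai.getBasicFunc
```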
int
The CandidType object int corresponds to the Candid type int, is inferred to be a TypeScript bigint, and will be decoded into a JavaScript BigInt at runtime.
TypeScript or JavaScript:
import { Canister, int, query } from 'azle/experimental';
export default Canister({
getInt: query([], int, () => {
return 170_141_183_460_469_231_731_687_303_715_884_105_727n;
}),
printInt: query([int], int, (int) => {
console.log(typeof int);
return int;
})
});
Candid:
service : () -> {
getInt : () -> (int) query;
printInt : (int) -> (int) query;
}
dfx:
dfx canister call candid_canister printInt '(170_141_183_460_469_231_731_687_303_715_884_105_727 : int)'
(170_141_183_460_469_231_731_687_303_715_884_105_727 : int)
int8
The CandidType object int8 corresponds to the Candid type int8, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, int8, query } from 'azle/experimental';
export default Canister({
getInt8: query([], int8, () => {
return 127;
}),
printInt8: query([int8], int8, (int8) => {
console.log(typeof int8);
return int8;
})
});
Candid:
service : () -> {
getInt8 : () -> (int8) query;
printInt8 : (int8) -> (int8) query;
}
dfx:
dfx canister call candid_canister printInt8 '(127 : int8)'
(127 : int8)
int16
The CandidType object int16 corresponds to the Candid type int16, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, int16, query } from 'azle/experimental';
export default Canister({
getInt16: query([], int16, () => {
return 32_767;
}),
printInt16: query([int16], int16, (int16) => {
console.log(typeof int16);
return int16;
})
});
Candid:
service : () -> {
getInt16 : () -> (int16) query;
printInt16 : (int16) -> (int16) query;
}
dfx:
dfx canister call candid_canister printInt16 '(32_767 : int16)'
(32_767 : int16)
int32
The CandidType object int32 corresponds to the Candid type int32, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, int32, query } from 'azle/experimental';
export default Canister({
getInt32: query([], int32, () => {
return 2_147_483_647;
}),
printInt32: query([int32], int32, (int32) => {
console.log(typeof int32);
return int32;
})
});
Candid:
service : () -> {
getInt32 : () -> (int32) query;
printInt32 : (int32) -> (int32) query;
}
dfx:
dfx canister call candid_canister printInt32 '(2_147_483_647 : int32)'
(2_147_483_647 : int32)
int64
The CandidType object int64 corresponds to the Candid type int64, is inferred to be a TypeScript bigint, and will be decoded into a JavaScript BigInt at runtime.
TypeScript or JavaScript:
import { Canister, int64, query } from 'azle/experimental';
export default Canister({
getInt64: query([], int64, () => {
return 9_223_372_036_854_775_807n;
}),
printInt64: query([int64], int64, (int64) => {
console.log(typeof int64);
return int64;
})
});
Candid:
service : () -> {
getInt64 : () -> (int64) query;
printInt64 : (int64) -> (int64) query;
}
dfx:
dfx canister call candid_canister printInt64 '(9_223_372_036_854_775_807 : int64)'
(9_223_372_036_854_775_807 : int64)
nat
The CandidType object nat corresponds to the Candid type nat, is inferred to be a TypeScript bigint, and will be decoded into a JavaScript BigInt at runtime.
TypeScript or JavaScript:
import { Canister, nat, query } from 'azle/experimental';
export default Canister({
getNat: query([], nat, () => {
return 340_282_366_920_938_463_463_374_607_431_768_211_455n;
}),
printNat: query([nat], nat, (nat) => {
console.log(typeof nat);
return nat;
})
});
Candid:
service : () -> {
getNat : () -> (nat) query;
printNat : (nat) -> (nat) query;
}
dfx:
dfx canister call candid_canister printNat '(340_282_366_920_938_463_463_374_607_431_768_211_455 : nat)'
(340_282_366_920_938_463_463_374_607_431_768_211_455 : nat)
nat8
The CandidType object nat8 corresponds to the Candid type nat8, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, nat8, query } from 'azle/experimental';
export default Canister({
getNat8: query([], nat8, () => {
return 255;
}),
printNat8: query([nat8], nat8, (nat8) => {
console.log(typeof nat8);
return nat8;
})
});
Candid:
service : () -> {
getNat8 : () -> (nat8) query;
printNat8 : (nat8) -> (nat8) query;
}
dfx:
dfx canister call candid_canister printNat8 '(255 : nat8)'
(255 : nat8)
nat16
The CandidType object nat16 corresponds to the Candid type nat16, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, nat16, query } from 'azle/experimental';
export default Canister({
getNat16: query([], nat16, () => {
return 65_535;
}),
printNat16: query([nat16], nat16, (nat16) => {
console.log(typeof nat16);
return nat16;
})
});
Candid:
service : () -> {
getNat16 : () -> (nat16) query;
printNat16 : (nat16) -> (nat16) query;
}
dfx:
dfx canister call candid_canister printNat16 '(65_535 : nat16)'
(65_535 : nat16)
nat32
The CandidType object nat32 corresponds to the Candid type nat32, is inferred to be a TypeScript number, and will be decoded into a JavaScript Number at runtime.
TypeScript or JavaScript:
import { Canister, nat32, query } from 'azle/experimental';
export default Canister({
getNat32: query([], nat32, () => {
return 4_294_967_295;
}),
printNat32: query([nat32], nat32, (nat32) => {
console.log(typeof nat32);
return nat32;
})
});
Candid:
service : () -> {
getNat32 : () -> (nat32) query;
printNat32 : (nat32) -> (nat32) query;
}
dfx:
dfx canister call candid_canister printNat32 '(4_294_967_295 : nat32)'
(4_294_967_295 : nat32)
nat64
The CandidType object nat64 corresponds to the Candid type nat64, is inferred to be a TypeScript bigint, and will be decoded into a JavaScript BigInt at runtime.
TypeScript or JavaScript:
import { Canister, nat64, query } from 'azle/experimental';
export default Canister({
getNat64: query([], nat64, () => {
return 18_446_744_073_709_551_615n;
}),
printNat64: query([nat64], nat64, (nat64) => {
console.log(typeof nat64);
return nat64;
})
});
Candid:
service : () -> {
getNat64 : () -> (nat64) query;
printNat64 : (nat64) -> (nat64) query;
}
dfx:
dfx canister call candid_canister printNat64 '(18_446_744_073_709_551_615 : nat64)'
(18_446_744_073_709_551_615 : nat64)
null
The CandidType object Null corresponds to the Candid type null, is inferred to be the TypeScript null type, and will be decoded into a JavaScript null at runtime.
TypeScript or JavaScript:
import { Canister, Null, query } from 'azle/experimental';
export default Canister({
getNull: query([], Null, () => {
return null;
}),
printNull: query([Null], Null, (null_) => {
console.log(typeof null_);
return null_;
})
});
Candid:
service : () -> {
getNull : () -> (null) query;
printNull : (null) -> (null) query;
}
dfx:
dfx canister call candid_canister printNull '(null)'
(null : null)
opt
Values created by the CandidType function Opt correspond to the Candid type opt, are inferred to be the TypeScript type Opt<T>, and will be decoded into JavaScript Objects at runtime. An Opt is a variant with Some and None cases. If the value of the variant is Some, the Some property of the variant object holds a value of the enclosed type.
TypeScript or JavaScript:
import { bool, Canister, None, Opt, query, Some } from 'azle/experimental';
export default Canister({
getOptSome: query([], Opt(bool), () => {
return Some(true); // equivalent to { Some: true }
}),
getOptNone: query([], Opt(bool), () => {
return None; // equivalent to { None: null }
})
});
Candid:
service : () -> {
getOptNone : () -> (opt bool) query;
getOptSome : () -> (opt bool) query;
}
dfx:
dfx canister call candid_canister getOptSome
(opt true)
dfx canister call candid_canister getOptNone
(null)
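Because a decoded Opt is a plain object with either a Some or a None property, the in operator is a convenient way to branch on it. A minimal plain-TypeScript sketch (no Azle imports; the Opt alias and unwrapOr helper below are hypothetical illustrations of the runtime shape described above):

```typescript
// Hypothetical alias mirroring the runtime shape of an Azle Opt value:
// a variant object with either a Some property or a None property.
type Opt<T> = { Some: T } | { None: null };

// Return the enclosed value if present, otherwise a fallback
function unwrapOr<T>(opt: Opt<T>, fallback: T): T {
    return 'Some' in opt ? opt.Some : fallback;
}

console.log(unwrapOr<boolean>({ Some: true }, false)); // true
console.log(unwrapOr<boolean>({ None: null }, false)); // false
```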
principal
The CandidType object Principal corresponds to the Candid type principal, is inferred to be an @dfinity/principal Principal in TypeScript, and will be decoded into an @dfinity/principal Principal at runtime.
TypeScript or JavaScript:
import { Canister, Principal, query } from 'azle/experimental';
export default Canister({
getPrincipal: query([], Principal, () => {
return Principal.fromText('rrkah-fqaaa-aaaaa-aaaaq-cai');
}),
printPrincipal: query([Principal], Principal, (principal) => {
console.log(typeof principal);
return principal;
})
});
Candid:
service : () -> {
getPrincipal : () -> (principal) query;
printPrincipal : (principal) -> (principal) query;
}
dfx:
dfx canister call candid_canister printPrincipal '(principal "rrkah-fqaaa-aaaaa-aaaaq-cai")'
(principal "rrkah-fqaaa-aaaaa-aaaaq-cai")
record
Objects created by the CandidType function Record correspond to the Candid record type, are inferred to be TypeScript Objects, and will be decoded into JavaScript Objects at runtime. The shape of the object will match the object literal passed to the Record function.
TypeScript or JavaScript:
import { Canister, Principal, query, Record, text } from 'azle/experimental';
const User = Record({
id: Principal,
username: text
});
export default Canister({
getUser: query([], User, () => {
return {
id: Principal.fromUint8Array(Uint8Array.from([0])),
username: 'lastmjs'
};
}),
printUser: query([User], User, (user) => {
console.log(typeof user);
return user;
})
});
Candid:
type User = record { id : principal; username : text };
service : () -> {
getUser : () -> (User) query;
printUser : (User) -> (User) query;
}
dfx:
dfx canister call candid_canister printUser '(record { id = principal "2ibo7-dia"; username = "lastmjs" })'
(record { id = principal "2ibo7-dia"; username = "lastmjs" })
reserved
The CandidType object reserved corresponds to the Candid type reserved, is inferred to be the TypeScript any type, and will be decoded into a JavaScript null at runtime.
TypeScript or JavaScript:
import { Canister, query, reserved } from 'azle/experimental';
export default Canister({
getReserved: query([], reserved, () => {
return 'anything';
}),
printReserved: query([reserved], reserved, (reserved) => {
console.log(typeof reserved);
return reserved;
})
});
Candid:
service : () -> {
getReserved : () -> (reserved) query;
printReserved : (reserved) -> (reserved) query;
}
dfx:
dfx canister call candid_canister printReserved '(null)'
(null : reserved)
service
Values created by the CandidType function Canister correspond to the Candid service type, are inferred to be TypeScript Objects, and will be decoded into JavaScript Objects at runtime. The properties of this object that match the keys of the service's query and update methods can be passed into ic.call and ic.notify to perform cross-canister calls.
TypeScript or JavaScript:
import {
bool,
Canister,
ic,
Principal,
query,
text,
update
} from 'azle/experimental';
const SomeCanister = Canister({
query1: query([], bool),
update1: update([], text)
});
export default Canister({
getService: query([], SomeCanister, () => {
return SomeCanister(Principal.fromText('aaaaa-aa'));
}),
callService: update([SomeCanister], text, (service) => {
return ic.call(service.update1);
})
});
Candid:
type ManualReply = variant { Ok : text; Err : text };
service : () -> {
callService : (
service { query1 : () -> (bool) query; update1 : () -> (text) },
) -> (ManualReply);
getService : () -> (
service { query1 : () -> (bool) query; update1 : () -> (text) },
) query;
}
dfx:
dfx canister call candid_canister getService
(service "aaaaa-aa")
text
The CandidType object text corresponds to the Candid type text, is inferred to be a TypeScript string, and will be decoded into a JavaScript String at runtime.
TypeScript or JavaScript:
import { Canister, query, text } from 'azle/experimental';
export default Canister({
getString: query([], text, () => {
return 'Hello world!';
}),
printString: query([text], text, (string) => {
console.log(typeof string);
return string;
})
});
Candid:
service : () -> {
getString : () -> (text) query;
printString : (text) -> (text) query;
}
dfx:
dfx canister call candid_canister printString '("Hello world!")'
("Hello world!")
variant
Objects created by the CandidType function Variant correspond to the Candid variant type, are inferred to be TypeScript Objects, and will be decoded into JavaScript Objects at runtime. The shape of the object will match the object literal passed to the Variant function; however, it will contain only one of the enumerated properties.
TypeScript or JavaScript:
import { Canister, Null, query, Variant } from 'azle/experimental';
const Emotion = Variant({
Happy: Null,
Indifferent: Null,
Sad: Null
});
const Reaction = Variant({
Fire: Null,
ThumbsUp: Null,
Emotion: Emotion
});
export default Canister({
getReaction: query([], Reaction, () => {
return {
Fire: null
};
}),
printReaction: query([Reaction], Reaction, (reaction) => {
console.log(typeof reaction);
return reaction;
})
});
Candid:
type Emotion = variant { Sad; Indifferent; Happy };
type Reaction = variant { Emotion : Emotion; Fire; ThumbsUp };
service : () -> {
getReaction : () -> (Reaction) query;
printReaction : (Reaction) -> (Reaction) query;
}
dfx:
dfx canister call candid_canister printReaction '(variant { Fire })'
(variant { Fire })
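Because a decoded variant contains exactly one of its enumerated properties, the in operator is a convenient way to branch on the case. A minimal plain-TypeScript sketch (no Azle imports; the type aliases and describeReaction helper are hypothetical, mirroring the Reaction variant above):

```typescript
// Hypothetical aliases mirroring the runtime shape of the Variant above
type Emotion = { Happy: null } | { Indifferent: null } | { Sad: null };
type Reaction = { Fire: null } | { ThumbsUp: null } | { Emotion: Emotion };

// Branch on whichever single case the variant object carries
function describeReaction(reaction: Reaction): string {
    if ('Fire' in reaction) return 'fire';
    if ('ThumbsUp' in reaction) return 'thumbs up';
    return `emotion: ${Object.keys(reaction.Emotion)[0]}`;
}

console.log(describeReaction({ Fire: null })); // fire
console.log(describeReaction({ Emotion: { Happy: null } })); // emotion: Happy
```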
vec
Values created by the CandidType function Vec correspond to the Candid type vec, are inferred to be TypeScript arrays (T[]), and will be decoded into JavaScript arrays of the specified type at runtime (except for Vec(nat8), which will become a Uint8Array; for that reason it is recommended to use the blob type instead of Vec(nat8)).
TypeScript or JavaScript:
import { Canister, int32, Vec, query } from 'azle/experimental';
export default Canister({
getNumbers: query([], Vec(int32), () => {
return [0, 1, 2, 3];
}),
printNumbers: query([Vec(int32)], Vec(int32), (numbers) => {
console.log(typeof numbers);
return numbers;
})
});
Candid:
service : () -> {
getNumbers : () -> (vec int32) query;
printNumbers : (vec int32) -> (vec int32) query;
}
dfx:
dfx canister call candid_canister printNumbers '(vec { 0 : int32; 1 : int32; 2 : int32; 3 : int32 })'
(vec { 0 : int32; 1 : int32; 2 : int32; 3 : int32 })
Canister APIs
- candid decode
- candid encode
- canister balance
- canister balance 128
- canister version
- canister id
- data certificate
- instruction counter
- is controller
- performance counter
- set certified data
- time
- trap
candid decode
This section is a work in progress.
Examples:
import { blob, Canister, ic, query, text } from 'azle/experimental';
export default Canister({
// decodes Candid bytes to a Candid string
candidDecode: query([blob], text, (candidEncoded) => {
return ic.candidDecode(candidEncoded);
})
});
candid encode
This section is a work in progress.
Examples:
import { blob, Canister, ic, query, text } from 'azle/experimental';
export default Canister({
// encodes a Candid string to Candid bytes
candidEncode: query([text], blob, (candidString) => {
return ic.candidEncode(candidString);
})
});
canister balance
This section is a work in progress.
Examples:
import { Canister, ic, nat64, query } from 'azle/experimental';
export default Canister({
// returns the amount of cycles available in the canister
canisterBalance: query([], nat64, () => {
return ic.canisterBalance();
})
});
canister balance 128
This section is a work in progress.
Examples:
import { Canister, ic, nat, query } from 'azle/experimental';
export default Canister({
// returns the amount of cycles available in the canister
canisterBalance128: query([], nat, () => {
return ic.canisterBalance128();
})
});
canister version
This section is a work in progress.
Examples:
import { Canister, ic, nat64, query } from 'azle/experimental';
export default Canister({
// returns the canister's version number
canisterVersion: query([], nat64, () => {
return ic.canisterVersion();
})
});
canister id
This section is a work in progress.
Examples:
import { Canister, ic, Principal, query } from 'azle/experimental';
export default Canister({
// returns this canister's id
id: query([], Principal, () => {
return ic.id();
})
});
data certificate
This section is a work in progress.
Examples:
import { blob, Canister, ic, Opt, query } from 'azle/experimental';
export default Canister({
// When called from a query call, returns the data certificate
// authenticating certified_data set by this canister. Returns None if not
// called from a query call.
dataCertificate: query([], Opt(blob), () => {
return ic.dataCertificate();
})
});
instruction counter
This section is a work in progress.
Examples:
import { Canister, ic, nat64, query } from 'azle/experimental';
export default Canister({
// Returns the number of instructions that the canister executed since the
// last entry point.
instructionCounter: query([], nat64, () => {
return ic.instructionCounter();
})
});
is controller
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, query } from 'azle/experimental';
export default Canister({
// determines whether the given principal is a controller of the canister
isController: query([Principal], bool, (principal) => {
return ic.isController(principal);
})
});
performance counter
This section is a work in progress.
Examples:
import { Canister, ic, nat64, query } from 'azle/experimental';
export default Canister({
performanceCounter: query([], nat64, () => {
return ic.performanceCounter(0);
})
});
set certified data
This section is a work in progress.
Examples:
import { blob, Canister, ic, update, Void } from 'azle/experimental';
export default Canister({
// sets up to 32 bytes of certified data
setCertifiedData: update([blob], Void, (data) => {
ic.setCertifiedData(data);
})
});
time
This section is a work in progress.
Examples:
import { Canister, ic, nat64, query } from 'azle/experimental';
export default Canister({
// returns the current timestamp in nanoseconds since the UNIX epoch
time: query([], nat64, () => {
return ic.time();
})
});
trap
This section is a work in progress.
Examples:
import { bool, Canister, ic, query, text } from 'azle/experimental';
export default Canister({
// traps with a message, stopping execution and discarding all state within the call
trap: query([text], bool, (message) => {
ic.trap(message);
return true;
})
});
Canister Methods
- heartbeat
- http_request
- http_request_update
- init
- inspect message
- post upgrade
- pre upgrade
- query
- update
heartbeat
This section is a work in progress.
Examples:
import { Canister, heartbeat } from 'azle/experimental';
export default Canister({
heartbeat: heartbeat(() => {
console.log('this runs ~1 time per second');
})
});
http_request
This section is a work in progress.
Examples:
import {
blob,
bool,
Canister,
Func,
nat16,
None,
Opt,
query,
Record,
text,
Tuple,
Variant,
Vec
} from 'azle/experimental';
const Token = Record({
// add whatever fields you'd like
arbitrary_data: text
});
const StreamingCallbackHttpResponse = Record({
body: blob,
token: Opt(Token)
});
export const Callback = Func([text], StreamingCallbackHttpResponse, 'query');
const CallbackStrategy = Record({
callback: Callback,
token: Token
});
const StreamingStrategy = Variant({
Callback: CallbackStrategy
});
type HeaderField = [text, text];
const HeaderField = Tuple(text, text);
const HttpResponse = Record({
status_code: nat16,
headers: Vec(HeaderField),
body: blob,
streaming_strategy: Opt(StreamingStrategy),
upgrade: Opt(bool)
});
const HttpRequest = Record({
method: text,
url: text,
headers: Vec(HeaderField),
body: blob,
certificate_version: Opt(nat16)
});
export default Canister({
http_request: query([HttpRequest], HttpResponse, (req) => {
return {
status_code: 200,
headers: [],
body: Buffer.from('hello'),
streaming_strategy: None,
upgrade: None
};
})
});
http_request_update
This section is a work in progress.
Examples:
import {
blob,
bool,
Canister,
Func,
nat16,
None,
Opt,
Record,
text,
Tuple,
update,
Variant,
Vec
} from 'azle/experimental';
const Token = Record({
// add whatever fields you'd like
arbitrary_data: text
});
const StreamingCallbackHttpResponse = Record({
body: blob,
token: Opt(Token)
});
export const Callback = Func([text], StreamingCallbackHttpResponse, 'query');
const CallbackStrategy = Record({
callback: Callback,
token: Token
});
const StreamingStrategy = Variant({
Callback: CallbackStrategy
});
type HeaderField = [text, text];
const HeaderField = Tuple(text, text);
const HttpResponse = Record({
status_code: nat16,
headers: Vec(HeaderField),
body: blob,
streaming_strategy: Opt(StreamingStrategy),
upgrade: Opt(bool)
});
const HttpRequest = Record({
method: text,
url: text,
headers: Vec(HeaderField),
body: blob,
certificate_version: Opt(nat16)
});
export default Canister({
http_request_update: update([HttpRequest], HttpResponse, (req) => {
return {
status_code: 200,
headers: [],
body: Buffer.from('hello'),
streaming_strategy: None,
upgrade: None
};
})
});
init
This section is a work in progress.
Examples:
import { Canister, init } from 'azle/experimental';
export default Canister({
init: init([], () => {
console.log('This runs once when the canister is first initialized');
})
});
inspect message
This section is a work in progress.
Examples:
import { bool, Canister, ic, inspectMessage, update } from 'azle/experimental';
export default Canister({
inspectMessage: inspectMessage(() => {
console.log('inspectMessage called');
if (ic.methodName() === 'accessible') {
ic.acceptMessage();
return;
}
if (ic.methodName() === 'inaccessible') {
return;
}
throw `Method "${ic.methodName()}" not allowed`;
}),
accessible: update([], bool, () => {
return true;
}),
inaccessible: update([], bool, () => {
return false;
}),
alsoInaccessible: update([], bool, () => {
return false;
})
});
post upgrade
This section is a work in progress.
Examples:
import { Canister, postUpgrade } from 'azle/experimental';
export default Canister({
postUpgrade: postUpgrade([], () => {
console.log('This runs after every canister upgrade');
})
});
pre upgrade
This section is a work in progress.
Examples:
import { Canister, preUpgrade } from 'azle/experimental';
export default Canister({
preUpgrade: preUpgrade(() => {
console.log('This runs before every canister upgrade');
})
});
query
This section is a work in progress.
import { Canister, query, text } from 'azle/experimental';
export default Canister({
simpleQuery: query([], text, () => {
return 'This is a query method';
})
});
update
This section is a work in progress.
import { Canister, query, text, update, Void } from 'azle/experimental';
let message = '';
export default Canister({
getMessage: query([], text, () => {
return message;
}),
setMessage: update([text], Void, (newMessage) => {
message = newMessage;
})
});
Environment Variables
You can provide environment variables to Azle
canisters by specifying their names in your
dfx.json
file and then using the
process.env
object in Azle. Be
aware that the environment variables that you
specify in your dfx.json
file will
be included in plain text in your canister's
Wasm binary.
dfx.json
Modify your dfx.json
file with the
env
property to specify which
environment variables you would like included in
your Azle canister's binary. In this case,
CANISTER1_PRINCIPAL
and
CANISTER2_PRINCIPAL
will be
included:
{
"canisters": {
"canister1": {
"type": "azle",
"main": "src/canister1/index.ts",
"declarations": {
"output": "test/dfx_generated/canister1",
"node_compatibility": true
},
"custom": {
"experimental": true,
"candid_gen": "http",
"env": ["CANISTER1_PRINCIPAL", "CANISTER2_PRINCIPAL"]
}
}
}
}
process.env
You can access the specified environment variables in Azle like so:
import { Canister, query, text } from 'azle/experimental';
export default Canister({
canister1PrincipalEnvVar: query([], text, () => {
return (
process.env.CANISTER1_PRINCIPAL ??
'process.env.CANISTER1_PRINCIPAL is undefined'
);
}),
canister2PrincipalEnvVar: query([], text, () => {
return (
process.env.CANISTER2_PRINCIPAL ??
'process.env.CANISTER2_PRINCIPAL is undefined'
);
})
});
Management Canister
- bitcoin_get_balance
- bitcoin_get_current_fee_percentiles
- bitcoin_get_utxos
- bitcoin_send_transaction
- canister_info
- canister_status
- create_canister
- delete_canister
- deposit_cycles
- ecdsa_public_key
- http_request
- install_code
- provisional_create_canister_with_cycles
- provisional_top_up_canister
- raw_rand
- sign_with_ecdsa
- start_canister
- stop_canister
- uninstall_code
- update_settings
bitcoin_get_balance
This section is a work in progress.
Examples:
import { Canister, ic, None, text, update } from 'azle/experimental';
import { managementCanister, Satoshi } from 'azle/canisters/management';
const BITCOIN_API_CYCLE_COST = 100_000_000n;
export default Canister({
getBalance: update([text], Satoshi, async (address) => {
return await ic.call(managementCanister.bitcoin_get_balance, {
args: [
{
address,
min_confirmations: None,
network: { Regtest: null }
}
],
cycles: BITCOIN_API_CYCLE_COST
});
})
});
bitcoin_get_current_fee_percentiles
This section is a work in progress.
Examples:
import { Canister, ic, update, Vec } from 'azle/experimental';
import {
managementCanister,
MillisatoshiPerByte
} from 'azle/canisters/management';
const BITCOIN_API_CYCLE_COST = 100_000_000n;
export default Canister({
getCurrentFeePercentiles: update([], Vec(MillisatoshiPerByte), async () => {
return await ic.call(
managementCanister.bitcoin_get_current_fee_percentiles,
{
args: [
{
network: { Regtest: null }
}
],
cycles: BITCOIN_API_CYCLE_COST
}
);
})
});
bitcoin_get_utxos
This section is a work in progress.
Examples:
import { Canister, ic, None, text, update } from 'azle/experimental';
import { GetUtxosResult, managementCanister } from 'azle/canisters/management';
const BITCOIN_API_CYCLE_COST = 100_000_000n;
export default Canister({
getUtxos: update([text], GetUtxosResult, async (address) => {
return await ic.call(managementCanister.bitcoin_get_utxos, {
args: [
{
address,
filter: None,
network: { Regtest: null }
}
],
cycles: BITCOIN_API_CYCLE_COST
});
})
});
bitcoin_send_transaction
This section is a work in progress.
Examples:
import { blob, bool, Canister, ic, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
const BITCOIN_BASE_TRANSACTION_COST = 5_000_000_000n;
const BITCOIN_CYCLE_COST_PER_TRANSACTION_BYTE = 20_000_000n;
export default Canister({
sendTransaction: update([blob], bool, async (transaction) => {
const transactionFee =
BITCOIN_BASE_TRANSACTION_COST +
BigInt(transaction.length) *
BITCOIN_CYCLE_COST_PER_TRANSACTION_BYTE;
await ic.call(managementCanister.bitcoin_send_transaction, {
args: [
{
transaction,
network: { Regtest: null }
}
],
cycles: transactionFee
});
return true;
})
});
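The cycle fee computed above is simple bigint arithmetic: a flat base cost plus a per-byte cost. As a standalone sketch (constants copied from the example; they are illustrative, not authoritative pricing):

```typescript
// Constants copied from the example above (illustrative values only)
const BITCOIN_BASE_TRANSACTION_COST = 5_000_000_000n;
const BITCOIN_CYCLE_COST_PER_TRANSACTION_BYTE = 20_000_000n;

// Fee = base cost + (transaction size in bytes * per-byte cost)
function transactionFee(transactionByteLength: number): bigint {
    return (
        BITCOIN_BASE_TRANSACTION_COST +
        BigInt(transactionByteLength) * BITCOIN_CYCLE_COST_PER_TRANSACTION_BYTE
    );
}

console.log(transactionFee(250)); // 10000000000n
```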
canister_status
This section is a work in progress.
Examples:
import { Canister, ic, update } from 'azle/experimental';
import {
CanisterStatusArgs,
CanisterStatusResult,
managementCanister
} from 'azle/canisters/management';
export default Canister({
getCanisterStatus: update(
[CanisterStatusArgs],
CanisterStatusResult,
async (args) => {
return await ic.call(managementCanister.canister_status, {
args: [args]
});
}
)
});
create_canister
This section is a work in progress.
Examples:
import { Canister, ic, None, update } from 'azle/experimental';
import {
CreateCanisterResult,
managementCanister
} from 'azle/canisters/management';
export default Canister({
executeCreateCanister: update([], CreateCanisterResult, async () => {
return await ic.call(managementCanister.create_canister, {
args: [{ settings: None }],
cycles: 50_000_000_000_000n
});
})
});
delete_canister
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeDeleteCanister: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.delete_canister, {
args: [
{
canister_id: canisterId
}
]
});
return true;
})
});
deposit_cycles
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeDepositCycles: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.deposit_cycles, {
args: [
{
canister_id: canisterId
}
],
cycles: 10_000_000n
});
return true;
})
});
ecdsa_public_key
This section is a work in progress.
Examples:
import { blob, Canister, ic, None, Record, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
const PublicKey = Record({
publicKey: blob
});
export default Canister({
publicKey: update([], PublicKey, async () => {
const caller = ic.caller().toUint8Array();
const publicKeyResult = await ic.call(
managementCanister.ecdsa_public_key,
{
args: [
{
canister_id: None,
derivation_path: [caller],
key_id: {
curve: { secp256k1: null },
name: 'dfx_test_key'
}
}
]
}
);
return {
publicKey: publicKeyResult.public_key
};
})
});
http_request
This section is a work in progress.
Examples:
import {
Canister,
ic,
None,
Principal,
query,
Some,
update
} from 'azle/experimental';
import {
HttpResponse,
HttpTransformArgs,
managementCanister
} from 'azle/canisters/management';
export default Canister({
xkcd: update([], HttpResponse, async () => {
return await ic.call(managementCanister.http_request, {
args: [
{
url: `https://xkcd.com/642/info.0.json`,
max_response_bytes: Some(2_000n),
method: {
get: null
},
headers: [],
body: None,
transform: Some({
function: [ic.id(), 'xkcdTransform'] as [
Principal,
string
],
context: Uint8Array.from([])
})
}
],
cycles: 50_000_000n
});
}),
xkcdTransform: query([HttpTransformArgs], HttpResponse, (args) => {
return {
...args.response,
headers: []
};
})
});
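A transform function must be deterministic so that every replica produces a byte-identical response and consensus can be reached; headers such as `Date` differ between replicas, which is why `xkcdTransform` strips them. The core of that transform is a pure function and can be sketched in isolation (the `MockResponse` type below is a simplified stand-in for the management canister's `HttpResponse`):

```typescript
// Simplified stand-in for the management canister's HttpResponse type.
type MockResponse = {
    status: bigint;
    headers: [string, string][];
    body: Uint8Array;
};

// Strip headers so all replicas agree on a byte-identical response.
function stripHeaders(response: MockResponse): MockResponse {
    return { ...response, headers: [] };
}

const response: MockResponse = {
    status: 200n,
    headers: [['date', 'Mon, 01 Jan 2024 00:00:00 GMT']],
    body: Uint8Array.from([])
};
const transformed = stripHeaders(response); // status and body kept, headers: []
```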
install_code
This section is a work in progress.
Examples:
import { blob, bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeInstallCode: update(
[Principal, blob],
bool,
async (canisterId, wasmModule) => {
await ic.call(managementCanister.install_code, {
args: [
{
mode: {
install: null
},
canister_id: canisterId,
wasm_module: wasmModule,
arg: Uint8Array.from([])
}
],
cycles: 100_000_000_000n
});
return true;
}
)
});
provisional_create_canister_with_cycles
This section is a work in progress.
Examples:
import { Canister, ic, None, update } from 'azle/experimental';
import {
managementCanister,
ProvisionalCreateCanisterWithCyclesResult
} from 'azle/canisters/management';
export default Canister({
provisionalCreateCanisterWithCycles: update(
[],
ProvisionalCreateCanisterWithCyclesResult,
async () => {
return await ic.call(
managementCanister.provisional_create_canister_with_cycles,
{
args: [
{
amount: None,
settings: None
}
]
}
);
}
)
});
provisional_top_up_canister
This section is a work in progress.
Examples:
import { bool, Canister, ic, nat, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
provisionalTopUpCanister: update(
[Principal, nat],
bool,
async (canisterId, amount) => {
await ic.call(managementCanister.provisional_top_up_canister, {
args: [
{
canister_id: canisterId,
amount
}
]
});
return true;
}
)
});
raw_rand
This section is a work in progress.
Examples:
import { blob, Canister, ic, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
getRawRand: update([], blob, async () => {
return await ic.call(managementCanister.raw_rand);
})
});
sign_with_ecdsa
This section is a work in progress.
Examples:
import { blob, Canister, ic, Record, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
const Signature = Record({
signature: blob
});
export default Canister({
sign: update([blob], Signature, async (messageHash) => {
if (messageHash.length !== 32) {
ic.trap('messageHash must be 32 bytes');
}
const caller = ic.caller().toUint8Array();
const signatureResult = await ic.call(
managementCanister.sign_with_ecdsa,
{
args: [
{
message_hash: messageHash,
derivation_path: [caller],
key_id: {
curve: { secp256k1: null },
name: 'dfx_test_key'
}
}
],
cycles: 10_000_000_000n
}
);
return {
signature: signatureResult.signature
};
})
});
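Note that `sign_with_ecdsa` signs a pre-computed 32-byte hash, not the raw message, which is why the example traps on any other length. A SHA-256 digest is a common choice because it is always exactly 32 bytes; a minimal sketch using Node's built-in `crypto` module:

```typescript
import { createHash } from 'crypto';

// Produce a 32-byte message hash suitable for sign_with_ecdsa.
function sha256(message: string): Uint8Array {
    return Uint8Array.from(createHash('sha256').update(message).digest());
}

const messageHash = sha256('hello');
// messageHash.length === 32, so it passes the length check in sign()
```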
start_canister
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeStartCanister: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.start_canister, {
args: [
{
canister_id: canisterId
}
]
});
return true;
})
});
stop_canister
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeStopCanister: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.stop_canister, {
args: [
{
canister_id: canisterId
}
]
});
return true;
})
});
uninstall_code
This section is a work in progress.
Examples:
import { bool, Canister, ic, Principal, update } from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeUninstallCode: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.uninstall_code, {
args: [
{
canister_id: canisterId
}
]
});
return true;
})
});
update_settings
This section is a work in progress.
Examples:
import {
bool,
Canister,
ic,
None,
Principal,
Some,
update
} from 'azle/experimental';
import { managementCanister } from 'azle/canisters/management';
export default Canister({
executeUpdateSettings: update([Principal], bool, async (canisterId) => {
await ic.call(managementCanister.update_settings, {
args: [
{
canister_id: canisterId,
settings: {
controllers: None,
compute_allocation: Some(1n),
memory_allocation: Some(3_000_000n),
freezing_threshold: Some(2_000_000n)
}
}
]
});
return true;
})
});
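The settings fields use different units, per the IC interface specification: `compute_allocation` is a percentage (0-100), `memory_allocation` is in bytes, and `freezing_threshold` is in seconds. The magic numbers in the example above are arbitrary; a sketch of deriving a more deliberate value, such as a 30-day freezing threshold:

```typescript
// freezing_threshold is expressed in seconds (as a nat, i.e. a bigint).
const SECONDS_PER_DAY = 24n * 60n * 60n; // 86_400n
const freezingThreshold = 30n * SECONDS_PER_DAY; // 2_592_000n seconds = 30 days
```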
Plugins

Azle plugins allow developers to wrap Rust code in TypeScript/JavaScript APIs that can then be exposed to Azle canisters, giving the underlying Rust code a clean and simple developer experience.
Plugins are in a very early alpha state. You can create and use them now, but be aware that the API will be changing significantly in the near future.
You can use the following example plugins as you create your own plugins:
Local plugin
If you just want to create a plugin in the same repo as your project, see the plugins example.
npm plugin
If you want to create a plugin that can be published and/or used with npm, see the ic-sqlite-plugin example.
Stable Memory
stable structures
This section is a work in progress.
Examples:
- audio_recorder
- ethereum_json_rpc
- func_types
- http_counter
- inline_types
- persistent-storage
- pre_and_post_upgrade
- stable_structures
import {
bool,
Canister,
nat64,
nat8,
Opt,
query,
StableBTreeMap,
text,
Tuple,
update,
Vec
} from 'azle/experimental';
const Key = nat8;
type Key = typeof Key.tsType;
const Value = text;
type Value = typeof Value.tsType;
let map = StableBTreeMap<Key, Value>(0);
export default Canister({
containsKey: query([Key], bool, (key) => {
return map.containsKey(key);
}),
get: query([Key], Opt(Value), (key) => {
return map.get(key);
}),
insert: update([Key, Value], Opt(Value), (key, value) => {
return map.insert(key, value);
}),
isEmpty: query([], bool, () => {
return map.isEmpty();
}),
items: query([], Vec(Tuple(Key, Value)), () => {
return map.items();
}),
keys: query([], Vec(Key), () => {
return Uint8Array.from(map.keys());
}),
len: query([], nat64, () => {
return map.len();
}),
remove: update([Key], Opt(Value), (key) => {
return map.remove(key);
}),
values: query([], Vec(Value), () => {
return map.values();
})
});
Timers
clear timer
This section is a work in progress.
Examples:
import { Canister, ic, TimerId, update, Void } from 'azle/experimental';
export default Canister({
clearTimer: update([TimerId], Void, (timerId) => {
ic.clearTimer(timerId);
})
});
set timer
This section is a work in progress.
Examples:
import {
Canister,
Duration,
ic,
TimerId,
Tuple,
update
} from 'azle/experimental';
export default Canister({
setTimers: update([Duration], Tuple(TimerId, TimerId), (delay) => {
const functionTimerId = ic.setTimer(delay, callback);
const capturedValue = '🚩';
const closureTimerId = ic.setTimer(delay, () => {
console.log(`closure called and captured value ${capturedValue}`);
});
return [functionTimerId, closureTimerId];
})
});
function callback() {
console.log('callback called');
}
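A `Duration` is expressed in seconds as a `nat64`, so delays are passed as JavaScript bigints. A small sketch of computing a delay value (the constant names are illustrative):

```typescript
// Duration values for ic.setTimer are whole seconds as bigints.
const ONE_MINUTE = 60n;
const FIVE_MINUTES = 5n * ONE_MINUTE; // 300n seconds
```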
set timer interval
This section is a work in progress.
Examples:
import {
Canister,
Duration,
ic,
TimerId,
Tuple,
update
} from 'azle/experimental';
export default Canister({
setTimerIntervals: update(
[Duration],
Tuple(TimerId, TimerId),
(interval) => {
const functionTimerId = ic.setTimerInterval(interval, callback);
const capturedValue = '🚩';
const closureTimerId = ic.setTimerInterval(interval, () => {
console.log(
`closure called and captured value ${capturedValue}`
);
});
return [functionTimerId, closureTimerId];
}
)
});
function callback() {
console.log('callback called');
}