initial: sovereign Mukan Network fork

commit 6852832fe8 by Mukan Erkin TÖRÜK, 2026-05-11 03:18:28 +03:00
1709 changed files with 334985 additions and 0 deletions

116
.clang-format Normal file
@@ -0,0 +1,116 @@
---
Language: Proto
# BasedOnStyle: LLVM
AccessModifierOffset: -2
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: true
AlignConsecutiveDeclarations: true
AlignEscapedNewlines: Right
AlignOperands: true
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortBlocksOnASingleLine: true
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Empty
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterClass: false
  AfterControlStatement: false
  AfterEnum: false
  AfterFunction: false
  AfterNamespace: false
  AfterObjCDeclaration: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Attach
BreakBeforeInheritanceComma: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: true
ColumnLimit: 120
CommentPragmas: '^ IWYU pragma:'
CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
ForEachMacros:
  - foreach
  - Q_FOREACH
  - BOOST_FOREACH
IncludeBlocks: Preserve
IncludeCategories:
  - Regex: '^"(llvm|llvm-c|clang|clang-c)/'
    Priority: 2
  - Regex: '^(<|"(gtest|gmock|isl|json)/)'
    Priority: 3
  - Regex: '.*'
    Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
IndentPPDirectives: None
IndentWidth: 2
IndentWrappedFunctionNames: false
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: true
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBlockIndentWidth: 2
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 19
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
RawStringFormats:
  - Delimiters:
      - pb
    Language: TextProto
    BasedOnStyle: google
ReflowComments: true
SortIncludes: true
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: false
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
TabWidth: 8
UseTab: Never
...

2
.github/.codespellignore vendored Normal file
@@ -0,0 +1,2 @@
clientA
connectionA

42
.github/ISSUE_TEMPLATE/bug-report.md vendored Normal file
@@ -0,0 +1,42 @@
---
name: Bug Report
about: Create a report to help us squash bugs!
---
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
<!--
IMPORTANT: Prior to opening a bug report, check if it affects one of the core modules
and if it's eligible for a bug bounty on `SECURITY.md`. Bugs that are not submitted
through the appropriate channels won't receive any bounty.
-->
## Summary of Bug
<!-- Concisely describe the issue -->
## Expected Behaviour
<!-- What is the expected behaviour? -->
## Version
<!-- git commit hash or release version -->
## Steps to Reproduce
<!-- What commands in order should someone run to reproduce your problem? -->
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned
- [ ] Estimate provided

64
.github/ISSUE_TEMPLATE/epic-tracker.md vendored Normal file
@@ -0,0 +1,64 @@
---
name: Epic Tracker
about: Create an issue to track feature epic progress
---
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Requirements document
<!-- Link to requirements document -->
## IBC spec
<!-- Link to specification -->
## ADRs
<!-- Links to ADRs related to this epic -->
## Milestones
<!-- Links to alpha, beta, RC milestones -->
## Implementation issues
<!-- Links to specific issues, thematically/logically grouped -->
## QA scenarios
<!-- Lists of manual QA tests that need to be performed -->
## Automated e2e tests
<!-- List of automated e2e tests that need to be added to CI -->
## Pre-releases
<!-- Links to alpha, beta, RC tags/releases -->
## Checklist
<!-- Remove any items that are not applicable. -->
- [ ] Internal audit(s)
- [ ] External audit(s)
- [ ] Documentation
- [ ] Swagger
- [ ] Integration with relayers:
  - [ ] Hermes
  - [ ] Rly
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned

@@ -0,0 +1,40 @@
---
name: Feature Request
about: Create a proposal to request a feature
---
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Summary
<!-- Short, concise description of the proposed feature -->
## Problem Definition
<!-- Why do we need this feature?
What problems may be addressed by introducing this feature?
What benefits does ibc-go stand to gain by including this feature?
Are there any disadvantages of including this feature? -->
## Use cases
<!-- What use cases will be enabled by the introduction of this feature? -->
## Proposal
<!-- Detailed description of requirements of implementation -->
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned
- [ ] Estimate provided

@@ -0,0 +1,83 @@
---
name: Release tracker
about: Create an issue to track release progress
---
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Milestones
<!-- Links to alpha, beta, RC or final milestones -->
## IBC spec compatibility
<!-- Version of the IBC spec that this release is compatible with -->
## QA
### Backwards compatibility
<!-- List of tests that need to be performed with previous
versions of ibc-go to guarantee that no regression is introduced -->
- [ ] [Compatibility tests](https://github.com/cosmos/ibc-go/actions/workflows/e2e-compatibility.yaml) pass for the release branch.
- [ ] [Upgrade tests](https://github.com/cosmos/ibc-go/actions/workflows/e2e-upgrade.yaml) pass.
- [ ] Manual test with ledger signing.
### Other testing
## Migration
<!-- Link to migration document -->
## Checklist
<!-- Remove any items that are not applicable. -->
- [ ] Bump [go package version](https://github.com/cosmos/ibc-go/blob/main/go.mod#L3).
- [ ] Change all imports starting with `github.com/cosmos/ibc-go/v{x}` to `github.com/cosmos/ibc-go/v{x+1}`.
- [ ] Branch off main to create the release branch in the form `release/vx.y.z`.
- [ ] Add branch protection rules to the new release branch.
- [ ] Add backport task to [`mergify.yml`](https://github.com/cosmos/ibc-go/blob/main/.github/mergify.yml).
- [ ] Upgrade ibc-go version in [interchaintest](https://github.com/cosmos/interchaintest).
- [ ] Check Swagger is up-to-date.
## Post-release checklist
- [ ] Update [`CHANGELOG.md`](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md)
- [ ] Update the table of supported release lines (and End of Life dates) in [`RELEASES.md`](https://github.com/cosmos/ibc-go/blob/main/RELEASES.md):
  - Add the new release line.
  - Remove any release lines that might have become discontinued.
- [ ] Update [version matrix](https://github.com/cosmos/ibc-go/blob/main/RELEASES.md#version-matrix) in `RELEASES.md`:
  - Add the new release.
  - Remove any tags that might not be recommended anymore.
- [ ] Update the list of [supported release lines in README.md](https://github.com/cosmos/ibc-go#releases), if necessary.
- [ ] Update docs site:
  - [ ] Update permalinks with links of the released tag.
  - [ ] If the release is occurring on the main branch, on the latest version, run `npm run docusaurus docs:version vX.Y.Z` in the `docs/` directory (where `X.Y.Z` is the new version number).
  - [ ] If the release is occurring on an older release branch, make a PR to the main branch called `docs: new release vX.Y.Z` doing the following:
    - [ ] Update the content of the docs found in `docs/versioned_docs/version-vx.y.z` if needed (where `x.y.z` is the previous version number).
    - [ ] Update the version number of the older release branch in:
      - [ ] `docs/versions.json`.
      - [ ] Rename `docs/versioned_sidebars/version-vx.y.z-sidebars.json`.
      - [ ] Rename `docs/versioned_docs/version-vx.y.z`.
- [ ] Ensure annotations on tests are correct as per the [compatibility test tool](../../scripts/compatibility.md):
  - Add the new release.
  - Remove any tags that might not be recommended anymore.
- [ ] Update the manual [e2e `simd`](https://github.com/cosmos/ibc-go/blob/main/.github/workflows/e2e-manual-simd.yaml) test workflow:
  - Remove any tags that might not be recommended anymore.
- [ ] After changes to docs site are deployed, check [ibc.cosmos.network](https://ibc.cosmos.network) is updated.
- [ ] Open issue in [SDK tutorials repo](https://github.com/cosmos/sdk-tutorials) to update tutorials to the released version of ibc-go.
---
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned

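The import-path bump in the checklist above can be sketched as a small shell routine. This is a hypothetical sketch: the temp directory and the example version numbers (`v9` to `v10`) stand in for a real checkout, and in practice the resulting diff should be reviewed before committing.

```shell
set -eu

# Hypothetical sketch of the checklist's module-path bump. A temp directory
# with one Go file stands in for a real checkout; v9 -> v10 is illustrative.
OLD="github.com/cosmos/ibc-go/v9"
NEW="github.com/cosmos/ibc-go/v10"

repo="$(mktemp -d)"
printf 'package main\n\nimport _ "%s/modules/core"\n' "$OLD" > "$repo/main.go"

# List every file mentioning the old path, then rewrite it in place.
grep -rl "$OLD" "$repo" | while read -r f; do
  sed -i "s|$OLD|$NEW|g" "$f"
done

grep "$NEW" "$repo/main.go"
```

Using `|` as the sed delimiter avoids having to escape the slashes in the module path.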
41
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
@@ -0,0 +1,41 @@
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for creating a PR! ✰
v Before smashing the submit button please review the checkboxes.
v If a checkbox is n/a - please still include it, plus a little note why
v Also: make sure that you are familiar with the contribution guidelines (https://github.com/cosmos/ibc-go/blob/main/CONTRIBUTING.md)
v Failure to do this can result in your PR getting closed without further discussion (we receive a lot of PRs, and it takes a lot of time to respond to everyone who doesn't read this)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Description
<!-- Add a description of the changes that this PR introduces and the files that
are the most critical to review.
-->
closes: #XXXX
---
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
- [ ] Linked to GitHub issue with discussion and accepted design, OR link to spec that describes this work.
- [ ] Include changelog entry when appropriate (e.g. chores should be omitted from changelog).
- [ ] Wrote unit and integration [tests](https://github.com/cosmos/ibc-go/blob/main/testing/README.md#ibc-testing-package) if relevant.
- [ ] Updated documentation (`docs/`) if anything is changed.
- [ ] Added `godoc` [comments](https://blog.golang.org/godoc-documenting-go-code) if relevant.
- [ ] Self-reviewed `Files changed` in the GitHub PR explorer.
- [ ] Provide a [conventional commit message](https://github.com/cosmos/ibc-go/blob/main/docs/dev/pull-requests.md#commit-messages) to follow the repository standards.
<!-- Please refer to the [guidelines](https://github.com/cosmos/ibc-go/blob/main/docs/dev/pull-requests.md#commit-messages) for commit messages in ibc-go.
This repository uses [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/).
Example commit messages:
fix: skip emission of unpopulated memo field in ics20
deps: updating sdk to v0.46.4
chore: removed unused variables
e2e: adding e2e upgrade test for ibc-go/v6
docs: ics27 v6 documentation updates
feat: add semantic version utilities for e2e tests
feat(api)!: this is an api breaking feature
fix(statemachine)!: this is a statemachine breaking fix
-->

39
.github/dependabot.yml vendored Normal file
@@ -0,0 +1,39 @@
version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
  - package-ecosystem: gomod
    directory: "/"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
    labels:
      - dependencies
  - package-ecosystem: gomod
    directory: "/e2e"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
    labels:
      - dependencies
  - package-ecosystem: gomod
    directory: "/modules/light-clients/08-wasm"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
    labels:
      - dependencies
  - package-ecosystem: gomod
    directory: "/simapp"
    schedule:
      interval: daily
    open-pull-requests-limit: 10
    labels:
      - dependencies

84
.github/mergify.yml vendored Normal file
@@ -0,0 +1,84 @@
queue_rules:
  - name: default
    queue_conditions:
      - "#approved-reviews-by>=1"
      - base=main
      - label=automerge
    commit_message_template: |
      {{ title }} (#{{ number }})
      {{ body }}
    merge_conditions:
      - "#approved-reviews-by>=1"
      - base=main
      - label=automerge
    merge_method: squash
pull_request_rules:
  - name: backport patches to v0.2.x callbacks ibc-go v7.3.x branch
    conditions:
      - base=main
      - label=backport-callbacks-to-v0.2.x+ibc-go-v7.3.x
    actions:
      backport:
        branches:
          - callbacks/release/v0.2.x+ibc-go-v7.3.x
  - name: backport patches to v0.2.x callbacks ibc-go v8.0.x branch
    conditions:
      - base=main
      - label=backport-callbacks-to-v0.2.x+ibc-go-v8.0.x
    actions:
      backport:
        branches:
          - callbacks/release/v0.2.x+ibc-go-v8.0.x
  - name: backport patches to v0.4.x wasm ibc-go v7.4.x & wasmvm 1.5.x branch
    conditions:
      - base=main
      - label=backport-wasm-v0.4.x+ibc-go-v7.4.x-wasmvm-v1.5.x
    actions:
      backport:
        branches:
          - 08-wasm/release/v0.4.x+ibc-go-v7.4.x-wasmvm-v1.5.x
  - name: backport patches to v0.5.x wasm ibc-go v8.4.x & wasmvm 2.1.x branch
    conditions:
      - base=main
      - label=backport-wasm-v0.5.x+ibc-go-v8.4.x-wasmvm-v2.1.x
    actions:
      backport:
        branches:
          - 08-wasm/release/v0.5.x+ibc-go-v8.4.x-wasmvm-v2.1.x
  - name: backport patches to v7.10.x branch
    conditions:
      - base=main
      - label=backport-to-v7.10.x
    actions:
      backport:
        branches:
          - release/v7.10.x
  - name: backport patches to v8.7.x branch
    conditions:
      - base=main
      - label=backport-to-v8.7.x
    actions:
      backport:
        branches:
          - release/v8.7.x
  - name: backport patches to v10.2.x branch
    conditions:
      - base=main
      - label=backport-to-v10.2.x
    actions:
      backport:
        branches:
          - release/v10.2.x
  - name: backport patches to v10.3.x branch
    conditions:
      - base=main
      - label=backport-to-v10.3.x
    actions:
      backport:
        branches:
          - release/v10.3.x
  - name: automerge to main with label automerge and branch protection passing
    conditions: []
    actions:
      queue:

@@ -0,0 +1,42 @@
name: Build Simd Image
on:
  workflow_dispatch:
    inputs:
      tag:
        description: 'The tag of the image to build'
        required: true
        type: string
      ibc-go-version:
        description: 'The ibc-go version to be added as a label'
        required: true
        type: string
env:
  REGISTRY: ghcr.io
  ORG: cosmos
  IMAGE_NAME: ibc-go-simd
  GIT_TAG: "${{ inputs.tag }}"
jobs:
  build-image-at-tag:
    runs-on: depot-ubuntu-22.04-4
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v4
        with:
          ref: "${{ env.GIT_TAG }}"
          fetch-depth: 0
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        run: |
          # replace any character in the tag that is not alphanumeric or '.' with a -
          docker_tag="$(echo $GIT_TAG | sed 's/[^a-zA-Z0-9\.]/-/g')"
          docker build . -t "${REGISTRY}/${ORG}/${IMAGE_NAME}:${docker_tag}" --build-arg IBC_GO_VERSION=${{ inputs.ibc-go-version }}
          docker push "${REGISTRY}/${ORG}/${IMAGE_NAME}:${docker_tag}"

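In the build step above, the sed expression is the whole sanitizer: any character outside `[a-zA-Z0-9.]` becomes `-`, which covers `/` as well as `+`. A standalone sketch of that behavior:

```shell
set -eu

# Same sed expression as the workflow's "Build and push image" step: every
# character that is not alphanumeric or '.' is replaced with '-'.
sanitize_tag() {
  echo "$1" | sed 's/[^a-zA-Z0-9\.]/-/g'
}

sanitize_tag "release/v7.10.x"
sanitize_tag "08-wasm/release/v0.5.x+ibc-go-v8.4.x"
```

Both branch-style refs and `+`-joined release names come out as valid docker tag components.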
@@ -0,0 +1,108 @@
name: Build Wasm Simd Image
on:
  workflow_dispatch:
    inputs:
      tag:
        description: 'The tag of the image to build'
        required: true
        type: string
env:
  REGISTRY: ghcr.io
  ORG: cosmos
  IMAGE_NAME: ibc-go-wasm-simd
  GIT_TAG: "${{ inputs.tag }}"
jobs:
  build-image-at-tag:
    permissions:
      packages: write
      contents: read
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-24.04
            platform: linux/amd64
          - os: ubuntu-24.04-arm
            platform: linux/arm64
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
        with:
          ref: "${{ env.GIT_TAG }}"
          fetch-depth: 0
      # TODO: #7885 Get rid of this script; it is unnecessary and can probably be done in the Dockerfile or a bash script
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: make python-install-deps
      - name: Get arguments
        run: echo "LIBWASM_VERSION=$(scripts/get-libwasm-version.py --get-version)" >> $GITHUB_ENV
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: build
        uses: docker/build-push-action@v6
        with:
          platforms: ${{ matrix.platform }}
          file: modules/light-clients/08-wasm/Dockerfile
          build-args: LIBWASM_VERSION=${{ env.LIBWASM_VERSION }}
          outputs: type=image,"name=${{ env.REGISTRY }}/${{ env.ORG }}/${{ env.IMAGE_NAME }}",push-by-digest=true,name-canonical=true,push=true
      - name: Export digest
        run: |
          mkdir -p ${{ runner.temp }}/digests
          digest="${{ steps.build.outputs.digest }}"
          touch "${{ runner.temp }}/digests/${digest#sha256:}"
      - name: Upload digest
        uses: actions/upload-artifact@v4
        with:
          name: digests-${{ matrix.os }} # If we end up running more builds on the same OS, we need to differentiate more here
          path: ${{ runner.temp }}/digests/*
          if-no-files-found: error
          retention-days: 1
  merge:
    runs-on: depot-ubuntu-22.04-4
    permissions:
      packages: write
      contents: read
    needs:
      - build-image-at-tag
    steps:
      - name: Download digests
        uses: actions/download-artifact@v4
        with:
          path: ${{ runner.temp }}/digests
          pattern: digests-*
          merge-multiple: true
      - name: Get docker tag
        # replace any character in the tag that is not alphanumeric or '.' with a -.
        # this ensures the docker tag is valid.
        run: echo "DOCKER_TAG=$(echo $GIT_TAG | sed 's/[^a-zA-Z0-9\.]/-/g')" >> $GITHUB_ENV
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Create manifest list and push
        working-directory: ${{ runner.temp }}/digests
        run: |
          docker buildx imagetools create --tag ${{ env.REGISTRY }}/${{ env.ORG }}/${{ env.IMAGE_NAME }}:${{ env.DOCKER_TAG }} $(printf '${{ env.REGISTRY }}/${{ env.ORG }}/${{ env.IMAGE_NAME }}@sha256:%s ' *)

81
.github/workflows/codeql-analysis.yml vendored Normal file
@@ -0,0 +1,81 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  push:
    branches: [ main ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ main ]
  schedule:
    - cron: '37 21 * * 4'
jobs:
  analyze:
    name: Analyze
    runs-on: depot-ubuntu-22.04-4
    permissions:
      actions: read
      contents: read
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: [ 'go' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - uses: technote-space/get-diff-action@v6.1.2
        with:
          PATTERNS: |
            **/**.go
            go.mod
            go.sum
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main
          queries: crypto-com/cosmos-sdk-codeql@main,security-and-quality
        if: env.GIT_DIFF
      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v3
        if: env.GIT_DIFF
      # Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl
      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language
      #- run: |
      #    make bootstrap
      #    make release
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        if: env.GIT_DIFF

70
.github/workflows/docker.yml vendored Normal file
@@ -0,0 +1,70 @@
name: Docker Build & Push Simapp (main)
# Build & Push builds the simapp docker image on every push to main and
# pushes the image to https://ghcr.io/cosmos/ibc-go-simd
on:
  workflow_dispatch:
  push:
    branches:
      - main
      - release/v*
    paths:
      - '.github/workflows/docker.yml'
      - '**.go'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ibc-go-simd
jobs:
  docker-build:
    runs-on: depot-ubuntu-22.04-4
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/cosmos/${{ env.IMAGE_NAME }}
      - name: Compute release branch tag
        id: reltag
        if: startsWith(github.ref_name, 'release/v')
        shell: bash
        run: |
          RAW="${GITHUB_REF_NAME}"
          SANITIZED="${RAW//\//-}"
          echo "full_tag=${{ env.REGISTRY }}/cosmos/${{ env.IMAGE_NAME }}:branch-${SANITIZED}" >> "$GITHUB_OUTPUT"
      - name: Build Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          build-args: |
            IBC_GO_VERSION=${{ github.ref_name }}
      - name: Test simd is runnable
        run: |
          docker run --rm ${{ steps.meta.outputs.tags }}
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ${{ steps.meta.outputs.tags }}
            ${{ steps.reltag.outputs.full_tag }}
          build-args: |
            IBC_GO_VERSION=${{ github.ref_name }}

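The "Compute release branch tag" step above sanitizes with bash parameter expansion rather than sed: `${RAW//\//-}` replaces every `/` in `RAW` with `-`. This is a bash-only feature, which is why the step sets `shell: bash` explicitly. In isolation:

```shell
# bash-only: ${var//pattern/replacement} substitutes all occurrences of the
# pattern, here turning a branch name into a valid docker tag component.
RAW="release/v10.3.x"
SANITIZED="${RAW//\//-}"
echo "$SANITIZED"
```

A plain POSIX `sh` would reject this expansion, so the explicit `shell: bash` matters.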
36
.github/workflows/docs-check.yml vendored Normal file
@@ -0,0 +1,36 @@
name: Check docs build
on:
  merge_group:
  pull_request:
    branches:
      - main
    paths:
      - 'docs/**'
      - '.github/workflows/docs-check.yml'
jobs:
  build:
    runs-on: depot-ubuntu-22.04-4
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: docs/package-lock.json
      - name: Install dependencies
        run: cd docs && npm ci
      - name: Test build website
        run: cd docs && npm run build
  lint:
    runs-on: depot-ubuntu-22.04-4
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: DavidAnson/markdownlint-cli2-action@v20
        with:
          globs: ./docs/docs/**/*.md

34
.github/workflows/docs-deploy.yml vendored Normal file
@@ -0,0 +1,34 @@
# This deploy-docs workflow was created based on instructions from:
# https://docusaurus.io/docs/deployment
name: Deploy to GitHub Pages
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "docs/**"
      - ".github/workflows/docs-deploy.yml"
jobs:
  deploy:
    name: Deploy to GitHub Pages
    runs-on: depot-ubuntu-22.04-4
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: docs/package-lock.json
      - name: Build website
        run: make build-docs
      - name: Deploy 🚀
        uses: JamesIves/github-pages-deploy-action@v4.7.3
        with:
          branch: gh-pages
          folder: docs/build
          single-commit: true

@@ -0,0 +1,74 @@
on:
  workflow_call:
    inputs:
      test-file:
        description: 'The test file'
        required: true
        type: string
      release-version:
        description: 'The release tag, e.g. release-v7.3.0'
        required: true
        type: string
      chain:
        description: 'Should be one of chain-a, chain-b or all. Split up workflows into multiple (chain-a and chain-b) versions if the job limit is exceeded.'
        required: false
        type: string
        default: all
jobs:
  load-test-matrix:
    outputs:
      test-matrix: ${{ steps.set-test-matrix.outputs.test-matrix }}
    runs-on: depot-ubuntu-22.04-4
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - run: pip install -r requirements.txt
      - run: |
          # use jq -c to compact the full json contents into a single line. This is required when using the json body
          # to create the matrix in the following job.
          test_matrix="$(python scripts/generate-compatibility-json.py --file ${{ inputs.test-file }} --release-version ${{ inputs.release-version }} --chain ${{ inputs.chain }})"
          echo "test-matrix=$test_matrix" >> $GITHUB_OUTPUT
        id: set-test-matrix
  e2e:
    runs-on: depot-ubuntu-22.04-4
    needs: load-test-matrix
    # this job is skipped if the generated test-matrix is empty, i.e. if the file was not present.
    # this allows us to not have to handle special-case versions which may not have certain tests run against them.
    if: needs.load-test-matrix.outputs.test-matrix
    strategy:
      fail-fast: false
      matrix: ${{ fromJSON(needs.load-test-matrix.outputs.test-matrix) }}
    steps:
      - name: Checkout the ibc-go repo
        uses: actions/checkout@v4
        with:
          repository: cosmos/ibc-go
      - uses: actions/setup-go@v5
        with:
          go-version: '1.23'
          cache-dependency-path: 'e2e/go.sum'
      - name: Run e2e Test
        run: |
          cd e2e
          make e2e-test test=${{ matrix.test }}
        env:
          # each test has its own set of variables to specify which images are used.
          # Note: this is significant as the standard behaviour when running e2es on PRs
          # is that there is a set of env vars that are the same for each run, e.g. the same docker image is used
          # for every test. With compatibility tests, each test may be running different combinations of images.
          CHAIN_A_TAG: '${{ matrix.chain-a }}'
          CHAIN_B_TAG: '${{ matrix.chain-b }}'
          RELAYER_ID: '${{ matrix.relayer-type }}'
      - name: Upload Diagnostics
        uses: actions/upload-artifact@v4
        # we only want to upload logs on test failures.
        if: ${{ failure() }}
        continue-on-error: true
        with:
          name: '${{ matrix.entrypoint }}-${{ matrix.test }}'
          path: e2e/diagnostics
          retention-days: 5

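The `jq -c` comment in the matrix job above matters because `$GITHUB_OUTPUT` is a plain file of single-line `name=value` entries; a pretty-printed JSON matrix would be cut off at its first newline. A minimal sketch, with a temp file standing in for the real `$GITHUB_OUTPUT` and a made-up matrix entry:

```shell
set -eu

# $GITHUB_OUTPUT entries must each fit on one line, so the matrix JSON has to
# be compact (a single line, as produced by `jq -c`) before it is appended.
GITHUB_OUTPUT="$(mktemp)"
test_matrix='{"include":[{"test":"TestTransferTestSuite","chain-a":"main","chain-b":"release-v10.3.x"}]}'
echo "test-matrix=$test_matrix" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

The downstream job then reads this value back through `fromJSON(...)` to build its `strategy.matrix`.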
230
.github/workflows/e2e-compatibility.yaml vendored Normal file
@@ -0,0 +1,230 @@
# Runs compatibility tests for ibc-go.
# Can be triggered manually by setting values for release-branch and ibc-go-version.
# Runs on a weekly schedule with default values of 'main' for release-branch and 'latest' for ibc-go-version.
name: Compatibility E2E
on:
  schedule:
    # run at 20:00 on Saturday.
    - cron: '0 20 * * 6'
  workflow_dispatch:
    inputs:
      release-branch:
        description: 'Release branch to test'
        required: true
        type: choice
        options:
          - release/v7.10.x
          - release/v8.7.x
          - release/v10.3.x
          - main
      ibc-go-version:
        description: 'The version of ibc-go that is going to be released'
        required: true
        type: string
env:
  REGISTRY: ghcr.io
  ORG: cosmos
  IMAGE_NAME: ibc-go-simd
  RELEASE_BRANCH: ${{ inputs.release-branch || 'main' }}
  IBC_GO_VERSION: ${{ inputs.ibc-go-version || 'latest' }}
jobs:
  determine-image-tag:
    runs-on: depot-ubuntu-22.04-4
    outputs:
      release-version: ${{ steps.set-release-version.outputs.release-version }}
    steps:
      - run: |
          # sanitize the release branch name: Docker tags cannot contain "/"
          # characters, so we replace them with a "-".
          release_version="$(echo $RELEASE_BRANCH | sed 's/\//-/g')"
          echo "release-version=$release_version" >> $GITHUB_OUTPUT
        id: set-release-version
  # build-release-images builds all docker images that are relevant for the compatibility tests. If a single release
  # branch is specified, only that image will be built, e.g. release-v6.0.x.
  build-release-images:
    runs-on: depot-ubuntu-22.04-4
    permissions:
      packages: write
      contents: read
    strategy:
      matrix:
        release-branch:
          - release/v7.10.x
          - release/v8.7.x
          - release/v10.3.x
          - main
    steps:
      - uses: actions/checkout@v4
        if: env.RELEASE_BRANCH == matrix.release-branch
        with:
          ref: "${{ matrix.release-branch }}"
          fetch-depth: 0
      - name: Log in to the Container registry
        if: env.RELEASE_BRANCH == matrix.release-branch
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build image
        if: env.RELEASE_BRANCH == matrix.release-branch
        run: |
          docker_tag="$(echo ${{ matrix.release-branch }} | sed 's/[^a-zA-Z0-9\.]/-/g')"
          docker build . -t "${REGISTRY}/${ORG}/${IMAGE_NAME}:$docker_tag" --build-arg IBC_GO_VERSION=${{ env.IBC_GO_VERSION }}
          docker push "${REGISTRY}/${ORG}/${IMAGE_NAME}:$docker_tag"
      - name: Display image details
        if: env.RELEASE_BRANCH == matrix.release-branch
        run: |
          docker_tag="$(echo ${{ matrix.release-branch }} | sed 's/[^a-zA-Z0-9\.]/-/g')"
          docker inspect "${REGISTRY}/${ORG}/${IMAGE_NAME}:$docker_tag"
  client-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/core/02-client/client_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  connection-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/core/03-connection/connection_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  ica-base-test-a:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/base_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
      chain: "chain-a"
  ica-base-test-b:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/base_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
      chain: "chain-b"
  ica-gov-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/gov_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  ica-groups-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/groups_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  ica-localhost-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/localhost_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  ica-params-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/params_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  ica-query-test:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/interchain_accounts/query_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
  transfer-base-test-a:
    needs:
      - build-release-images
      - determine-image-tag
    uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
    with:
      test-file: "e2e/tests/transfer/base_test.go"
      release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
chain: "chain-a"
transfer-base-test-b:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/transfer/base_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
chain: "chain-b"
transfer-authz-test:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/transfer/authz_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
transfer-localhost-test:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/transfer/localhost_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
transfer-send-enabled-test:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/transfer/send_enabled_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
transfer-receive-test:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/transfer/send_receive_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
upgrade-genesis-test:
needs:
- build-release-images
- determine-image-tag
uses: ./.github/workflows/e2e-compatibility-workflow-call.yaml
with:
test-file: "e2e/tests/upgrades/genesis_test.go"
release-version: "${{ needs.determine-image-tag.outputs.release-version }}"
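The compatibility jobs above tag Docker images by sanitizing the release branch name (the `sed` call in the build and inspect steps of `build-release-images`). A minimal sketch of that transformation, replicating only the character class shown in the workflow:

```python
import re

def docker_tag(release_branch: str) -> str:
    # Mirror of the workflow's sed expression 's/[^a-zA-Z0-9\.]/-/g':
    # every character that is not alphanumeric or a dot becomes a hyphen.
    return re.sub(r"[^a-zA-Z0-9.]", "-", release_branch)

print(docker_tag("release/v7.10.x"))  # release-v7.10.x
print(docker_tag("main"))             # main
```

This is why the comment at the top of the job refers to images such as `release-v6.0.x`: the `/` in the branch name is folded into a hyphen while version dots are preserved.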


@ -0,0 +1,279 @@
on:
workflow_call:
inputs:
test-entry-point:
description: 'Test entry point'
required: false
type: string
default: '' # empty string means run all tests
temp-run-full-suite:
description: 'This flag exists to run a hard coded set of tests and will be phased out'
required: false
type: boolean
default: false
test:
description: 'test name to run as standalone'
required: false
type: string
default: ''
test-exclusions:
description: 'Comma separated list of tests to skip'
required: false
type: string
default: '' # empty string means don't skip any test.
chain-image:
description: 'The image to use for chains'
required: false
type: string
default: 'ghcr.io/cosmos/ibc-go-simd'
chain-a-tag:
description: 'The tag to use for chain A'
required: true
type: string
default: main
chain-b-tag:
default: main
description: 'The tag to use for chain B'
required: true
type: string
# upgrade-plan-name is only required during upgrade tests, and is otherwise ignored.
upgrade-plan-name:
default: ''
description: 'The upgrade plan name'
required: false
type: string
build-and-push-docker-image:
description: 'Flag to specify if the docker image should be built and pushed beforehand'
required: false
type: boolean
default: false
build-and-push-docker-image-wasm:
description: 'Flag to specify if the wasm docker image should be built and pushed beforehand'
required: false
type: boolean
default: false
upload-logs:
description: 'Specify flag to indicate that logs should be uploaded on failure'
required: false
type: boolean
default: false
e2e-config-path:
description: 'Specify relative or absolute path of config file for test'
required: false
type: string
default: 'ci-e2e-config.yaml'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ibc-go-simd
IMAGE_NAME_WASM: ibc-go-wasm-simd
jobs:
# test-details exists to provide an easy way to see the inputs for the e2e test.
test-details:
runs-on: depot-ubuntu-22.04-4
steps:
- name: Display Inputs
run: |
echo "Chain Image: ${{ inputs.chain-image }}"
echo "Chain A Tag: ${{ inputs.chain-a-tag }}"
echo "Chain B Tag: ${{ inputs.chain-b-tag }}"
echo "Upgrade Plan Name: ${{ inputs.upgrade-plan-name }}"
echo "Test Entry Point: ${{ inputs.test-entry-point }}"
echo "Test: ${{ inputs.test }}"
echo "Github Ref Name: ${{ github.ref_name }}"
# we skip individual steps rather than the whole job, because e2e-tests will not run if this job
# is skipped, but will run if every individual step is skipped. There is currently no way of
# conditionally needing a job.
docker-build:
runs-on: depot-ubuntu-22.04-4
steps:
- uses: actions/checkout@v4
if: ${{ inputs.build-and-push-docker-image }}
- name: Log in to the Container registry
if: ${{ inputs.build-and-push-docker-image }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
if: ${{ inputs.build-and-push-docker-image }}
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/cosmos/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
if: ${{ inputs.build-and-push-docker-image }}
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
build-args: |
IBC_GO_VERSION=${{ github.ref_name }}
docker-build-wasm:
runs-on: depot-ubuntu-22.04-4
steps:
- uses: actions/checkout@v4
if: ${{ inputs.build-and-push-docker-image-wasm }}
- uses: actions/setup-python@v5
if: ${{ inputs.build-and-push-docker-image-wasm }}
with:
python-version: '3.10'
- name: Install dependencies
if: ${{ inputs.build-and-push-docker-image-wasm }}
run: make python-install-deps
- name: Determine Build arguments
if: ${{ inputs.build-and-push-docker-image-wasm }}
id: build-args
run: |
echo "version=$(scripts/get-libwasm-version.py --get-version)" >> $GITHUB_OUTPUT
echo "checksum=$(scripts/get-libwasm-version.py --get-checksum)" >> $GITHUB_OUTPUT
- name: Log in to the Container registry
if: ${{ inputs.build-and-push-docker-image-wasm }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
if: ${{ inputs.build-and-push-docker-image-wasm }}
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/cosmos/${{ env.IMAGE_NAME_WASM }}
- name: Build and push Docker image
if: ${{ inputs.build-and-push-docker-image-wasm }}
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
file: modules/light-clients/08-wasm/Dockerfile
build-args: |
LIBWASM_VERSION=${{ steps.build-args.outputs.version }}
LIBWASM_CHECKSUM=${{ steps.build-args.outputs.checksum }}
# dynamically build a matrix of test/test suite pairs to run.
# this job runs a go tool located at cmd/build_test_matrix/main.go.
# it walks the e2e/test directory in order to locate all test suite / test name
# pairs. The output of this job can be fed in as input to a workflow matrix and
# will expand to jobs which will run all tests present.
build-test-matrix:
runs-on: depot-ubuntu-22.04-4
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
- id: set-matrix
run: |
output=$(go run cmd/build_test_matrix/main.go)
echo "matrix=$output" >> $GITHUB_OUTPUT
env:
TEST_ENTRYPOINT: '${{ inputs.test-entry-point }}'
TEST_EXCLUSIONS: '${{ inputs.test-exclusions }}'
TEST_NAME: '${{ inputs.test }}'
# e2e-tests runs the actual go test command to trigger the test.
# the tests themselves are configured via environment variables to specify
# things like chain and relayer images and tags.
e2e-tests:
runs-on: depot-ubuntu-22.04-4
needs:
- build-test-matrix
- docker-build
- docker-build-wasm
env:
CHAIN_IMAGE: '${{ inputs.chain-image }}'
CHAIN_UPGRADE_PLAN: '${{ inputs.upgrade-plan-name }}'
CHAIN_A_TAG: '${{ inputs.chain-a-tag }}'
CHAIN_B_TAG: '${{ inputs.chain-b-tag }}'
E2E_CONFIG_PATH: '${{ inputs.e2e-config-path }}'
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.build-test-matrix.outputs.matrix) }}
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: 'e2e/go.sum'
- name: Run e2e Test
id: e2e_test
run: |
cd e2e
make e2e-test test=${{ matrix.test }}
- name: Upload Diagnostics
uses: actions/upload-artifact@v4
if: ${{ failure() && inputs.upload-logs }}
continue-on-error: true
with:
name: '${{ matrix.entrypoint }}-${{ matrix.test }}'
path: e2e/diagnostics
retention-days: 5
e2e-test-suites:
# Temporary flag. Eventually this field will not exist and this will be the default.
if: ${{ inputs.temp-run-full-suite }}
runs-on: depot-ubuntu-22.04-4
needs:
- build-test-matrix
- docker-build
- docker-build-wasm
env:
CHAIN_IMAGE: '${{ inputs.chain-image }}'
CHAIN_A_TAG: '${{ inputs.chain-a-tag }}'
CHAIN_B_TAG: '${{ inputs.chain-b-tag }}'
E2E_CONFIG_PATH: '${{ inputs.e2e-config-path }}'
strategy:
fail-fast: false
matrix:
include:
# for now we explicitly specify this test suite.
- entrypoint: TestTransferTestSuite
- entrypoint: TestAuthzTransferTestSuite
- entrypoint: TestTransferTestSuiteSendReceive
- entrypoint: TestTransferTestSuiteSendEnabled
- entrypoint: TestTransferLocalhostTestSuite
- entrypoint: TestConnectionTestSuite
- entrypoint: TestInterchainAccountsGovTestSuite
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: 'e2e/go.sum'
- name: Run e2e Test
id: e2e_test
run: |
cd e2e
make e2e-suite entrypoint=${{ matrix.entrypoint }}
- name: Upload Diagnostics
uses: actions/upload-artifact@v4
if: ${{ failure() && inputs.upload-logs }}
continue-on-error: true
with:
name: '${{ matrix.entrypoint }}-${{ matrix.test }}'
path: e2e/diagnostics
retention-days: 5
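The `build-test-matrix` job above emits a single-line JSON document on `$GITHUB_OUTPUT`, which `fromJSON` then expands into one `e2e-tests` job per entry. The exact schema comes from `cmd/build_test_matrix/main.go`; the sketch below assumes only the two fields the workflow later reads (`matrix.entrypoint` and `matrix.test`), and the test names in it are hypothetical:

```python
import json

# Hypothetical matrix carrying the two fields the workflow consumes.
matrix = {
    "include": [
        {"entrypoint": "TestTransferTestSuite", "test": "TestMsgTransfer"},
        {"entrypoint": "TestConnectionTestSuite", "test": "TestHandshake"},
    ]
}

# GITHUB_OUTPUT values must fit on one line, so the JSON is emitted compactly.
line = json.dumps(matrix, separators=(",", ":"))
assert "\n" not in line

# fromJSON(...) on the workflow side round-trips this back into a structure,
# and the strategy expands into one job per "include" entry.
expanded = json.loads(line)["include"]
print(len(expanded))  # 2
```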

.github/workflows/e2e-upgrade.yaml vendored Normal file

@ -0,0 +1,82 @@
name: Tests / E2E Upgrade
on:
workflow_dispatch:
pull_request:
branches:
- main
paths:
# upgrade tests will run on any changes to the upgrade test files,
# and on changes to the workflow itself.
- 'e2e/tests/upgrades/*.go'
- '.github/workflows/e2e-upgrade.yaml'
schedule:
- cron: '0 0 * * *'
env:
DOCKER_IMAGE_NAME: ghcr.io/cosmos/ibc-go-simd
jobs:
e2e-upgrade-tests:
runs-on: depot-ubuntu-22.04-4
strategy:
fail-fast: false
matrix:
test-config: [
{
tag: v6.1.0,
upgrade-plan: v7,
test: TestV6ToV7ChainUpgrade
},
{
tag: v7.0.0,
upgrade-plan: v7.1,
test: TestV7ToV7_1ChainUpgrade
},
{
tag: v7.10.0,
upgrade-plan: v8,
test: TestV7ToV8ChainUpgrade
},
{
tag: v8.0.0,
upgrade-plan: v8.1,
test: TestV8ToV8_1ChainUpgrade
},
{
tag: v8.7.0,
upgrade-plan: v10,
test: TestV8ToV10ChainUpgrade
},
{
tag: v8.7.0,
upgrade-plan: v10,
test: TestV8ToV10ChainUpgrade_Localhost
},
]
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: 'e2e/go.sum'
- name: Run e2e Test
id: e2e_test
env:
CHAIN_IMAGE: '${{ env.DOCKER_IMAGE_NAME }}'
CHAIN_A_TAG: "${{ matrix['test-config'].tag }}"
CHAIN_B_TAG: "${{ matrix['test-config'].tag }}"
CHAIN_UPGRADE_PLAN: "${{ matrix['test-config']['upgrade-plan'] }}"
E2E_CONFIG_PATH: 'ci-e2e-config.yaml'
run: |
cd e2e
make e2e-test test=${{ matrix['test-config'].test }}
- name: Upload Diagnostics
uses: actions/upload-artifact@v4
if: ${{ failure() }}
continue-on-error: true
with:
name: "${{ matrix['test-config'].test }}"
path: e2e/diagnostics
retention-days: 5
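Because the matrix key `test-config` and its field `upgrade-plan` contain hyphens, the workflow above has to use bracket syntax (`matrix['test-config']['upgrade-plan']`) rather than dot access. The same lookup, sketched in Python with the first matrix entry:

```python
# One entry of the workflow's test-config matrix.
test_config = {
    "tag": "v6.1.0",
    "upgrade-plan": "v7",
    "test": "TestV6ToV7ChainUpgrade",
}

# The hyphenated keys map onto the env block of the "Run e2e Test" step;
# hyphens in keys rule out attribute-style access, in Python as in the
# workflow expression language.
env = {
    "CHAIN_A_TAG": test_config["tag"],
    "CHAIN_B_TAG": test_config["tag"],
    "CHAIN_UPGRADE_PLAN": test_config["upgrade-plan"],
}
print(env["CHAIN_UPGRADE_PLAN"])  # v7
```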

.github/workflows/e2e.yaml vendored Normal file

@ -0,0 +1,118 @@
name: Tests / E2E
on:
workflow_dispatch:
pull_request:
paths:
- '**/*.go'
- '.github/workflows/e2e.yaml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
env:
DOCKER_IMAGE_NAME: ghcr.io/cosmos/ibc-go-simd
jobs:
determine-image-tag:
runs-on: depot-ubuntu-22.04-4
outputs:
simd-tag: ${{ steps.get-tag.outputs.simd-tag }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.23'
- id: get-tag
run: |
if [ -z "${{ github.event.pull_request.number }}" ]
then
echo "simd-tag=main" >> $GITHUB_OUTPUT
else
tag="pr-${{ github.event.pull_request.number }}"
echo "Using tag $tag"
echo "simd-tag=$tag" >> $GITHUB_OUTPUT
fi
docker-build:
runs-on: depot-ubuntu-22.04-4
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.DOCKER_IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
build-args: |
IBC_GO_VERSION=${{ github.ref_name }}
build-test-matrix:
runs-on: depot-ubuntu-22.04-4
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: 'go.sum'
- id: set-matrix
run: |
output=$(go run cmd/build_test_matrix/main.go)
echo "matrix=$output" >> $GITHUB_OUTPUT
env:
TEST_EXCLUSIONS: 'TestUpgradeTestSuite'
e2e-tests:
runs-on: depot-ubuntu-22.04-4
needs:
- determine-image-tag
- build-test-matrix
- docker-build
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.build-test-matrix.outputs.matrix) }}
steps:
- uses: actions/checkout@v4
with:
repository: cosmos/ibc-go
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: 'e2e/go.sum'
- name: Run e2e Test
id: e2e_test
env:
CHAIN_IMAGE: '${{ env.DOCKER_IMAGE_NAME }}'
CHAIN_A_TAG: '${{ needs.determine-image-tag.outputs.simd-tag }}'
CHAIN_B_TAG: '${{ needs.determine-image-tag.outputs.simd-tag }}'
E2E_CONFIG_PATH: 'ci-e2e-config.yaml'
run: |
cd e2e
make e2e-test test=${{ matrix.test }}
- name: Upload Diagnostics
uses: actions/upload-artifact@v4
if: ${{ failure() }}
continue-on-error: true
with:
name: '${{ matrix.entrypoint }}-${{ matrix.test }}'
path: e2e/diagnostics
retention-days: 5
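The `determine-image-tag` job above picks `main` for non-PR runs and `pr-<number>` for pull requests. The branch logic of the `get-tag` step, restated as a small function:

```python
def simd_tag(pr_number: str) -> str:
    # An empty string mirrors the workflow's check: github.event.pull_request.number
    # is unset when the run was not triggered by a pull request.
    if not pr_number:
        return "main"
    return f"pr-{pr_number}"

print(simd_tag(""))      # main
print(simd_tag("4321"))  # pr-4321
```

Both chains in the subsequent `e2e-tests` job are then pinned to this single tag via `CHAIN_A_TAG` and `CHAIN_B_TAG`.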

.github/workflows/golangci.yml vendored Normal file

@ -0,0 +1,31 @@
name: golangci-lint
on:
push:
pull_request:
permissions:
contents: read
# Optional: allow read access to pull request. Use with `only-new-issues` option.
pull-requests: read
jobs:
golangci:
name: lint
runs-on: depot-ubuntu-22.04-4
strategy:
matrix:
working-directory: ['.', 'modules/light-clients/08-wasm', 'e2e']
steps:
- uses: actions/setup-go@v5
with:
go-version: '1.23'
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: golangci-lint
uses: golangci/golangci-lint-action@v8.0.0
with:
version: v2.1
only-new-issues: true
args: --timeout 10m
working-directory: ${{ matrix.working-directory }}


@ -0,0 +1,16 @@
name: proto breaking check
# The proto breaking check workflow checks whether Protobuf files contain breaking changes.
# This workflow runs on PRs that change Protobuf files.
on:
merge_group:
pull_request:
paths:
- "proto/**/*.proto"
jobs:
proto-breaking-check:
runs-on: depot-ubuntu-22.04-4
steps:
- uses: actions/checkout@v4
- name: Run proto-breaking check
run: make proto-check-breaking

.github/workflows/proto-registry.yml vendored Normal file

@ -0,0 +1,28 @@
name: Buf-Push
# Buf-Push runs buf (https://buf.build/) to push updated proto files to https://buf.build/cosmos/ibc
# This workflow is only run when a .proto file has been changed
on:
workflow_dispatch:
push:
branches:
- main
paths:
- "proto/**"
tags:
- 'v*.*.*'
jobs:
push:
runs-on: depot-ubuntu-22.04-4
steps:
- uses: actions/checkout@v4
- uses: bufbuild/buf-action@v1
with:
token: ${{ secrets.BUF_TOKEN }}
setup_only: false
github_token: ${{ secrets.GITHUB_TOKEN }}
input: "proto"
push: true
lint: false
format: false
breaking: false

.github/workflows/test.yml vendored Normal file

@ -0,0 +1,86 @@
name: Tests / Code Coverage
# Tests / Code Coverage workflow runs unit tests and uploads a code coverage report
# This workflow is run on pushes to main & every pull requests
on:
merge_group:
pull_request:
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: depot-ubuntu-22.04-4
strategy:
matrix:
go-arch: ['amd64', 'arm64']
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Install compiler for arm64.
if: matrix.go-arch == 'arm64'
run: |
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu g++-aarch64-linux-gnu build-essential
echo "CC=aarch64-linux-gnu-gcc" >> $GITHUB_ENV
- name: Build ibc-go
run: GOARCH=${{ matrix.go-arch }} LEDGER_ENABLED=false make build
- name: Build 08-wasm
run: |
cd modules/light-clients/08-wasm
GOARCH=${{ matrix.go-arch }} CGO_ENABLED=1 go build ./...
- name: Build e2e
run: |
cd e2e
find ./tests -type d | while IFS= read -r dir
do
if ls "${dir}"/*.go >/dev/null 2>&1; then
CGO_ENABLED=1 GOARCH=${{ matrix.go-arch }} go test -c "$dir"
fi
done
unit-tests:
runs-on: depot-ubuntu-22.04-4
strategy:
matrix:
module: [
{
name: ibc-go,
path: .
},
{
name: 08-wasm,
path: ./modules/light-clients/08-wasm
},
{
name: e2e,
path: ./e2e,
additional-args: '-tags="test_e2e"'
}
]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.23'
cache-dependency-path: '${{ matrix.module.path }}/go.sum'
- name: test & coverage report creation
run: |
cd ${{ matrix.module.path }} && go test -mod=readonly -coverprofile=profile.out -covermode=atomic ${{ matrix.module.additional-args }} ./...
- uses: codecov/codecov-action@v5
with:
fail_ci_if_error: true
files: ${{ matrix.module.path }}/profile.out
flags: ${{ matrix.module.name }}
token: ${{ secrets.CODECOV_TOKEN }}
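The `Build e2e` step above compiles a test binary for every directory under `./tests` that directly contains `.go` files. An equivalent of that `find`/`ls` loop, assuming the same "has `.go` files directly inside it" criterion:

```python
import os

def dirs_with_go_tests(root: str) -> list[str]:
    # Matches the shell loop: a directory qualifies only when .go files sit
    # directly inside it (the ls "${dir}"/*.go check), not merely somewhere
    # below it in the tree.
    result = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if any(name.endswith(".go") for name in filenames):
            result.append(dirpath)
    return sorted(result)
```

Each qualifying directory is then handed to `go test -c`, which builds the package's test binary without running it, so the job verifies that every e2e package compiles for both target architectures.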

.gitignore vendored Normal file

@ -0,0 +1,5 @@
build/
*.test
*.bench
.DS_Store
*.log

.golangci.yml Normal file

@ -0,0 +1,139 @@
version: "2"
run:
tests: true
linters:
default: none
enable:
- errcheck
- goconst
- gocritic
- gosec
- govet
- ineffassign
- misspell
- nakedret
- revive
- staticcheck
- thelper
- unconvert
- unparam
- unused
settings:
gocritic:
disabled-checks:
- appendAssign
gosec:
excludes:
- G101
- G107
- G115
- G404
confidence: medium
revive:
enable-all-rules: true
rules:
- name: redundant-import-alias
disabled: true
- name: use-any
disabled: true
- name: if-return
disabled: true
- name: max-public-structs
disabled: true
- name: cognitive-complexity
disabled: true
- name: argument-limit
disabled: true
- name: cyclomatic
disabled: true
- name: file-header
disabled: true
- name: function-length
disabled: true
- name: function-result-limit
disabled: true
- name: line-length-limit
disabled: true
- name: flag-parameter
disabled: true
- name: add-constant
disabled: true
- name: empty-lines
disabled: true
- name: banned-characters
disabled: true
- name: deep-exit
disabled: true
- name: confusing-results
disabled: true
- name: unused-parameter
disabled: true
- name: modifies-value-receiver
disabled: true
- name: early-return
disabled: true
- name: confusing-naming
disabled: true
- name: defer
disabled: true
- name: unhandled-error
arguments:
- fmt.Printf
- fmt.Print
- fmt.Println
- myFunction
disabled: false
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- revive
text: differs only by capitalization to method
- linters:
- gosec
text: Use of weak random number generator
- linters:
- gosec
text: 'G115: integer overflow conversion'
- linters:
- staticcheck
text: 'SA1019:'
paths:
- third_party$
- builtin$
- examples$
issues:
max-issues-per-linter: 10000
max-same-issues: 10000
formatters:
enable:
- gci
- gofumpt
settings:
gci:
sections:
- standard
- default
- blank
- dot
- prefix(cosmossdk.io)
- prefix(github.com/cosmos/cosmos-sdk)
- prefix(github.com/cometbft/cometbft)
- prefix(github.com/cosmos/ibc-go)
custom-order: true
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

CHANGELOG.md Normal file
File diff suppressed because it is too large

CODE_OF_CONDUCT.md Normal file

@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at <ibc@interchain.io>. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

CONTRIBUTING.md Normal file

@ -0,0 +1,78 @@
# Contributing to ibc-go
Thank you for considering making contributions to ibc-go! 🎉👍
## Code of conduct
This project and everyone participating in it is governed by ibc-go's [code of conduct](./CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
## How can I contribute?
Contributing to this repository can mean many things such as participating in discussions or proposing code changes. To ensure a smooth workflow for all contributors, the general procedure for contributing has been established:
### Reporting bugs
If you find that something is not working as expected, please open an issue using the [bug report template](https://github.com/cosmos/ibc-go/blob/main/.github/ISSUE_TEMPLATE/bug-report.md) and provide as much information as possible: how can the bug be reproduced? What's the expected behavior? What version is affected?
This is also true if you plan to fix the bug yourself and submit a PR. As a general rule, we want contributing pull requests to reference an existing issue. See [Submitting pull requests](#submitting-pull-requests).
### Proposing improvements or new features
New features or improvements should be written in an issue using the [new feature template](https://github.com/cosmos/ibc-go/blob/main/.github/ISSUE_TEMPLATE/feature-request.md). Please include in the issue as many details as possible: What use case(s) would this new feature or improvement enable? Why are those use cases important or helpful? What user group would benefit? The team will evaluate and engage with you in a discussion of the proposal, which could have different outcomes:
- the core ibc-go team deciding to implement this feature and adding it to their planning,
- agreeing to support external contributors to implement it with the goal of merging it eventually in ibc-go,
- discarding the suggestion if deemed not aligned with the objectives of ibc-go;
- or proposing (in the case of applications or light clients) to be developed and maintained in a separate repository.
Unless the change is a minor bug fix with minor code changes, and you want to submit a pull request, please make sure to write a GitHub issue for it before opening the pull request.
### Architecture Decision Records (ADR)
When proposing an architecture decision for ibc-go, please create an [ADR](./docs/architecture/README.md) so further discussions can be made. We are following this process so all involved parties are in agreement before any party begins coding the proposed implementation. Please use the [ADR template](./docs/architecture/adr.template.md) to scaffold any new ADR. If you would like to see some examples of how these are written, refer to ibc-go's [ADRs](./docs/architecture/). ADRs are solidified designs that will be implemented in ibc-go (and do not have a spec). They should document the architecture that will be built. Most design feedback should be gathered before the initial draft of the ADR. ADRs can/should be written for any design decisions we make which may be changed at some point in the future.
### Participating in discussions
New features or improvements are sometimes also debated in [discussions](https://github.com/cosmos/ibc-go/discussions). Sharing feedback or ideas there is very helpful for us. These are typically high-level discussions that may get a lot of comments on a variety of different aspects, with design decisions still being considered.
### Submitting pull requests
Before opening a pull request, make sure there is an accompanying issue that has been assigned to you.
In the case of smaller changes, opening a pull request without being assigned to the issue **can** be accepted, but to avoid having to redesign or discard your work due to the change no longer being needed, asking to be assigned to the issue is the safest course of action. We welcome contributors, but we have put in place these guidelines to safeguard the time of both external and core contributors.
Unless you feel confident your change will be accepted (see [Unwanted pull requests](#unwanted-pull-requests)) you should first create an issue to discuss your change with us. This lets us all discuss the design and proposed implementation of your change, which helps ensure your time is well spent and that your contribution will be accepted.
Looking for a good place to start contributing? The issue tracker is always the first place to go. Issues are triaged to categorize them:
- Check out some [`good first issue`s](https://github.com/cosmos/ibc-go/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). These are issues whose scope of work should be pretty clearly specified and they are best suited for developers new to ibc-go (i.e. no deep knowledge of Cosmos SDK or ibc-go is required). For example, some of these issues may involve improving the logging, emitting new events or removing unused code.
- Or pick up a [`help wanted`](https://github.com/cosmos/ibc-go/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) issue. These issues should be a bit more involved than the good first issues and the developer working on them would benefit from some familiarity already with the codebase. These types of issues may involve adding new (or extending the functionality of existing) gRPC endpoints, bumping the version of Cosmos SDK or Tendermint or fixing bugs.
If you would like to contribute, follow this process:
1. If the issue is a proposal, ensure that the proposal has been accepted.
2. Ensure that nobody else has already begun working on this issue. If they have, make sure to contact them to collaborate.
3. If nobody has been assigned for the issue and you would like to work on it, comment on the issue to inform the community of your intentions to begin work. Then we will be able to assign the issue to you, making it visible for others that this issue is being tackled. If you end up not creating a pull request for this issue, please comment on the issue as well, so that it can be assigned to somebody else.
4. Follow standard GitHub best practices: fork the repo, branch from the HEAD of `main`, make some commits, and submit a PR to `main`. For core developers working within the ibc-go repo, branches must be named with the convention `{moniker}/{issue#}-branch-name` to ensure a clear ownership of branches.
5. Feel free to submit the pull request in `Draft` mode, even if the work is not complete, as this indicates to the community you are working on something and allows them to provide comments early in the development process.
6. When the code is complete it can be marked `Ready for Review`.
7. Be sure to include a relevant changelog entry in the [Commit Message / Changelog Entry section of pull request description](https://github.com/cosmos/ibc-go/blob/main/.github/PULL_REQUEST_TEMPLATE.md#commit-message--changelog-entry) so that we can add changelog entry when merging the pull request. Please follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and use one of the commit types mentioned in the [Commit messages section of the pull request guidelines](./docs/dev/pull-requests.md#commit-messages).
Please make sure to check out our [Pull request guidelines](./docs/dev/pull-requests.md) for more information.
#### Unwanted pull requests
To ensure that the core maintainers' time is spent well, there are certain pull requests we want to avoid:
- Any non-minor pull requests without an **assigned** issue
- Any non-minor bug fixes without an issue (ideally, also assigned, but we are less strict on this)
- Spelling mistakes/changes (instead, consider improving our CI so that it catches them automatically; that would be more useful)
## Relevant development docs
- [Project structure](./docs/dev/project-structure.md)
- [Development setup](./docs/dev/development-setup.md)
- [Go style guide](./docs/dev/go-style-guide.md)
- [Documentation guide](./docs/README.md)
- [Writing tests](./testing/README.md)
- [Pull request guidelines](./docs/dev/pull-requests.md)
- [Release process](./docs/dev/release-management.md)

## Dockerfile
FROM golang:1.23.8-alpine AS builder
ARG IBC_GO_VERSION
RUN set -eux; apk add --no-cache gcc git libusb-dev linux-headers make musl-dev;
ENV GOPATH=""
# Ensure the ibc-go version is specified for this image.
RUN test -n "${IBC_GO_VERSION}"
# Copy relevant files before go mod download. Replace directives pointing at local
# paths break if the referenced files are not copied before go mod download.
ADD internal internal
ADD simapp simapp
ADD testing testing
ADD modules modules
ADD LICENSE LICENSE
COPY contrib/devtools/Makefile contrib/devtools/Makefile
COPY Makefile .
COPY go.mod .
COPY go.sum .
RUN go mod download
RUN make build
FROM alpine:3.21
ARG IBC_GO_VERSION
LABEL "org.cosmos.ibc-go"="${IBC_GO_VERSION}"
COPY --from=builder /go/build/simd /bin/simd
ENTRYPOINT ["simd"]
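Assuming the build context is the repository root, the image above could be built and run roughly as follows. The version string and image tag here are illustrative, not the values used by this project's CI:

```shell
# Build the image, passing the required IBC_GO_VERSION build argument.
docker build --build-arg IBC_GO_VERSION=v1.0.0 -t ibc-go-simd:v1.0.0 .

# The entrypoint is simd, so subcommands can be passed directly to docker run.
docker run --rm ibc-go-simd:v1.0.0 version
```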

## LICENSE
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
Makefile Normal file
@ -0,0 +1,407 @@
#!/usr/bin/make -f
PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')
PACKAGES_SIMTEST=$(shell go list ./... | grep '/simulation')
CHANGED_GO_FILES := $(shell git diff --name-only | grep .go$$ | grep -v pb.go)
ALL_GO_FILES := $(shell find . -regex ".*\.go$$" | grep -v pb.go)
VERSION := $(shell echo $(shell git describe --always) | sed 's/^v//')
COMMIT := $(shell git log -1 --format='%H')
LEDGER_ENABLED ?= true
BINDIR ?= $(GOPATH)/bin
BUILDDIR ?= $(CURDIR)/build
SIMAPP = ./simapp
MOCKS_DIR = $(CURDIR)/tests/mocks
HTTPS_GIT := https://github.com/cosmos/ibc-go.git
DOCKER := $(shell which docker)
PROJECT_NAME = $(shell git remote get-url origin | xargs basename -s .git)
export GO111MODULE = on
# process build tags
build_tags = netgo
ifeq ($(LEDGER_ENABLED),true)
ifeq ($(OS),Windows_NT)
GCCEXE = $(shell where gcc.exe 2> NUL)
ifeq ($(GCCEXE),)
$(error gcc.exe not installed for ledger support, please install or set LEDGER_ENABLED=false)
else
build_tags += ledger
endif
else
UNAME_S = $(shell uname -s)
ifeq ($(UNAME_S),OpenBSD)
$(warning OpenBSD detected, disabling ledger support (https://github.com/cosmos/cosmos-sdk/issues/1988))
else
GCC = $(shell command -v gcc 2> /dev/null)
ifeq ($(GCC),)
$(error gcc not installed for ledger support, please install or set LEDGER_ENABLED=false)
else
build_tags += ledger
endif
endif
endif
endif
ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS)))
build_tags += gcc
endif
build_tags += $(BUILD_TAGS)
build_tags := $(strip $(build_tags))
whitespace :=
whitespace += $(whitespace)
comma := ,
build_tags_comma_sep := $(subst $(whitespace),$(comma),$(build_tags))
# process linker flags
ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \
-X github.com/cosmos/cosmos-sdk/version.AppName=simd \
-X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \
-X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \
-X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)"
# DB backend selection
ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS)))
ldflags += -X github.com/cosmos/cosmos-sdk/types.DBBackend=cleveldb
endif
ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS)))
ldflags += -X github.com/cosmos/cosmos-sdk/types.DBBackend=badgerdb
endif
# handle rocksdb
ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS)))
CGO_ENABLED=1
BUILD_TAGS += rocksdb
ldflags += -X github.com/cosmos/cosmos-sdk/types.DBBackend=rocksdb
endif
# handle boltdb
ifeq (boltdb,$(findstring boltdb,$(COSMOS_BUILD_OPTIONS)))
BUILD_TAGS += boltdb
ldflags += -X github.com/cosmos/cosmos-sdk/types.DBBackend=boltdb
endif
ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS)))
ldflags += -w -s
endif
ldflags += $(LDFLAGS)
ldflags := $(strip $(ldflags))
BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)'
# check for nostrip option
ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS)))
BUILD_FLAGS += -trimpath
endif
#? all: Run tools build lint test
all: build lint test
# The below include contains the tools and runsim targets.
include contrib/devtools/Makefile
###############################################################################
### Build ###
###############################################################################
BUILD_TARGETS := build install
#? tidy-all: Run go mod tidy for all modules
tidy-all:
./scripts/go-mod-tidy-all.sh
#? build: Build simapp and build_test_matrix
build: BUILD_ARGS=-o $(BUILDDIR)/
#? build-linux: Build simapp and build_test_matrix for GOOS=linux GOARCH=amd64
build-linux:
GOOS=linux GOARCH=amd64 LEDGER_ENABLED=false $(MAKE) build
$(BUILD_TARGETS): go.sum $(BUILDDIR)/
cd simapp && go $@ -mod=readonly $(BUILD_FLAGS) $(BUILD_ARGS) ./...
$(BUILDDIR)/:
mkdir -p $(BUILDDIR)/
.PHONY: build build-linux
#? distclean: Run `make clean`
distclean: clean
#? clean: Clean some auto generated directories
clean:
rm -rf \
$(BUILDDIR)/ \
artifacts/ \
tmp-swagger-gen/
.PHONY: distclean clean
#? build-docker-wasm: Build wasm simapp with specified tag.
build-docker-wasm:
./scripts/build-wasm-simapp-docker.sh $(tag)
build-docker-local:
docker build -t ghcr.io/cosmos/ibc-go-simd:local --build-arg IBC_GO_VERSION=local .
.PHONY: build-docker-wasm
###############################################################################
### Tools & Dependencies ###
###############################################################################
go.sum: go.mod
echo "Ensure dependencies have not been modified ..." >&2
go mod verify
go mod tidy
#? python-install-deps: Install python dependencies
python-install-deps:
@echo "Installing python dependencies..."
@pip3 install --upgrade pip
@pip3 install -r requirements.txt
###############################################################################
### Documentation ###
###############################################################################
#? godocs: Generate go documentation
godocs:
@echo "--> Wait a few seconds and visit http://localhost:6060/pkg/github.com/cosmos/cosmos-sdk/types"
godoc -http=:6060
#? build-docs: Build documentation
build-docs:
@cd docs && npm ci && npm run build
#? serve-docs: Run docs server
serve-docs:
@cd docs && npm run serve
# If the DOCS_VERSION variable is not set, display an error message and exit
ifndef DOCS_VERSION
#? tag-docs-version: Tag the docs version
tag-docs-version:
@echo "Error: DOCS_VERSION is not set. Use 'make tag-docs-version DOCS_VERSION=<version>' to set it. For example: 'make tag-docs-version DOCS_VERSION=v8.0.x'"
@exit 1
else
tag-docs-version:
@cd docs && npm run docusaurus docs:version $(DOCS_VERSION)
endif
check-docs-links:
@command -v lychee >/dev/null 2>&1 || { echo "ERROR: lychee is not installed (https://lychee.cli.rs/installation/)" >&2; exit 1; }
@echo "Checking links in documentation..."
@lychee --root-dir $(CURDIR)/docs/docs \
--cache \
--cache-exclude-status 429 \
--max-cache-age 1w \
--retry-wait-time 30 \
--max-retries 25 \
--max-concurrency 25 \
--remap '($(CURDIR)/docs)(/docs/)(architecture/|events/)([^#]+?)(#[^#]+)?$$ $$1/$$3/$$4.md' \
'./docs/docs'
lint-docs:
@command -v markdownlint-cli2 >/dev/null 2>&1 || { echo "ERROR: markdownlint-cli2 is not installed (https://github.com/DavidAnson/markdownlint-cli2#install)" >&2; exit 1; }
@echo "Linting documentation..."
@markdownlint-cli2 ./docs/docs/**/*.md
.PHONY: build-docs serve-docs tag-docs-version
###############################################################################
### Tests & Simulation ###
###############################################################################
# make init-simapp initializes a single local node network
# it is useful for testing and development
# Usage: make install && make init-simapp && simd start
# Warning: make init-simapp will remove all data in simapp home directory
#? init-simapp: Run scripts/init-simapp.sh
init-simapp:
./scripts/init-simapp.sh
#? test: Run make test-unit
test: test-unit
#? test-all: Run all test
test-all: test-unit test-ledger-mock test-race test-cover
TEST_PACKAGES=./...
TEST_TARGETS := test-unit test-unit-amino test-unit-proto test-ledger-mock test-race test-ledger
# Test runs-specific rules. To add a new test target, just add
# a new rule, customise ARGS or TEST_PACKAGES ad libitum, and
# append the new rule to the TEST_TARGETS list.
test-unit: ARGS=-tags='cgo ledger test_ledger_mock test_e2e'
test-unit-amino: ARGS=-tags='ledger test_ledger_mock test_amino'
test-ledger: ARGS=-tags='cgo ledger'
test-ledger-mock: ARGS=-tags='ledger test_ledger_mock'
test-race: ARGS=-race -tags='cgo ledger test_ledger_mock'
test-race: TEST_PACKAGES=$(PACKAGES_NOSIMULATION)
$(TEST_TARGETS): run-tests
# check-* compiles and collects tests without running them
# note: go test -c doesn't support multiple packages yet (https://github.com/golang/go/issues/15513)
CHECK_TEST_TARGETS := check-test-unit check-test-unit-amino
check-test-unit: ARGS=-tags='cgo ledger test_ledger_mock'
check-test-unit-amino: ARGS=-tags='ledger test_ledger_mock test_amino'
$(CHECK_TEST_TARGETS): EXTRA_ARGS=-run=none
$(CHECK_TEST_TARGETS): run-tests
ARGS += -tags "$(test_tags)"
#? run-tests: Runs the go test command for all modules
run-tests:
@ARGS="$(ARGS)" TEST_PACKAGES=$(TEST_PACKAGES) EXTRA_ARGS="$(EXTRA_ARGS)" python3 ./scripts/go-test-all.py
.PHONY: run-tests test test-all $(TEST_TARGETS)
#? test-sim-nondeterminism: Run non-determinism test for simapp
test-sim-nondeterminism:
@echo "Running non-determinism test..."
@go test -mod=readonly $(SIMAPP) -run TestAppStateDeterminism -Enabled=true \
-NumBlocks=100 -BlockSize=200 -Commit=true -Period=0 -v -timeout 24h
test-sim-custom-genesis-fast:
@echo "Running custom genesis simulation..."
@echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
@go test -mod=readonly $(SIMAPP) -run TestFullAppSimulation -Genesis=${HOME}/.gaiad/config/genesis.json \
-Enabled=true -NumBlocks=100 -BlockSize=200 -Commit=true -Seed=99 -Period=5 -v -timeout 24h
test-sim-import-export: runsim
@echo "Running application import/export simulation. This may take several minutes..."
@$(BINDIR)/runsim -Jobs=4 -SimAppPkg=$(SIMAPP) -ExitOnFail 50 5 TestAppImportExport
test-sim-after-import: runsim
@echo "Running application simulation-after-import. This may take several minutes..."
@$(BINDIR)/runsim -Jobs=4 -SimAppPkg=$(SIMAPP) -ExitOnFail 50 5 TestAppSimulationAfterImport
test-sim-custom-genesis-multi-seed: runsim
@echo "Running multi-seed custom genesis simulation..."
@echo "By default, ${HOME}/.gaiad/config/genesis.json will be used."
@$(BINDIR)/runsim -Genesis=${HOME}/.gaiad/config/genesis.json -SimAppPkg=$(SIMAPP) -ExitOnFail 400 5 TestFullAppSimulation
test-sim-multi-seed-long: runsim
@echo "Running long multi-seed application simulation. This may take awhile!"
@$(BINDIR)/runsim -Jobs=4 -SimAppPkg=$(SIMAPP) -ExitOnFail 500 50 TestFullAppSimulation
test-sim-multi-seed-short: runsim
@echo "Running short multi-seed application simulation. This may take awhile!"
@$(BINDIR)/runsim -Jobs=4 -SimAppPkg=$(SIMAPP) -ExitOnFail 50 10 TestFullAppSimulation
test-sim-benchmark-invariants:
@echo "Running simulation invariant benchmarks..."
@go test -mod=readonly $(SIMAPP) -benchmem -bench=BenchmarkInvariants -run=^$$ \
-Enabled=true -NumBlocks=1000 -BlockSize=200 \
-Period=1 -Commit=true -Seed=57 -v -timeout 24h
.PHONY: \
test-sim-nondeterminism \
test-sim-custom-genesis-fast \
test-sim-import-export \
test-sim-after-import \
test-sim-custom-genesis-multi-seed \
test-sim-multi-seed-short \
test-sim-multi-seed-long \
test-sim-benchmark-invariants
SIM_NUM_BLOCKS ?= 500
SIM_BLOCK_SIZE ?= 200
SIM_COMMIT ?= true
#? test-sim-benchmark: Run application benchmark
test-sim-benchmark:
@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
@go test -mod=readonly -benchmem -run=^$$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$$ \
-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h
#? test-sim-profile: Run application benchmark and output cpuprofile, memprofile
test-sim-profile:
@echo "Running application benchmark for numBlocks=$(SIM_NUM_BLOCKS), blockSize=$(SIM_BLOCK_SIZE). This may take awhile!"
@go test -mod=readonly -benchmem -run=^$$ $(SIMAPP) -bench ^BenchmarkFullAppSimulation$$ \
-Enabled=true -NumBlocks=$(SIM_NUM_BLOCKS) -BlockSize=$(SIM_BLOCK_SIZE) -Commit=$(SIM_COMMIT) -timeout 24h -cpuprofile cpu.out -memprofile mem.out
.PHONY: test-sim-profile test-sim-benchmark
#? test-cover: Run contrib/test_cover.sh
test-cover:
@export VERSION=$(VERSION); bash -x contrib/test_cover.sh
.PHONY: test-cover
#? benchmark: Run benchmark tests
benchmark:
@go test -mod=readonly -bench=. $(PACKAGES_NOSIMULATION)
.PHONY: benchmark
###############################################################################
### Linting ###
###############################################################################
#? setup-pre-commit: Set pre commit git hook
setup-pre-commit:
@cp .git/hooks/pre-commit .git/hooks/pre-commit.bak 2>/dev/null || true
@echo "Installing pre-commit hook..."
@ln -sf ../../scripts/hooks/pre-commit.sh .git/hooks/pre-commit
@echo "Pre-commit hook was installed at .git/hooks/pre-commit"
#? lint: Run golangci-lint on all modules
lint:
@echo "--> Running linter"
@./scripts/go-lint-all.sh --timeout=15m
#? lint-fix: Run golangci-lint and fix issues on all modules
lint-fix:
@echo "--> Running linter"
@./scripts/go-lint-all.sh --fix
#? format: Run gofumpt and misspell
format:
find . -name '*.go' -type f -not -path "./vendor*" -not -path "*.git*" -not -path "./docs/client/statik/statik.go" -not -path "./tests/mocks/*" -not -name '*.pb.go' -not -name '*.pb.gw.go' | xargs gofumpt -w
find . -name '*.go' -type f -not -path "./vendor*" -not -path "*.git*" -not -path "./docs/client/statik/statik.go" -not -path "./tests/mocks/*" -not -name '*.pb.go' -not -name '*.pb.gw.go' | xargs misspell -w
.PHONY: format
.PHONY: lint lint-fix format
###############################################################################
### Protobuf ###
###############################################################################
protoVer=0.14.0
protoImageName=ghcr.io/cosmos/proto-builder:$(protoVer)
protoImage=$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace $(protoImageName)
#? proto-all: Format, lint and generate Protobuf files
proto-all: proto-format proto-lint proto-gen
#? proto-gen: Generate Protobuf files
proto-gen:
@echo "Generating Protobuf files"
@$(protoImage) sh ./scripts/protocgen.sh
#? proto-swagger-gen: Generate Protobuf Swagger
proto-swagger-gen:
@echo "Generating Protobuf Swagger"
@$(protoImage) sh ./scripts/protoc-swagger-gen.sh
#? proto-format: Format Protobuf files
proto-format:
@$(protoImage) find ./ -name "*.proto" -exec clang-format -i {} \;
#? proto-lint: Lint Protobuf files
proto-lint:
@$(protoImage) buf lint --error-format=json
#? proto-check-breaking: Check if Protobuf file contains breaking changes
proto-check-breaking:
@$(protoImage) buf breaking --against $(HTTPS_GIT)#branch=main
#? proto-update-deps: Update Protobuf dependencies
proto-update-deps:
@echo "Updating Protobuf dependencies"
$(DOCKER) run --rm -v $(CURDIR)/proto:/workspace --workdir /workspace $(protoImageName) buf mod update
.PHONY: proto-all proto-gen proto-gen-any proto-swagger-gen proto-format proto-lint proto-check-breaking proto-update-deps
#? help: Get more info on make commands
help: Makefile
@echo " Choose a command run in "$(PROJECT_NAME)":"
@sed -n 's/^#?//p' $< | column -t -s ':' | sort | sed -e 's/^/ /'
.PHONY: help
README.md Normal file
@ -0,0 +1,30 @@
<h1 align="center">
<b>Mukan IBC</b>
</h1>
<p align="center">
The sovereign inter-blockchain communication layer of the <b>Mukan Network</b>, forked from IBC-go.
</p>
## Overview
**Mukan IBC** is the Inter-Blockchain Communication (IBC) protocol implementation for the Mukan Network. It is a permanent hard-fork of [IBC-go v10.4.0](https://github.com/cosmos/ibc-go), updated to reference the sovereign Mukan Network stack.
### Key Differences from IBC-go
- All upstream dependencies updated to reference `git.cw.tr/mukan-network/...` instead of original Cosmos GitHub paths.
- Future: Cross-chain UMC transfers and PoJ-verified IBC channel authentication.
## Integration
Mukan IBC is used by [Mukan Core](https://git.cw.tr/mukan-network/mukan-core) to enable cross-chain communication.
```go
require git.cw.tr/mukan-network/mukan-ibc/v10 v10.4.0-mukan.1
```
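For a consumer that already imports upstream ibc-go, the same swap can presumably also be expressed with a `replace` directive; a hypothetical sketch (the `/v10` module-path suffix and the consumer module name are assumptions, not confirmed by this repository):

```go
// go.mod of a hypothetical consumer chain; paths are illustrative.
module example.com/consumer-chain

go 1.21

require github.com/cosmos/ibc-go/v10 v10.4.0

// Redirect the upstream import path to the sovereign Mukan fork.
replace github.com/cosmos/ibc-go/v10 => git.cw.tr/mukan-network/mukan-ibc/v10 v10.4.0-mukan.1
```

This keeps upstream import paths in source files untouched while the build resolves them against the fork.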
## License
Licensed under the **GNU General Public License v3.0 (GPLv3)**.
*Original IBC-go components remain under their respective Apache 2.0 licenses where applicable.*
RELEASES.md Normal file
@ -0,0 +1,163 @@
# Releases
IBC-Go follows [semantic versioning](https://semver.org), but with the following deviations:
- A state-machine breaking change will result in an increase of the minor version Y (x.Y.z | x > 0).
- An API breaking change will result in an increase of the major number (X.y.z | x > 0). Please note that these changes **will be backwards compatible** (as opposed to canonical semantic versioning; read [Backwards compatibility](#backwards-compatibility) for a detailed explanation).
This is visually explained in the following decision tree:
<p align="center">
<img src="releases-decision-tree.png?raw=true" alt="Releases decision tree" width="40%" />
</p>
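In code form, the decision tree reduces to two ordered checks; a minimal Go sketch (the `version` type is invented for illustration and is not ibc-go code):

```go
package main

import "fmt"

// version is a throwaway type for this illustration only.
type version struct{ major, minor, patch int }

// bump applies the deviation from canonical semver described above:
// an API breaking change bumps the major version, a state-machine
// breaking change bumps the minor version, anything else bumps the patch.
func bump(v version, apiBreaking, stateMachineBreaking bool) version {
	switch {
	case apiBreaking:
		return version{v.major + 1, 0, 0}
	case stateMachineBreaking:
		return version{v.major, v.minor + 1, 0}
	default:
		return version{v.major, v.minor, v.patch + 1}
	}
}

func main() {
	fmt.Println(bump(version{8, 3, 0}, true, false))  // API break -> v9.0.0
	fmt.Println(bump(version{8, 3, 0}, false, true))  // state-machine break -> v8.4.0
	fmt.Println(bump(version{8, 3, 0}, false, false)) // neither -> v8.3.1
}
```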
When bumping the dependencies of [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) and [CometBFT](https://github.com/cometbft/cometbft) we will only treat patch releases as non state-machine breaking.
## Backwards compatibility
[ibc-go](https://github.com/cosmos/ibc-go) and the [IBC protocol specification](https://github.com/cosmos/ibc) maintain different versions. Furthermore, ibc-go serves several different user groups (chains, IBC app developers, relayers, IBC light client developers). Each of these groups has different expectations of what *backwards compatible* means. It simply isn't possible to categorize a change as backwards or non backwards compatible for all user groups. We are primarily interested in when our API breaks and when changes are state machine breaking (thus requiring a coordinated upgrade). This is scoping the meaning of ibc-go to that of those interacting with the code (IBC app developers, relayers, IBC light client developers), not chains using IBC to communicate (that should be encapsulated by the IBC protocol specification versioning).
To summarize: **All our ibc-go releases allow chains to communicate successfully with any chain running any version of our code**. That is to say, we are still using IBC protocol specification v1.0 (v10 will also include support for the IBC protocol specification v2.0).
## Release cycle
IBC-Go follows a traditional release cycle involving alpha, beta, and rc (release candidate) releases before finalizing a new version. As ibc-go works in a non-traditional area, we apply our own interpretation to each release type. We reserve the right to make both Go API breaking changes and state machine breaking changes throughout the entire release cycle. The stable release guarantees do not go into effect until a final release is performed.
It is never advisable to use a non-final release in production.
### Alpha
Alpha releases are intended to make available new features as soon as they are functional. No correctness guarantees are made and alpha releases **may** contain serious security vulnerabilities, bugs, and lack of user tooling, so long as they don't affect the core functionality.
Initial users of alpha releases are expected to be advanced, patient, and capable of handling unusual errors. Very basic integration testing will be performed by the ibc-go development team before alpha releases.
An internal audit is typically performed before the alpha release allowing the development team to gauge the maturity and stability of changes included in the next release.
### Beta
Beta releases are intended to signal design stability. While the go API is still subject to change, the core design of the new features should not be. Developers integrating the new features should expect to handle breaking changes when upgrading to RC's.
Beta releases should not be made with known bugs or security vulnerabilities. Beta releases should focus on ironing out remaining bugs and filling out the UX functionality required by a final release. Beta releases should have a clearly defined scope of the features that will be included in the release. Only highly requested feature additions should be acted upon in this phase.
When the development team has determined a release is ready to enter the RC phase, a final security audit should be performed. The security audit should be limited to looking for bugs and security vulnerabilities. Code improvements may be noted, but they should not be acted upon unless highly desirable.
### RC
RCs are release candidates. Final releases should contain few to no changes in comparison to the latest RC. Changes included between RC releases should be limited to:
- Improved testing
- UX additions
- Bug fixes
- Highly requested changes by the community
A release should not be finalized until the development team and the external community have done sufficient integration tests on the targeted release.
## Stable Release Policy
The beginning of a new major release series is marked by the release of a new major version. A major release series is comprised of all minor and patch releases made under the same major version number. The series continues to receive bug fixes (released as minor or patch releases) until it reaches end of life. The date when a major release series reaches end of life is determined by one of the two following methods:
- If the next major release is made within the first 6 months, then the end of life date of the major release series is 18 months after its initial release.
- If the next major release is made 6 months after the initial release, then the end of life date of the major release series is 12 months after the release date of the next major release.
For example, if the current major release series is v1 and was released on January 1st, 2022, then v1 will be supported at least until July 1st, 2023. If v2 is published on August 1st, 2022 (more than 6 months after v1's initial release), then v1's end of life will be August 1st, 2023.
Only the following major release series have a stable release status. All missing minor release versions have been discontinued.
We reserve the right to drop support for releases if they are deemed unused (for example, because the Cosmos SDK version they depend on is not used or has been deprecated). Likewise, we also reserve the right to drop support for pre v1.0 versions of modules if we deem them unnecessary to maintain (we are only looking to give support for stable major releases).
### ibc-go
|Release|End of Life Date|
|-------|----------------|
|`v7.10.x`|March 17, 2025|
|`v8.7.x`|May 10, 2025|
### Callbacks middleware
|Release|End of Life Date|
|-------|----------------|
|`v0.2.x+ibc-go-v7.3.x`|March 17, 2025|
|`v0.2.x+ibc-go-v8.0.x`|May 10, 2025|
### `08-wasm` light client proxy module
|Release|End of Life Date|
|-------|----------------|
|`v0.3.x+ibc-go-v7.4.x-wasmvm-v1.5.x`|March 17, 2025|
|`v0.4.x+ibc-go-v8.4.x-wasmvm-v2.0.x`|May 10, 2025|
### What pull requests will be included in stable patch-releases?
Pull requests that fix bugs and add features that fall in the following categories:
- **Severe regressions**.
- Bugs that may cause **client applications** to be **largely unusable**.
- Bugs that may cause **state corruption or data loss**.
- Bugs that may directly or indirectly cause a **security vulnerability**.
- Non-breaking features that are strongly requested by the community.
- Non-breaking CLI improvements that are strongly requested by the community.
### What pull requests will NOT be automatically included in stable patch-releases?
As a rule of thumb, the following changes will **NOT** be automatically accepted into stable point-releases:
- **State machine changes**, unless the previous behaviour would result in a consensus halt.
- **Protobuf-breaking changes**.
- **Client-breaking changes**, i.e. changes that prevent gRPC, HTTP and RPC clients from continuing to interact with the node without any change.
- **API-breaking changes**, i.e. changes that prevent client applications from *building without modifications* to the client application's source code.
- **CLI-breaking changes**, i.e. changes that require usage changes for CLI users.
## Deprecation notice
Code that is marked as deprecated in a release will be removed 2 major releases afterwards. For example: if a deprecation notice is added in v8.3.0, then the code will be deleted in v10.0.0.
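The removal schedule is mechanical; a throwaway Go sketch (illustrative only):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// removalVersion returns the release in which code deprecated in the
// given version is scheduled for deletion: two major releases later,
// at the .0.0 of that series.
func removalVersion(deprecatedIn string) string {
	// "v8.3.0" -> "v8" -> 8
	major, _ := strconv.Atoi(strings.TrimPrefix(strings.SplitN(deprecatedIn, ".", 2)[0], "v"))
	return fmt.Sprintf("v%d.0.0", major+2)
}

func main() {
	fmt.Println(removalVersion("v8.3.0")) // v10.0.0
}
```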
## Version matrix
### ibc-go
Versions of Golang, Cosmos SDK and CometBFT used by ibc-go in the currently active releases:
| Go | ibc-go | Cosmos SDK | Tendermint/CometBFT |
|----|--------|------------|---------------------|
| 1.19 | v7.10.0 | v0.47.13 | v0.37.5 |
| 1.21 | v8.7.0 | v0.50.9 | v0.38.11 |
### Callbacks middleware
Versions of Golang, ibc-go, Cosmos SDK and CometBFT used by callbacks middleware in the currently active releases:
| Go | callbacks | ibc-go | Cosmos SDK | Tendermint/CometBFT |
|----|-----------|--------|------------|---------------------|
| 1.19 | v0.2.0+ibc-go-v7.3 | v7.3.0 | v0.47.5 | v0.37.2 |
| 1.21 | v0.2.0+ibc-go-v8.0 | v8.0.0 | v0.50.1 | v0.38.0 |
### `08-wasm` light client proxy module
Versions of Golang, ibc-go, Cosmos SDK and CometBFT used by `08-wasm` module in the currently active releases:
| Go | 08-wasm | ibc-go | Cosmos SDK | Tendermint/CometBFT |
|----|-----------|--------|------------|---------------------|
| 1.19 | v0.3.1+ibc-go-v7.4-wasmvm-v1.5 | v7.4.0 | v0.47.8 | v0.37.4 |
| 1.21 | v0.4.1+ibc-go-v8.4-wasmvm-v2.0 | v8.4.0 | v0.50.7 | v0.38.9 |
## Graphics
The decision tree above was generated with the following code:
```text
%%{init:
{'theme': 'default',
'themeVariables':
{'fontFamily': 'verdana', 'fontSize': '13px'}
}
}%%
flowchart TD
A(Change):::c --> B{API breaking?}
B:::c --> |Yes| C(Increase major version):::c
B:::c --> |No| D{state-machine breaking?}
D:::c --> |Yes| G(Increase minor version):::c
D:::c --> |No| H(Increase patch version):::c
classDef c fill:#eee,stroke:#aaa
```
using [Mermaid](https://mermaid-js.github.io)'s [live editor](https://mermaid.live).
SECURITY.md Normal file
@ -0,0 +1,16 @@
# How to Report a Security Bug
If you believe you have found a security vulnerability in the Interchain Stack, you can report it to our primary vulnerability disclosure channel, the [Cosmos HackerOne Bug Bounty program](https://hackerone.com/cosmos?type=team).
<!-- markdown-link-check-disable-next-line -->
If you prefer to report an issue via email, you may send a bug report to [security@interchain.io](mailto:security@interchain.io) with the issue details, reproduction, impact, and other information. Please submit only one unique email thread per vulnerability. Any issues reported via email are ineligible for bounty rewards.
Artifacts from an email report are saved at the time the email is triaged. Please note: our team is not able to monitor dynamic content (e.g. a Google Docs link that is edited after receipt) throughout the lifecycle of a report. If you would like to share additional information or modify previous information, please include it in an additional reply as an additional attachment.
Please DO NOT file a public issue in this repository to report a security vulnerability.
# Coordinated Vulnerability Disclosure Policy and Safe Harbor
For the most up-to-date version of the policies that govern vulnerability disclosure, please consult the [HackerOne program page](https://hackerone.com/cosmos?type=team&view_policy=true).
The policy hosted on HackerOne is the official Coordinated Vulnerability Disclosure policy and Safe Harbor for the Interchain Stack, and the teams and infrastructure it supports, and it supersedes previous security policies that have been used in the past by individual teams and projects with targets in scope of the program.
buf.work.yaml Normal file
@ -0,0 +1,8 @@
# Generated by "buf config migrate-v1beta1". Edit as necessary, and
# remove this comment when you're finished.
#
# This workspace file points to the roots found in your
# previous "buf.yaml" configuration.
version: v1
directories:
- proto
@ -0,0 +1,195 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"go/ast"
"go/parser"
"go/token"
"io/fs"
"os"
"path/filepath"
"slices"
"sort"
"strings"
)
const (
testNamePrefix = "Test"
testFileNameSuffix = "_test.go"
e2eTestDirectory = "e2e"
// testEntryPointEnv specifies a single test function to run if provided.
testEntryPointEnv = "TEST_ENTRYPOINT"
// testExclusionsEnv is a comma separated list of test function names that will not be included
// in the results of this script.
testExclusionsEnv = "TEST_EXCLUSIONS"
// testNameEnv if provided returns a single test entry so that only one test is actually run.
testNameEnv = "TEST_NAME"
)
// GithubActionTestMatrix represents the "include" matrix consumed by the GitHub Actions workflow.
type GithubActionTestMatrix struct {
Include []TestSuitePair `json:"include"`
}
type TestSuitePair struct {
Test string `json:"test"`
EntryPoint string `json:"entrypoint"`
}
func main() {
githubActionMatrix, err := getGithubActionMatrixForTests(e2eTestDirectory, getTestToRun(), getTestEntrypointToRun(), getExcludedTestFunctions())
if err != nil {
fmt.Printf("error generating github action json: %s", err)
os.Exit(1)
}
ghBytes, err := json.Marshal(githubActionMatrix)
if err != nil {
fmt.Printf("error marshalling github action json: %s", err)
os.Exit(1)
}
fmt.Println(string(ghBytes))
}
// getTestEntrypointToRun returns the specified test function to run if present, otherwise
// it returns an empty string which will result in running all test suites.
func getTestEntrypointToRun() string {
testSuite, ok := os.LookupEnv(testEntryPointEnv)
if !ok {
return ""
}
return testSuite
}
// getTestToRun returns the specified test function to run if present.
// If specified, only this test will be run.
func getTestToRun() string {
testName, ok := os.LookupEnv(testNameEnv)
if !ok {
return ""
}
return testName
}
// getExcludedTestFunctions returns a list of test functions that we don't want to run.
func getExcludedTestFunctions() []string {
exclusions, ok := os.LookupEnv(testExclusionsEnv)
if !ok {
return nil
}
return strings.Split(exclusions, ",")
}
// getGithubActionMatrixForTests returns a json string representing the contents that should go in the matrix
// field in a github action workflow. This string can be used with `fromJSON(str)` to dynamically build
// the workflow matrix to include all E2E tests under the e2eRootDirectory directory.
func getGithubActionMatrixForTests(e2eRootDirectory, testName string, suite string, excludedItems []string) (GithubActionTestMatrix, error) {
testSuiteMapping := map[string][]string{}
fset := token.NewFileSet()
err := filepath.Walk(e2eRootDirectory, func(path string, info fs.FileInfo, err error) error {
if err != nil {
return fmt.Errorf("error walking e2e directory: %w", err)
}
// only look at test files
if !strings.HasSuffix(path, testFileNameSuffix) {
return nil
}
f, err := parser.ParseFile(fset, path, nil, 0)
if err != nil {
return fmt.Errorf("failed parsing file: %w", err)
}
suiteNameForFile, testCases, err := extractSuiteAndTestNames(f)
if err != nil {
// skip files that do not contain a test suite entry point.
return nil
}
if testName != "" && slices.Contains(testCases, testName) {
testCases = []string{testName}
}
if slices.Contains(excludedItems, suiteNameForFile) {
return nil
}
if suite == "" || suiteNameForFile == suite {
testSuiteMapping[suiteNameForFile] = testCases
}
return nil
})
if err != nil {
return GithubActionTestMatrix{}, err
}
gh := GithubActionTestMatrix{
Include: []TestSuitePair{},
}
for testSuiteName, testCases := range testSuiteMapping {
for _, testCaseName := range testCases {
gh.Include = append(gh.Include, TestSuitePair{
Test: testCaseName,
EntryPoint: testSuiteName,
})
}
}
if len(gh.Include) == 0 {
return GithubActionTestMatrix{}, errors.New("no test cases found")
}
// Sort the test cases by name so that the order is consistent.
sort.SliceStable(gh.Include, func(i, j int) bool {
return gh.Include[i].Test < gh.Include[j].Test
})
if testName != "" && len(gh.Include) != 1 {
return GithubActionTestMatrix{}, fmt.Errorf("expected exactly 1 test in the output matrix but got %d", len(gh.Include))
}
return gh, nil
}
// extractSuiteAndTestNames extracts the name of the test suite function as well
// as all tests associated with it in the same file.
func extractSuiteAndTestNames(file *ast.File) (string, []string, error) {
var suiteNameForFile string
var testCases []string
for _, d := range file.Decls {
if f, ok := d.(*ast.FuncDecl); ok {
functionName := f.Name.Name
if isTestSuiteMethod(f) {
if suiteNameForFile != "" {
return "", nil, fmt.Errorf("found a second test suite function: %s when %s was already found", f.Name.Name, suiteNameForFile)
}
suiteNameForFile = functionName
continue
}
if isTestFunction(f) {
testCases = append(testCases, functionName)
}
}
}
if suiteNameForFile == "" {
return "", nil, fmt.Errorf("file %s had no test suite test case", file.Name.Name)
}
return suiteNameForFile, testCases, nil
}
// isTestSuiteMethod returns true if the function is a test suite function.
// e.g. func TestFeeMiddlewareTestSuite(t *testing.T) { ... }
func isTestSuiteMethod(f *ast.FuncDecl) bool {
return strings.HasPrefix(f.Name.Name, testNamePrefix) && len(f.Type.Params.List) == 1
}
// isTestFunction returns true if the function name starts with "Test" and has no parameters,
// as suite test methods, unlike the suite entry point, do not accept a *testing.T.
func isTestFunction(f *ast.FuncDecl) bool {
return strings.HasPrefix(f.Name.Name, testNamePrefix) && len(f.Type.Params.List) == 0
}
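The JSON printed by this script is consumed by a workflow via `fromJSON` to build the test matrix. As a rough, self-contained sketch of the shape it emits (the `pair` type and `renderMatrix` helper below are illustrative stand-ins, not part of the script):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pair mirrors TestSuitePair from the script above (illustrative only).
type pair struct {
	Test       string `json:"test"`
	EntryPoint string `json:"entrypoint"`
}

// renderMatrix marshals the pairs into the {"include": [...]} shape
// that a GitHub Actions matrix built with fromJSON expects.
func renderMatrix(pairs []pair) string {
	b, err := json.Marshal(struct {
		Include []pair `json:"include"`
	}{Include: pairs})
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(renderMatrix([]pair{
		{Test: "TestMsgSendTx", EntryPoint: "TestInterchainAccountsTestSuite"},
	}))
}
```

A workflow would then wire this output into `strategy.matrix` with `fromJSON`; the exact plumbing (job outputs, step ids) depends on the workflow file.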


@ -0,0 +1,191 @@
package main
import (
"os"
"path"
"sort"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
const (
nonTestFile = "not_test_file.go"
goTestFileNameOne = "first_go_file_test.go"
goTestFileNameTwo = "second_go_file_test.go"
)
func TestGetGithubActionMatrixForTests(t *testing.T) {
t.Run("empty dir with no test cases fails", func(t *testing.T) {
testingDir := t.TempDir()
_, err := getGithubActionMatrixForTests(testingDir, "", "", nil)
assert.Error(t, err)
})
t.Run("only test functions are picked up", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, goTestFileNameOne)
gh, err := getGithubActionMatrixForTests(testingDir, "", "", nil)
assert.NoError(t, err)
expected := GithubActionTestMatrix{
Include: []TestSuitePair{
{
EntryPoint: "TestFeeMiddlewareTestSuite",
Test: "TestA",
},
{
EntryPoint: "TestFeeMiddlewareTestSuite",
Test: "TestB",
},
},
}
assertGithubActionTestMatricesEqual(t, expected, gh)
})
t.Run("all files are picked up", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, goTestFileNameOne)
createFileWithTestSuiteAndTests(t, "TransferTestSuite", "TestC", "TestD", testingDir, goTestFileNameTwo)
gh, err := getGithubActionMatrixForTests(testingDir, "", "", nil)
assert.NoError(t, err)
expected := GithubActionTestMatrix{
Include: []TestSuitePair{
{
EntryPoint: "TestTransferTestSuite",
Test: "TestC",
},
{
EntryPoint: "TestFeeMiddlewareTestSuite",
Test: "TestA",
},
{
EntryPoint: "TestFeeMiddlewareTestSuite",
Test: "TestB",
},
{
EntryPoint: "TestTransferTestSuite",
Test: "TestD",
},
},
}
assertGithubActionTestMatricesEqual(t, expected, gh)
})
t.Run("single test can be specified", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, goTestFileNameOne)
createFileWithTestSuiteAndTests(t, "TransferTestSuite", "TestC", "TestD", testingDir, goTestFileNameTwo)
gh, err := getGithubActionMatrixForTests(testingDir, "TestA", "TestFeeMiddlewareTestSuite", nil)
assert.NoError(t, err)
expected := GithubActionTestMatrix{
Include: []TestSuitePair{
{
EntryPoint: "TestFeeMiddlewareTestSuite",
Test: "TestA",
},
},
}
assertGithubActionTestMatricesEqual(t, expected, gh)
})
t.Run("error if single test doesn't exist", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, goTestFileNameOne)
_, err := getGithubActionMatrixForTests(testingDir, "TestThatDoesntExist", "TestFeeMiddlewareTestSuite", nil)
assert.Error(t, err)
})
t.Run("non test files are skipped", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, nonTestFile)
gh, err := getGithubActionMatrixForTests(testingDir, "", "", nil)
assert.Error(t, err)
assert.Empty(t, gh.Include)
})
t.Run("fails when there are multiple suite runs", func(t *testing.T) {
testingDir := t.TempDir()
createFileWithTestSuiteAndTests(t, "FeeMiddlewareTestSuite", "TestA", "TestB", testingDir, nonTestFile)
fileWithTwoSuites := `package foo
func SuiteOne(t *testing.T) {
suite.Run(t, new(FeeMiddlewareTestSuite))
}
func SuiteTwo(t *testing.T) {
suite.Run(t, new(FeeMiddlewareTestSuite))
}
type FeeMiddlewareTestSuite struct {}
`
err := os.WriteFile(path.Join(testingDir, goTestFileNameOne), []byte(fileWithTwoSuites), os.FileMode(0o777))
assert.NoError(t, err)
_, err = getGithubActionMatrixForTests(testingDir, "", "", nil)
assert.Error(t, err)
})
}
func assertGithubActionTestMatricesEqual(t *testing.T, expected, actual GithubActionTestMatrix) {
t.Helper()
// sort by both suite and test as the order of the end result does not matter as
// all tests will be run.
sort.SliceStable(expected.Include, func(i, j int) bool {
memberI := expected.Include[i]
memberJ := expected.Include[j]
if memberI.EntryPoint == memberJ.EntryPoint {
return memberI.Test < memberJ.Test
}
return memberI.EntryPoint < memberJ.EntryPoint
})
sort.SliceStable(actual.Include, func(i, j int) bool {
memberI := actual.Include[i]
memberJ := actual.Include[j]
if memberI.EntryPoint == memberJ.EntryPoint {
return memberI.Test < memberJ.Test
}
return memberI.EntryPoint < memberJ.EntryPoint
})
assert.Equal(t, expected.Include, actual.Include)
}
func goTestFileContents(suiteName, fnName1, fnName2 string) string {
replacedSuiteName := strings.ReplaceAll(`package foo
func TestSuiteName(t *testing.T) {
suite.Run(t, new(SuiteName))
}
type SuiteName struct {}
func (s *SuiteName) fnName1() {}
func (s *SuiteName) fnName2() {}
func (s *SuiteName) suiteHelper() {}
func helper() {}
`, "SuiteName", suiteName)
replacedFn1Name := strings.ReplaceAll(replacedSuiteName, "fnName1", fnName1)
return strings.ReplaceAll(replacedFn1Name, "fnName2", fnName2)
}
func createFileWithTestSuiteAndTests(t *testing.T, suiteName, fn1Name, fn2Name, dir, filename string) {
t.Helper()
goFileContents := goTestFileContents(suiteName, fn1Name, fn2Name)
err := os.WriteFile(path.Join(dir, filename), []byte(goFileContents), os.FileMode(0o777))
assert.NoError(t, err)
}

codecov.yml Normal file

@ -0,0 +1,24 @@
coverage:
precision: 2
range:
- 70.0
- 100.0
round: down
status:
project:
default:
target: auto
threshold: 0%
base: auto
comment:
require_changes: "coverage_drop OR uncovered_patch" # Only comment when coverage drops or there is uncovered code in the commit
ignore:
- "**/*.pb.go"
- "**/*.pb.gw.go"
- "docs"
- "simapp"
- "testing"
- "modules/light-clients/08-wasm/testing"
- "scripts"
- "contrib"
- "cmd"

contrib/devtools/Makefile Normal file

@ -0,0 +1,76 @@
###
# Find OS and Go environment
# GO contains the Go binary
# FS contains the OS file separator
###
ifeq ($(OS),Windows_NT)
GO := $(shell where go.exe 2> NUL)
FS := "\\"
else
GO := $(shell command -v go 2> /dev/null)
FS := "/"
endif
ifeq ($(GO),)
$(error could not find go. Is it in PATH? $(GO))
endif
###############################################################################
### Functions ###
###############################################################################
go_get = $(if $(findstring Windows_NT,$(OS)),\
IF NOT EXIST $(GITHUBDIR)$(FS)$(1)$(FS) ( mkdir $(GITHUBDIR)$(FS)$(1) ) else (cd .) &\
IF NOT EXIST $(GITHUBDIR)$(FS)$(1)$(FS)$(2)$(FS) ( cd $(GITHUBDIR)$(FS)$(1) && git clone https://github.com/$(1)/$(2) ) else (cd .) &\
,\
mkdir -p $(GITHUBDIR)$(FS)$(1) &&\
(test ! -d $(GITHUBDIR)$(FS)$(1)$(FS)$(2) && cd $(GITHUBDIR)$(FS)$(1) && git clone https://github.com/$(1)/$(2)) || true &&\
)\
cd $(GITHUBDIR)$(FS)$(1)$(FS)$(2) && git fetch origin && git checkout -q $(3)
mkfile_path := $(abspath $(lastword $(MAKEFILE_LIST)))
mkfile_dir := $(shell cd $(shell dirname $(mkfile_path)); pwd)
###############################################################################
### Tools ###
###############################################################################
PREFIX ?= /usr/local
BIN ?= $(PREFIX)/bin
UNAME_S ?= $(shell uname -s)
UNAME_M ?= $(shell uname -m)
GOPATH ?= $(shell $(GO) env GOPATH)
GITHUBDIR := $(GOPATH)$(FS)src$(FS)github.com
BUF_VERSION ?= 0.11.0
TOOLS_DESTDIR ?= $(GOPATH)/bin
STATIK = $(TOOLS_DESTDIR)/statik
RUNSIM = $(TOOLS_DESTDIR)/runsim
tools: tools-stamp
tools-stamp: statik runsim
# Create dummy file to satisfy dependency and avoid
# rebuilding when this Makefile target is hit twice
# in a row.
touch $@
# Install the runsim binary
statik: $(STATIK)
$(STATIK):
@echo "Installing statik..."
@go install github.com/rakyll/statik@v0.1.6
# Install the runsim binary
runsim: $(RUNSIM)
$(RUNSIM):
@echo "Installing runsim..."
@go install github.com/cosmos/tools/cmd/runsim@v1.0.0
tools-clean:
rm -f $(STATIK) $(GOLANGCI_LINT) $(RUNSIM)
rm -f tools-stamp
.PHONY: tools-clean statik runsim

contrib/test_cover.sh Normal file

@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -e
PKGS=$(go list ./... | grep -v '/simapp')
echo "mode: atomic" > coverage.txt
for pkg in $PKGS; do
go test -v -timeout 30m -race -coverprofile=profile.out -covermode=atomic -tags='ledger test_ledger_mock' "$pkg"
if [ -f profile.out ]; then
tail -n +2 profile.out >> coverage.txt;
rm profile.out
fi
done
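The `go list ./... | grep -v '/simapp'` pipeline above simply drops simapp packages from the module list before running coverage. A standalone sketch of the same filter, using made-up package paths:

```shell
# hypothetical package paths; the grep mirrors the script's '/simapp' exclusion
printf '%s\n' \
  "github.com/cosmos/ibc-go/modules/core" \
  "github.com/cosmos/ibc-go/simapp" \
  "github.com/cosmos/ibc-go/modules/apps/transfer" |
  grep -v '/simapp'
```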

docs/.gitignore vendored Normal file

@ -0,0 +1,20 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@ -0,0 +1,7 @@
// This file is used by markdownlint-cli2 to configure the linting process
// in conjunction with .markdownlint.jsonc.
{
"ignores": [
"node_modules/**"
]
}

docs/.markdownlint.jsonc Normal file

@ -0,0 +1,27 @@
{
"default": true,
"MD003": {
"style": "atx"
}, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md003---heading-style
"MD004": {
"style": "dash"
}, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md004---unordered-list-style
"MD007": {
"indent": 4
}, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md007---unordered-list-indentation
"MD009": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md009---trailing-spaces
"MD010": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md010---hard-tabs
"MD013": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md013---line-length
"MD024": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md024---multiple-headings-with-the-same-content
"MD025": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md025---multiple-top-level-headings-in-the-same-document
"MD029": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix
"MD033": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md033---inline-html
"MD036": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md036---emphasis-used-instead-of-a-heading
"MD041": false, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md041---first-line-in-a-file-should-be-a-top-level-heading
"MD049": {
"style": "asterisk"
}, // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md049---emphasis-style-should-be-consistent
"MD050": {
"style": "asterisk"
} // https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md050---strong-style-should-be-consistent
}

docs/README.md Normal file

@ -0,0 +1,289 @@
# IBC-Go Documentation
Welcome to the IBC-Go documentation! This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
## Table of Contents
- [IBC-Go Documentation](#ibc-go-documentation)
- [Table of Contents](#table-of-contents)
- [Configuration](#configuration)
- [Local Development and Deployment](#local-development-and-deployment)
- [Installation](#installation)
- [Local Development](#local-development)
- [Build](#build)
- [Serve](#serve)
- [Updating the Documentation](#updating-the-documentation)
- [Best practices](#best-practices)
- [File and Directory Naming Conventions](#file-and-directory-naming-conventions)
- [Code Blocks](#code-blocks)
- [Links](#links)
- [Multi-Documentation Linking](#multi-documentation-linking)
- [Static Assets](#static-assets)
- [Raw Assets](#raw-assets)
- [Technical writing course](#technical-writing-course)
- [Versioning](#versioning)
- [Terminology](#terminology)
- [Overview](#overview)
- [Tagging a new version](#tagging-a-new-version)
- [Adding a new version](#adding-a-new-version)
- [Updating an existing version](#updating-an-existing-version)
- [Deleting a version](#deleting-a-version)
## Configuration
The Docusaurus configuration file is located at `./docusaurus.config.js`. This file contains the configuration for the sidebar, navbar, footer, and other settings. Sidebars are created in `./sidebars.js`.
## Local Development and Deployment
### Installation
```bash
npm install
```
### Local Development
```bash
npm start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server. However, in the local development environment, some plugins, like `@docusaurus/plugin-client-redirects`, will not work at all. This is why the landing page is an error page in the local development environment, and why you have to click on the correct docs version to see the documentation. This is not the case in the production environment. To view the production environment, you must [build](#build) and [serve](#serve) the website locally.
### Build
```bash
npm run build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
### Serve
```bash
npm run serve
```
This command starts a local production server and opens up a browser window.
### Lint
From the root of the repo:
```bash
make lint-docs
```
This command will run `markdownlint-cli2` (if you don't have it installed, see the [install docs](https://github.com/DavidAnson/markdownlint-cli2#install)) and lint all markdown files in `./docs/docs` (i.e. it will not lint versioned docs).
### Check links
From the root of the repo:
```bash
make check-docs-link
```
This command will run `lychee` (if you don't have it, see the [install docs](https://lychee.cli.rs/installation/)) and check all links in `./docs/docs` (i.e. it will not check versioned docs).
Since a lot of our links are to github, this command easily gets rate limited, so it has been set up with a long retry sequence for links. You may need to run it multiple times to check all links.
The results (except rate limit responses) are cached for 1 week, so once you have run it, it will not keep checking the same links twice (this is primarily to help with rate limiting).
## Updating the Documentation
The documentation website is autogenerated from the markdown files found in [docs](./docs) directory. Each directory in `./docs/` represents a category to be displayed in the sidebar. If you create a new directory, you must create a `_category_.json` file in that directory with the following contents:
```json
{
"label": "Sidebar Label",
"position": 1, // position of the category in the sidebar
"link": null
}
```
The `position` key above is used to order the categories in the sidebar. This position key pertains to the order of this category in the parent directory.
If you create a new markdown file within a category (the `./docs/` directory is itself a category), you must add the following frontmatter to the top of the markdown file:
```yaml
---
title: Title of the file # title of the file in the sidebar
sidebar_label: Sidebar Label # title of the file in the sidebar
sidebar_position: 1 # position of the file in the sidebar
slug: /migrations/v5-to-v6 # the url of the file
---
```
The `link` key in `_category_.json` determines if the category has an introductory page that comes before any content pages. If `link` is `null`, then the category does not have an introductory page. If there is a markdown file you wish to link, you should do
```json
{
"label": "Sidebar Label",
"position": 1, // position of the category in the sidebar
"link": { "type": "doc", "id": "intro" }
}
```
The `id` key can be defined in the frontmatter of the markdown file. Or, you can use the id tag as an extension to the url of the current page. For example, the following frontmatter on a markdown file in the same directory as the `_category_.json` file shown above will link to the markdown file:
```yaml
---
title: Title
sidebar_label: Sidebar Label
sidebar_position: 0 # should be zero for intro pages
slug: /ibc/upgrades/intro
---
```
## Best practices
- Check the spelling and grammar, even if you have to copy and paste from an external source.
- Use simple sentences. Easy-to-read sentences mean the reader can quickly use the guidance you share.
- Try to express your thoughts in a concise and clean way.
- Either leave a space or use a `-` between the acronyms ADR and ICS and the corresponding number (e.g. ADR 008 or ADR-008, and ICS 27 or ICS-27).
- Don't overuse `code` format when writing in plain English.
- Follow Google developer documentation [style guide](https://developers.google.com/style).
- Check the meaning of words in Microsoft's [A-Z word list and term collections](https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/accessibility-terms) (use the search input!).
- We recommend using RFC keywords in user documentation (lowercase). The RFC keywords are: "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL". They are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
- Lint the markdown files for documentation with [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli). Run `make docs-lint` (you will need to have `markdownlint-cli` installed, so please follow the [installation instructions](https://github.com/igorshubovych/markdownlint-cli#installation)).
### File and Directory Naming Conventions
Inside `/docs/docs/`:
- All files should be named in `kebab-case`.
- All files should have a two digit prefix, indicating the order in which they should be read and displayed in their respective categories. For example, `01-overview.md` should be read before `02-integration.md`. If this order changes, the prefix should be updated. Note that the ordering is enforced by the frontmatter and not the file name.
- **All files that end in `.template.md` will be ignored by the build process.**
- The prefix `00-` is reserved for root links of categories (if a category has a root link this should be defined in `_category_.json`). For example, see [`docs/01-ibc/05-upgrades/00-intro.md`](./docs/01-ibc/05-upgrades/00-intro.md) and [`docs/01-ibc/05-upgrades/_category_.json`](./docs/01-ibc/05-upgrades/_category_.json).
- All category directories should be named in `kebab-case`.
- All category directories must have a `_category_.json` file.
- All category directories should have a two digit prefix (except for the root `./docs` category), indicating the order in which they should be read and displayed in their respective categories. For example, contents of `./docs/01-ibc/03-apps/` should be read before `./docs/01-ibc/07-relayer.md`. If this order changes, the prefix should be updated. Note that the ordering is enforced by the frontmatter of the markdown files and `_category_.json` files, not the file name.
- The images for each documentation should be kept in the same directory as the markdown file that uses them. This will likely require creating a new directory for each new category. The goal of this is to make versioning easier, discourage repeated use of the image, and make it easier to find images.
### Code Blocks
Code blocks in docusaurus are super-powered; read more about them [here](https://docusaurus.io/docs/markdown-features/code-blocks). The three most important features for us are:
1. We can add a `title` to the code block, which will be displayed above the code block. (This should be used to display the file path of the code block.)
2. We can add a `reference` tag to the code block, which will reference github to create the code block. **You should always use hyperlinks in reference codeblocks.** Here is what a typical code block should look like:
````ignore
```go reference title="modules/apps/transfer/keeper/keeper.go"
https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/transfer/keeper/keeper.go#L19-L31
```
````
3. We can highlight lines in the code block by adding `// highlight-next-line` before the line we want to highlight. For example, we should use this to highlight diffs. Here is an example:
````ignore
```go
import (
...
// highlight-next-line
+ ibctm "github.com/cosmos/ibc-go/v6/modules/light-clients/07-tendermint"
...
)
```
````
### Links
In docusaurus, there are three ways to link to other pages:
1. File Paths (relative or absolute)
2. URLs (relative or absolute)
3. Hyperlinks
In this section, we will discuss when to use each.
#### Multi-Documentation Linking
Technically, there are four docs being maintained in this repo:
1. Found in `docs/docs/` (this is the one displayed on the website in the "Documentation" tab)
2. Found in `docs/architecture/` (this is the one displayed on the website in the "Architecture Decision Records" tab)
3. Found in `docs/events/` (deprecated; this is not displayed on the website, but is hosted under the `/events/` url)
4. Found in `docs/params/` (deprecated; this is not displayed on the website, but is hosted under the `/params/` url)
When referencing a markdown file, you should use relative file paths if they are in the same docs directory from above. For example, if you are in `docs/docs/01-ibc` and want to link to `docs/docs/02-apps/01-transfer/01-overview.md`, you should use the relative link `../02-apps/01-transfer/01-overview.md`.
If the file you are referencing is in a different docs directory, you should use an absolute URL. For example, if you are in `docs/docs/01-ibc` and want to link to `docs/architecture/adr-001-coin-source-tracing.md`, you should use the absolute URL (not absolute file path), in this case `/architecture/adr-001-coin-source-tracing`. You can find the absolute URL by looking at the slug in the frontmatter of the markdown file you want to link to. If the frontmatter slug is not set (such as in `docs/architecture/adr-001-coin-source-tracing.md`), you should use the url that docusaurus generates for it. You can find this by looking at the url of the page in the browser.
Note that when referencing any file outside of the parent `docs/` directory, you should always use a hyperlink.
#### Static Assets
Static assets are the non-code files that are directly copied to the build output. They include **images**, stylesheets, favicons, fonts, etc.
By default, you are suggested to put these assets in the `static/` directory. Every file you put into that directory will be copied into the root of the generated build folder with the directory hierarchy preserved. E.g. if you add a file named `sun.jpg` to the static folder, it will be copied to `build/sun.jpg`.
These assets should be referenced using absolute URLs. For example, if you have an image in `static/img/cosmos-logo-bw.png`, you should reference it using `/img/cosmos-logo-bw.png`.
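For instance, a page could embed that image with a plain markdown reference (the logo path is just the example above):

```markdown
![Cosmos logo](/img/cosmos-logo-bw.png)
```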
#### Raw Assets
If you want to link a raw file, you should link to it using `@site` + its base path. For example, if you want to link to the raw markdown file `/architecture/adr.template.md`, you should use the absolute URL `@site/architecture/adr.template.md`.
### Technical writing course
Google provides a free [course](https://developers.google.com/tech-writing/overview) for technical writing.
## Versioning
Versioning only applies to documentation and not the ADRs found in the `./architecture/` directory.
### Terminology
- Current version: The version placed in the `./docs/` folder. This version is the one that is displayed on the website by default, referred to as next.
- Latest version: This version is defined in `./docusaurus.config.js` file under the `lastVersion` key.
### Overview
A typical versioned doc site looks like below:
```ignore
docs/
├── sidebars.json # sidebar for the current docs version
├── docs/ # docs directory for the current docs version
│ ├── 01-foo/
│ │ └── 01-bar.md # https://mysite.com/docs/next/01-foo/01-bar
│ └── 00-intro.md # https://mysite.com/docs/next/00-intro
├── versions.json # file to indicate what versions are available
├── versioned_docs/
│ ├── version-v1.1.0/
│ │ ├── 01-foo/
│ │ │ └── 01-bar.md # https://mysite.com/docs/01-foo/01-bar
│ │ └── 00-intro.md
│ └── version-v1.0.0/
│ ├── 01-foo/
│ │ └── 01-bar.md # https://mysite.com/docs/v1.0.0/01-foo/01-bar
│ └── 00-intro.md
├── versioned_sidebars/
│ ├── version-v1.1.0-sidebars.json
│ └── version-v1.0.0-sidebars.json
├── docusaurus.config.js
└── package.json
```
The `./versions.json` file is a list of version names, ordered from newest to oldest.
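For the hypothetical layout sketched above, `./versions.json` would contain:

```json
[
  "v1.1.0",
  "v1.0.0"
]
```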
### Tagging a new version
It is possible to tag the current version of the docs as a new version. This will create the appropriate files in `./versioned_docs/` and `./versioned_sidebars/` directories, and modify the `./versions.json` file. To do this, run the following command:
```bash
npm run docusaurus docs:version v7.1.0
```
### Adding a new version
To add a new version:
1. Create a new directory in `./versioned_docs/` called `version-vX.Y.Z` where `X.Y.Z` is the version number. This directory should contain the markdown files for the new version.
2. Create a new file in `./versioned_sidebars/` called `version-vX.Y.Z-sidebars.json`. This file should contain the sidebar for the new version.
3. Add the version to the `./versions.json` file. The list should be ordered from newest to oldest.
4. If needed, make any configuration changes in `./docusaurus.config.js`. For example, updating the `lastVersion` key in `./docusaurus.config.js` to the latest version.
### Updating an existing version
You can update multiple docs versions at the same time because each directory in `./versioned_docs/` represents specific routes when published. Make changes by editing the markdown files in the appropriate version directory.
### Deleting a version
When a version is no longer supported, you can delete it by removing it from `versions.json` and deleting the corresponding files in `./versioned_docs/` and `./versioned_sidebars/`.


@ -0,0 +1,48 @@
---
sidebar_position: 1
---
# Architecture Decision Records (ADR)
This is a location to record all high-level architecture decisions in the ibc-go project.
You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
An ADR should provide:
- Context on the relevant goals and the current state
- Proposed changes to achieve the goals
- Summary of pros and cons
- References
- Changelog
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
justification for a change in architecture, or for the architecture of something
new. The spec is a much more compressed and streamlined summary of everything as
it is or should be.
If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Note the context/background should be written in the present tense.
To suggest an ADR, please make use of the [ADR template](https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr.template.md) provided.
## Table of Contents
| ADR \# | Description | Status |
| ------ | ----------- | ------ |
| [001](./adr-001-coin-source-tracing.md) | ICS-20 coin denomination format | Accepted, Implemented |
| [002](./adr-002-go-module-versioning.md) | Go module versioning | Accepted |
| [003](./adr-003-ics27-acknowledgement.md) | ICS27 acknowledgement format | Accepted |
| [004](./adr-004-ics29-lock-fee-module.md) | ICS29 module locking upon escrow out of balance | Accepted |
| [005](./adr-005-consensus-height-events.md) | `UpdateClient` events - `ClientState` consensus heights | Accepted |
| [006](./adr-006-02-client-refactor.md) | ICS02 client refactor | Accepted |
| [007](./adr-007-solomachine-signbytes.md) | ICS06 Solo machine sign bytes | Accepted |
| [008](./adr-008-app-caller-cbs.md) | Callback to IBC Actors | Accepted |
| [009](./adr-009-v6-ics27-msgserver.md) | ICS27 message server addition | Accepted |
| [010](./adr-010-light-clients-as-sdk-modules.md) | IBC light clients as SDK modules | Accepted |
| [011](./adr-011-transfer-total-escrow-state-entry.md) | ICS20 state entry for total amount of tokens in escrow | Accepted |
| [015](./adr-015-ibc-packet-receiver.md) | IBC Packet Routing | Accepted |
| [025](./adr-025-ibc-passive-channels.md) | IBC passive channels | Deprecated |
| [026](./adr-026-ibc-client-recovery-mechanisms.md) | IBC client recovery mechanisms | Accepted |
| [027](./adr-027-ibc-wasm.md) | Wasm based light clients | Accepted |
# ADR 001: Coin Source Tracing
## Changelog
- 2020-07-09: Initial Draft
- 2020-08-11: Implementation changes
## Status
Accepted, Implemented
## Context
The specification for IBC cross-chain fungible token transfers
([ICS20](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer)), needs to
be aware of the origin of any token denomination in order to relay a `Packet` which contains the sender
and recipient addresses in the
[`FungibleTokenPacketData`](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer#data-structures).
The packet relay logic works based on two cases (per
[specification](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer#packet-relay) and [Colin Axnér](https://github.com/colin-axner)'s description):
1. Sender chain is acting as the source zone. The coins are transferred
to an escrow address (i.e locked) on the sender chain and then transferred
to the receiving chain through IBC TAO logic. It is expected that the
receiving chain will mint vouchers to the receiving address.
2. Sender chain is acting as the sink zone. The coins (vouchers) are burned
on the sender chain and then transferred to the receiving chain through IBC
TAO logic. It is expected that the receiving chain, which had previously
sent the original denomination, will unescrow the fungible token and send
it to the receiving address.
Another way of thinking of source and sink zones is through the token's
timeline. Each send to any chain other than the one it was previously
received from is a movement forwards in the token's timeline. This causes
trace to be added to the token's history and the destination port and
destination channel to be prefixed to the denomination. In these instances
the sender chain is acting as the source zone. When the token is sent back
to the chain it previously received from, the prefix is removed. This is
a backwards movement in the token's timeline and the sender chain
is acting as the sink zone.
### Example
Assume the following channel connections exist and that all channels use the port ID `transfer`:
- chain `A` has channels with chain `B` and chain `C` with the IDs `channelToB` and `channelToC`, respectively
- chain `B` has channels with chain `A` and chain `C` with the IDs `channelToA` and `channelToC`, respectively
- chain `C` has channels with chain `A` and chain `B` with the IDs `channelToA` and `channelToB`, respectively
These steps of transfer between chains occur in the following order: `A -> B -> C -> A -> C`. In particular:
1. `A -> B`: sender chain is source zone. `A` sends packet with `denom` (escrowed on `A`), `B` receives `denom` and mints and sends voucher `transfer/channelToA/denom` to recipient.
2. `B -> C`: sender chain is source zone. `B` sends packet with `transfer/channelToA/denom` (escrowed on `B`), `C` receives `transfer/channelToA/denom` and mints and sends voucher `transfer/channelToB/transfer/channelToA/denom` to recipient.
3. `C -> A`: sender chain is source zone. `C` sends packet with `transfer/channelToB/transfer/channelToA/denom` (escrowed on `C`), `A` receives `transfer/channelToB/transfer/channelToA/denom` and mints and sends voucher `transfer/channelToC/transfer/channelToB/transfer/channelToA/denom` to recipient.
4. `A -> C`: sender chain is sink zone. `A` sends packet with `transfer/channelToC/transfer/channelToB/transfer/channelToA/denom` (burned on `A`), `C` receives `transfer/channelToC/transfer/channelToB/transfer/channelToA/denom`, and unescrows and sends `transfer/channelToB/transfer/channelToA/denom` to recipient.
The token has a final denomination on chain `C` of `transfer/channelToB/transfer/channelToA/denom`, where `transfer/channelToB/transfer/channelToA` is the trace information.
In this context, upon a receive of a cross-chain fungible token transfer, if the sender chain is the source of the token, the protocol prefixes the denomination with the port and channel identifiers in the following format:
```typescript
prefix + denom = {destPortN}/{destChannelN}/.../{destPort0}/{destChannel0}/denom
```
Example: transferring `100 uatom` from port `HubPort` and channel `HubChannel` on the Hub to
Ethermint's port `EthermintPort` and channel `EthermintChannel` results in `100
EthermintPort/EthermintChannel/uatom`, where `EthermintPort/EthermintChannel/uatom` is the new
denomination on the receiving chain.
In the case those tokens are transferred back to the Hub (i.e. the **source** chain), the prefix is
trimmed and the token denomination is updated to the original one.
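The prefixing and trimming described above can be sketched as follows (the helper names are illustrative, not part of the ibc-go API):

```go
package main

import (
	"fmt"
	"strings"
)

// prefixDenom models the receiving chain's behaviour when the sender is the
// source zone: the destination port/channel pair is prepended to the denom.
func prefixDenom(destPort, destChannel, denom string) string {
	return fmt.Sprintf("%s/%s/%s", destPort, destChannel, denom)
}

// trimDenom models the receiving chain's behaviour when the sender is the
// sink zone: the single port/channel prefix is removed again.
func trimDenom(denom string) string {
	parts := strings.SplitN(denom, "/", 3)
	if len(parts) < 3 {
		return denom // no prefix to trim
	}
	return parts[2]
}

func main() {
	d := "denom"
	d = prefixDenom("transfer", "channelToA", d) // step 1: A -> B
	d = prefixDenom("transfer", "channelToB", d) // step 2: B -> C
	fmt.Println(d)            // transfer/channelToB/transfer/channelToA/denom
	fmt.Println(trimDenom(d)) // one hop back: transfer/channelToA/denom
}
```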
### Problem
The problem of adding additional information to the coin denomination is twofold:
1. The ever-increasing length when tokens are transferred to zones other than the source:
If a token is transferred `n` times via IBC to a sink chain, the token denom will contain `n` pairs
of prefixes, as shown on the format example above. This poses a problem because, while port and
channel identifiers have a maximum length of 64 each, the SDK `Coin` type only accepts denoms up to
64 characters. Thus, a single cross-chain token, which, again, is composed of the port and channel
identifiers plus the base denomination, can exceed the length validation for the SDK `Coins`.
This can result in undesired behaviours such as tokens not being able to be transferred to multiple
sink chains if the denomination exceeds the length or unexpected `panics` due to denomination
validation failing on the receiving chain.
2. The existence of special characters and uppercase letters on the denomination:
In the SDK every time a `Coin` is initialized through the constructor function `NewCoin`, a validation
of a coin's denom is performed according to a
[Regex](https://github.com/cosmos/cosmos-sdk/blob/a940214a4923a3bf9a9161cd14bd3072299cd0c9/types/coin.go#L583),
where only lowercase alphanumeric characters are accepted. While this is desirable for native denominations
to keep a clean UX, it presents a challenge for IBC as ports and channels might be randomly
generated with special and uppercase characters as per the [ICS 024 - Host
Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements#paths-identifiers-separators)
specification.
## Decision
The issues outlined above are applicable only to SDK-based chains, and thus the proposed solution
does not require specification changes that would result in modifications to other implementations
of the ICS20 spec.
Instead of adding the identifiers on the coin denomination directly, the proposed solution hashes
the denomination prefix in order to get a consistent length for all the cross-chain fungible tokens.
This will be used for internal storage only, and when transferred via IBC to a different chain, the
denomination specified on the packed data will be the full prefix path of the identifiers needed to
trace the token back to the originating chain, as specified on ICS20.
The new proposed format will be the following:
```go
ibcDenom = "ibc/" + hash(trace path + "/" + base denom)
```
The hash function will be a SHA256 hash of the fields of the `DenomTrace`:
```protobuf
// DenomTrace contains the base denomination for ICS20 fungible tokens and the source tracing
// information
message DenomTrace {
// chain of port/channel identifiers used for tracing the source of the fungible token
string path = 1;
// base denomination of the relayed fungible token
string base_denom = 2;
}
```
The `IBCDenom` function constructs the `Coin` denomination used when creating the ICS20 fungible token packet data:
```go
// Hash returns the hex bytes of the SHA256 hash of the DenomTrace fields using the following formula:
//
// hash = sha256(tracePath + "/" + baseDenom)
func (dt DenomTrace) Hash() tmbytes.HexBytes {
return tmhash.Sum(dt.Path + "/" + dt.BaseDenom)
}
// IBCDenom a coin denomination for an ICS20 fungible token in the format 'ibc/{hash(tracePath + baseDenom)}'.
// If the trace is empty, it will return the base denomination.
func (dt DenomTrace) IBCDenom() string {
if dt.Path != "" {
return fmt.Sprintf("ibc/%s", dt.Hash())
}
return dt.BaseDenom
}
```
### `x/ibc-transfer` Changes
In order to retrieve the trace information from an IBC denomination, a lookup table needs to be
added to the `ibc-transfer` module. These values also need to be persisted between upgrades, meaning
that a new `[]DenomTrace` field needs to be added to the module's `GenesisState`:
```go
// GetDenomTrace retrieves the full identifiers trace and base denomination from the store.
func (k Keeper) GetDenomTrace(ctx Context, denomTraceHash []byte) (DenomTrace, bool) {
	store := ctx.KVStore(k.storeKey)
	bz := store.Get(types.KeyDenomTrace(denomTraceHash))
	if bz == nil {
		return DenomTrace{}, false
	}

	var denomTrace DenomTrace
	k.cdc.MustUnmarshalBinaryBare(bz, &denomTrace)
	return denomTrace, true
}

// HasDenomTrace checks if the key with the given trace hash exists on the store.
func (k Keeper) HasDenomTrace(ctx Context, denomTraceHash []byte) bool {
	store := ctx.KVStore(k.storeKey)
	return store.Has(types.KeyDenomTrace(denomTraceHash))
}

// SetDenomTrace sets a new {trace hash -> trace} pair to the store.
func (k Keeper) SetDenomTrace(ctx Context, denomTrace DenomTrace) {
	store := ctx.KVStore(k.storeKey)
	bz := k.cdc.MustMarshalBinaryBare(&denomTrace)
	store.Set(types.KeyDenomTrace(denomTrace.Hash()), bz)
}
```
The `MsgTransfer` will validate that the `Coin` denomination from the `Token` field contains a valid
hash, if the trace info is provided, or that the base denomination is valid:
```go
func (msg MsgTransfer) ValidateBasic() error {
// ...
return ValidateIBCDenom(msg.Token.Denom)
}
```
```go
// ValidateIBCDenom validates that the given denomination is either:
//
//   - A valid base denomination (e.g. 'uatom')
//   - A valid fungible token representation (i.e. 'ibc/{hash}') per ADR 001 https://github.com/cosmos/ibc-go/blob/main/docs/architecture/adr-001-coin-source-tracing.md
func ValidateIBCDenom(denom string) error {
	denomSplit := strings.SplitN(denom, "/", 2)

	switch {
	case strings.TrimSpace(denom) == "",
		len(denomSplit) == 1 && denomSplit[0] == "ibc",
		len(denomSplit) == 2 && (denomSplit[0] != "ibc" || strings.TrimSpace(denomSplit[1]) == ""):
		return sdkerrors.Wrapf(ErrInvalidDenomForTransfer, "denomination should be prefixed with the format 'ibc/{hash(trace + \"/\" + %s)}'", denom)

	case denomSplit[0] == denom && strings.TrimSpace(denom) != "":
		return sdk.ValidateDenom(denom)
	}

	if _, err := ParseHexHash(denomSplit[1]); err != nil {
		return sdkerrors.Wrapf(err, "invalid denom trace hash %s", denomSplit[1])
	}

	return nil
}
```
The denomination trace info only needs to be updated when a token is received:

- Receiver is **source** chain: The receiver created the token and must already have the trace lookup stored (if necessary; a native token, for example, wouldn't need a lookup).
- Receiver is **not source** chain: Store the received info. For example, during step 1, when chain `B` receives `transfer/channelToA/denom`.
```go
// SendTransfer
// ...
fullDenomPath := token.Denom
// deconstruct the token denomination into the denomination trace info
// to determine if the sender is the source chain
if strings.HasPrefix(token.Denom, "ibc/") {
fullDenomPath, err = k.DenomPathFromHash(ctx, token.Denom)
if err != nil {
return err
}
}
if types.SenderChainIsSource(sourcePort, sourceChannel, fullDenomPath) {
//...
```
```go
// DenomPathFromHash returns the full denomination path prefix from an ibc denom with a hash
// component.
func (k Keeper) DenomPathFromHash(ctx sdk.Context, denom string) (string, error) {
	hexHash := denom[4:] // strip the "ibc/" prefix
	hash, err := ParseHexHash(hexHash)
	if err != nil {
		return "", sdkerrors.Wrap(ErrInvalidDenomForTransfer, err.Error())
	}

	denomTrace, found := k.GetDenomTrace(ctx, hash)
	if !found {
		return "", sdkerrors.Wrap(ErrTraceNotFound, hexHash)
	}

	fullDenomPath := denomTrace.GetFullDenomPath()
	return fullDenomPath, nil
}
```
```go
// OnRecvPacket
// ...
// This is the prefix that would have been prefixed to the denomination
// on sender chain IF and only if the token originally came from the
// receiving chain.
//
// NOTE: We use SourcePort and SourceChannel here, because the counterparty
// chain would have prefixed with DestPort and DestChannel when originally
// receiving this coin as seen in the "sender chain is the source" condition.
if ReceiverChainIsSource(packet.GetSourcePort(), packet.GetSourceChannel(), data.Denom) {
// sender chain is not the source, unescrow tokens
// remove prefix added by sender chain
voucherPrefix := types.GetDenomPrefix(packet.GetSourcePort(), packet.GetSourceChannel())
unprefixedDenom := data.Denom[len(voucherPrefix):]
token := sdk.NewCoin(unprefixedDenom, sdk.NewIntFromUint64(data.Amount))
// unescrow tokens
escrowAddress := types.GetEscrowAddress(packet.GetDestPort(), packet.GetDestChannel())
return k.bankKeeper.SendCoins(ctx, escrowAddress, receiver, sdk.NewCoins(token))
}
// sender chain is the source, mint vouchers
// since SendPacket did not prefix the denomination, we must prefix denomination here
sourcePrefix := types.GetDenomPrefix(packet.GetDestPort(), packet.GetDestChannel())
// NOTE: sourcePrefix contains the trailing "/"
prefixedDenom := sourcePrefix + data.Denom
// construct the denomination trace from the full raw denomination
denomTrace := types.ParseDenomTrace(prefixedDenom)
// set the value to the lookup table if not stored already
traceHash := denomTrace.Hash()
if !k.HasDenomTrace(ctx, traceHash) {
	k.SetDenomTrace(ctx, denomTrace)
}
voucherDenom := denomTrace.IBCDenom()
voucher := sdk.NewCoin(voucherDenom, sdk.NewIntFromUint64(data.Amount))
// mint new tokens if the source of the transfer is the same chain
if err := k.bankKeeper.MintCoins(
ctx, types.ModuleName, sdk.NewCoins(voucher),
); err != nil {
return err
}
// send to receiver
return k.bankKeeper.SendCoinsFromModuleToAccount(
ctx, types.ModuleName, receiver, sdk.NewCoins(voucher),
)
```
```go
func NewDenomTraceFromRawDenom(denom string) DenomTrace {
	denomSplit := strings.Split(denom, "/")
	path := ""
	if len(denomSplit) > 1 {
		path = strings.Join(denomSplit[:len(denomSplit)-1], "/")
	}

	return DenomTrace{
		BaseDenom: denomSplit[len(denomSplit)-1],
		Path:      path,
	}
}
```
One final remark is that the `FungibleTokenPacketData` will remain the same, i.e with the prefixed full denomination, since the receiving chain may not be an SDK-based chain.
### Coin Changes
The coin denomination validation will need to be updated to reflect these changes. In particular, the denomination validation
function will now:
- Accept slash separators (`"/"`) and uppercase characters (due to the `HexBytes` format)
- Bump the maximum character length to 128, as the hex representation used by Tendermint's
`HexBytes` type contains 64 characters.
Additional validation logic, such as verifying the length of the hash, may be added to the bank module in the future if the [custom base denomination validation](https://github.com/cosmos/cosmos-sdk/pull/6755) is integrated into the SDK.
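A sketch of the relaxed validation rule (the regex below mirrors the pattern the SDK adopted; treat it as illustrative rather than canonical):

```go
package main

import (
	"fmt"
	"regexp"
)

// reDenom loosens the original lowercase-only rule: uppercase letters and
// '/' separators are allowed, and the 128-character cap leaves room for
// "ibc/" plus a 64-character hex hash.
var reDenom = regexp.MustCompile(`^[a-zA-Z][a-zA-Z0-9/:._-]{2,127}$`)

func validateDenom(denom string) error {
	if !reDenom.MatchString(denom) {
		return fmt.Errorf("invalid denom: %s", denom)
	}
	return nil
}

func main() {
	fmt.Println(validateDenom("uatom")) // <nil>
	fmt.Println(validateDenom("ibc/27394FB092D2ECCD56123C74F36E4C1F926001CEADA9CA97EA622B25F41E5EB2")) // <nil>
}
```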
## Consequences

### Positive
- Clearer separation of the source tracing behaviour of the token (transfer prefix) from the original
`Coin` denomination
- Consistent validation of `Coin` fields (i.e no special characters, fixed max length)
- Cleaner `Coin` and standard denominations for IBC
- No additional fields to SDK `Coin`
### Negative
- Store each set of tracing denomination identifiers on the `ibc-transfer` module store
- Clients will have to fetch the base denomination every time they receive a new relayed fungible token over IBC. This can be mitigated using a map/cache for already seen hashes on the client side. Another form of mitigation would be opening a websocket connection to subscribe to incoming events.
### Neutral
- Slight difference with the ICS20 spec
- Additional validation logic for IBC coins on the `ibc-transfer` module
- Additional genesis fields
- Slightly increases the gas usage on cross-chain transfers due to access to the store. This should
be inter-block cached if transfers are frequent.
## References
- [ICS 20 - Fungible token transfer](https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer)
- [Custom Coin Denomination validation](https://github.com/cosmos/cosmos-sdk/pull/6755)
# ADR 002: Go module versioning
## Changelog
- 05/01/2022: initial draft
## Status
Accepted
## Context
The IBC module was originally developed in the Cosmos SDK and released during the Stargate release series (v0.42).
It was subsequently migrated to its own repository, ibc-go.
The first official release on ibc-go was v1.0.0.
v1.0.0 was decided to be used instead of v0.1.0 primarily for the following reasons:
- Maintaining compatibility with the IBC specification v1 requires stronger support/guarantees.
- Using the major, minor, and patch numbers allows for easier communication of what breaking changes are included in a release.
- The IBC module is being used by numerous high value projects which require stability.
### Problems
#### Go module version must be incremented
When a Go module is released under v1.0.0, all following releases must follow Go semantic versioning.
Thus when the Go API is broken, the Go module major version **must** be incremented.
For example, changing the go package version from `v2` to `v3` bumps the import from `github.com/cosmos/ibc-go/v2` to `github.com/cosmos/ibc-go/v3`.
If the Go module version is not incremented then attempting to go get a module @v3.0.0 without the suffix results in:
`invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v3`
Version validation was added in Go 1.13. This means that in order to release a v3.0.0 git tag without a /v3 suffix on the module definition, the tag must explicitly **not** contain a go.mod file.
Not including a go.mod in our release is not a viable option.
#### Attempting to import multiple go module versions for ibc-go
Attempting to import two versions of ibc-go, such as `github.com/cosmos/ibc-go/v2` and `github.com/cosmos/ibc-go/v3`, will result in multiple issues.
The Cosmos SDK does global registration of error and governance proposal types.
The errors and proposals used in ibc-go would need to now register their naming based on the go module version.
The more concerning problem is that protobuf definitions will also reach a namespace collision.
ibc-go and the Cosmos SDK in general rely heavily on using extended functions for go structs generated from protobuf definitions.
This requires the go structs to be defined in the same package as the extended functions.
Thus, bumping the import versioning causes the protobuf definitions to be generated in two places (in v2 and v3).
When registering these types at compile time, the go compiler will panic.
The generated types need to be registered against the proto codec, but there exist two definitions for the same name.
The protobuf conflict policy can be overridden via the environment variable `GOLANG_PROTOBUF_REGISTRATION_CONFLICT`, but it is possible this could lead to various runtime errors or unexpected behaviour (see [here](https://github.com/protocolbuffers/protobuf-go/blob/master/reflect/protoregistry/registry.go#L46)).
More information [here](https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict) on namespace conflicts for protobuf versioning.
### Potential solutions
#### Changing the protobuf definition version
The protobuf definitions all have a type URL containing the protobuf version for this type.
Changing the protobuf version would solve the namespace collision which arises from importing multiple versions of ibc-go, but it leads to new issues.
In the Cosmos SDK, `Any`s are unpacked and decoded using the type URL.
Changing the type URL thus is creating a distinctly different type.
The same registration on the proto codec cannot be used to unpack the new type.
For example:
All Cosmos SDK messages are packed into `Any`s. If we incremented the protobuf version for our IBC messages, clients which submitted the v1 of our Cosmos SDK messages would now be rejected since the old type is not registered on the codec.
The clients must know to submit the v2 of these messages. This pushes the burden of versioning onto relayers and wallets.
A more serious problem is that the `ClientState` and `ConsensusState` are packed as `Any`s. Changing the protobuf versioning of these types would break compatibility with IBC specification v1.
#### Moving protobuf definitions to their own go module
The protobuf definitions could be moved to their own go module which uses 0.x versioning and will never go to 1.0.
This prevents the Go module version from being incremented with breaking changes.
It also requires all extended functions to live in the same Go module, disrupting the existing code structure.
The version that implements this change will still be incompatible with previous versions, but future versions could be imported together without namespace collisions.
For example, let's say this solution is implemented in v3. Then:

- `github.com/cosmos/ibc-go/v2` cannot be imported with any other ibc-go version
- `github.com/cosmos/ibc-go/v3` cannot be imported with any previous ibc-go versions
- `github.com/cosmos/ibc-go/v4` may be imported with ibc-go versions v3+
- `github.com/cosmos/ibc-go/v5` may be imported with ibc-go versions v3+
## Decision
Supporting importing multiple versions of ibc-go requires a non-trivial amount of complexity.
It is unclear when a user of the ibc-go code would need multiple versions of ibc-go.
Until there is an overwhelming reason to support importing multiple versions of ibc-go:
**Major releases cannot be imported simultaneously**.
Releases should focus on keeping backwards compatibility for go code clients, within reason.
Old functionality should be marked as deprecated and there should exist upgrade paths between major versions.
Deprecated functionality may be removed when no clients rely on that functionality.
How this is determined is to be decided.
**Error and proposal type registration will not be changed between go module version increments**.
This explicitly stops external clients from trying to import two major versions (potentially risking a bug due to the instability of proto name collisions override).
## Consequences
This only affects clients relying directly on the go code.
### Positive
### Negative
Multiple ibc-go versions cannot be imported.
### Neutral
# ADR 003: ICS27 Acknowledgement Format
## Changelog
- January 28th, 2022: Initial Draft
## Status
Accepted
## Context
Upon receiving an IBC packet, an IBC application can optionally return an acknowledgement.
This acknowledgement will be hashed and written into state. Thus any changes to the information included in an acknowledgement are state machine breaking.
ICS27 executes transactions on behalf of a controller chain. Information such as the message result or message error may be returned from other SDK modules outside the control of the ICS27 module.
It might be very valuable to return message execution information inside the ICS27 acknowledgement so that controller chain interchain account auth modules can act upon this information.
Only deterministic information returned from the message execution is allowed to be returned in the packet acknowledgement otherwise the network will halt due to a fork in the expected app hash.
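Concretely, because only a hash of the acknowledgement bytes is committed to state, any change to those bytes changes the committed value and, with it, the app hash. A minimal sketch:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// ackCommitment mirrors the rule that only a hash of the acknowledgement
// bytes is written to state.
func ackCommitment(ack []byte) [32]byte {
	return sha256.Sum256(ack)
}

func main() {
	a := ackCommitment([]byte(`{"result":"AQ=="}`))
	b := ackCommitment([]byte(`{"result":"AQ==","gasUsed":"100"}`))
	// Adding a field (e.g. non-deterministic gas) changes the commitment;
	// nodes that disagree on the acknowledgement bytes fork on the app hash.
	fmt.Println(a == b) // false
}
```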
## Decision
At the time of this writing, Tendermint includes the following information in the [ABCI.ResponseDeliverTx](https://github.com/tendermint/tendermint/blob/release/v0.34.13/types/results.go#L47-#L53):
```go
// deterministicResponseDeliverTx strips non-deterministic fields from
// ResponseDeliverTx and returns another ResponseDeliverTx.
func deterministicResponseDeliverTx(response *abci.ResponseDeliverTx) *abci.ResponseDeliverTx {
return &abci.ResponseDeliverTx{
Code: response.Code,
Data: response.Data,
GasWanted: response.GasWanted,
GasUsed: response.GasUsed,
}
}
```
### Successful acknowledgements
Successful acknowledgements should return information about the transaction execution.
Given the deterministic fields in the `abci.ResponseDeliverTx`, the transaction `Data` can be used to indicate information about the transaction execution.
The `abci.ResponseDeliverTx.Data` will be set in the ICS27 packet acknowledgement upon successful transaction execution.
The format for the `abci.ResponseDeliverTx.Data` is constructed by the SDK.
At the time of this writing, the next major release of the SDK will change the format for constructing the transaction response data.
#### v0.45 format
The current version, v0.45, constructs the transaction response as follows:
```go
proto.Marshal(&sdk.TxMsgData{
	Data: msgResponses,
})
```
Where `msgResponses` is a slice of `*sdk.MsgData`.
The `MsgData.MsgType` contains the `sdk.MsgTypeURL` of the `sdk.Msg` being executed.
The `MsgData.Data` contains the proto marshaled `MsgResponse` for the associated message executed.
#### Next major version format
The next major version will construct the transaction response as follows:
```go
proto.Marshal(&sdk.TxMsgData{
	MsgResponses: msgResponses,
})
```
Where `msgResponses` is a slice of the `MsgResponse`s packed into `Any`s.
#### Forwards compatible approach
A forwards compatible approach was deemed infeasible.
The `handler` provided by the `MsgServiceRouter` will only include the `*sdk.Result` and an error (if one occurred).
In v0.45 of the SDK, the `*sdk.Result.Data` will contain the MsgResponse marshaled data.
However, the MsgResponse is not packed and marshaled as a `*codectypes.Any`, thus making it impossible from a generalized point of view to unmarshal the bytes.
If the bytes could be unmarshaled, then they could be packed into an `*codectypes.Any` in anticipation of the upcoming format.
Intercepting the MsgResponse before it becomes marshaled requires replicating this [code](https://github.com/cosmos/cosmos-sdk/blob/dfd47f5b449f558a855da284a9a7eabbfbad435d/baseapp/msg_service_router.go#L109-#L128).
It may not even be possible to replicate the linked code. The method handler would need to be accessed somehow.
For these reasons it is deemed infeasible to attempt a forwards compatible approach.
ICA auth developers can interpret which format was used when constructing the transaction response by checking if the `sdk.TxMsgData.Data` field is non-empty.
If the `sdk.TxMsgData.Data` field is not empty then the format for v0.45 was used, otherwise ICA auth developers can assume the transaction response uses the newer format.
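That branching can be sketched with a pared-down stand-in for `sdk.TxMsgData` (the struct below is a simplification; the real type holds `*sdk.MsgData` and `*codectypes.Any` slices):

```go
package main

import "fmt"

// txMsgData is a pared-down stand-in for sdk.TxMsgData: the v0.45 format
// populates Data, while the next major format populates MsgResponses.
type txMsgData struct {
	Data         [][]byte // v0.45: proto-marshaled sdk.MsgData entries
	MsgResponses [][]byte // next major: MsgResponses packed as Anys
}

// responseFormat infers which SDK format produced the transaction
// response, following the rule described above.
func responseFormat(d txMsgData) string {
	if len(d.Data) != 0 {
		return "v0.45"
	}
	return "next-major"
}

func main() {
	fmt.Println(responseFormat(txMsgData{Data: [][]byte{{0x0a}}}))         // v0.45
	fmt.Println(responseFormat(txMsgData{MsgResponses: [][]byte{{0x0a}}})) // next-major
}
```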
#### Decision
Replicate the transaction response format as provided by the current SDK version.
When the SDK version changes, adjust the transaction response format to use the updated transaction response format.
Include the transaction response bytes in the result channel acknowledgement.
A test has been [written](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/apps/27-interchain-accounts/host/ibc_module_test.go#L716-#L774) to fail if the `MsgResponse` is no longer included in consensus.
### Error acknowledgements
As indicated above, the `abci.ResponseDeliverTx.Code` is deterministic.
Upon transaction execution errors, an error acknowledgement should be returned including the abci code.
A test has been [written](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/apps/27-interchain-accounts/host/types/ack_test.go#L41-#L82) to fail if the ABCI code is no longer deterministic.
## Consequences
### Positive
- interchain account auth modules can act upon transaction results without requiring a query module
- transaction results align with those returned by execution of a normal SDK message.
### Negative
- the security assumptions of this decision rest on the inclusion of the ABCI error code and the Msg response in the ResponseDeliverTx hash created by Tendermint
- events are non-deterministic and cannot be included in the packet acknowledgement
### Neutral
No neutral consequences.
# ADR 004: Lock fee module upon escrow out of balance
## Changelog
- 03/03/2022: initial draft
## Status
Accepted
## Context
The fee module maintains an escrow account for all fees escrowed to incentivize packet relays.
It also tracks each packet fee escrowed separately from the escrow account. This is because the escrow account only maintains a total balance. It has no reference for which coins belonged to which packet fee.
In the presence of a severe bug, it is possible the escrow balance will become out of sync with the packet fees marked as escrowed.
The ICS29 module should be capable of elegantly handling such a scenario.
## Decision
We will allow for the ICS29 module to become "locked" if the escrow balance is determined to be out of sync with the packet fees marked as escrowed.
A "locked" fee module will not allow for packet escrows to occur nor will it distribute fees. All IBC callbacks will skip performing fee logic, similar to fee disabled channels.
Manual intervention will be needed to unlock the fee module.
### Sending side
Special behaviour will have to be accounted for in `OnAcknowledgementPacket`. Since the counterparty will continue to send incentivized acknowledgements for fee enabled channels, the acknowledgement will still need to be unmarshalled into an incentivized acknowledgement before calling the underlying application's `OnAcknowledgementPacket` callback.
When distributing fees, a cached context should be used. If the escrow account balance would become negative, the current state changes should be discarded and the fee module should be locked using the uncached context. This prevents fees from being partially distributed for a given packetID.
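The discard-or-commit behaviour can be illustrated with a toy key-value store (the SDK's `ctx.CacheContext()` works analogously; everything below is a simplification, not fee module code):

```go
package main

import "fmt"

// store is a toy escrow balance store; cached layers buffer writes
// until commit is called.
type store struct {
	balance int
}

// cache returns a child store whose writes are invisible to the parent
// until the returned commit function is called.
func (s *store) cache() (*store, func()) {
	child := &store{balance: s.balance}
	commit := func() { s.balance = child.balance }
	return child, commit
}

// distributeFees attempts a payout on a cached store. If the escrow
// balance would go negative, the cached writes are simply dropped and
// the caller should lock the fee module instead.
func distributeFees(s *store, fee int) (locked bool) {
	cached, commit := s.cache()
	cached.balance -= fee
	if cached.balance < 0 {
		return true // discard cached state; fee module must be locked
	}
	commit()
	return false
}

func main() {
	escrow := &store{balance: 10}
	fmt.Println(distributeFees(escrow, 4), escrow.balance)   // false 6
	fmt.Println(distributeFees(escrow, 100), escrow.balance) // true 6
}
```

The key property is that a failed distribution leaves the escrow balance untouched, so fees are never partially distributed for a given packetID.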
### Receiving side
`OnRecvPacket` should remain unaffected by the fee module becoming locked since escrow accounts only affect the sending side.
## Consequences
### Positive
The fee module can be elegantly disabled in the presence of severe bugs.
### Negative
Extra logic is added to account for edge cases which are only possible in the presence of bugs.
### Neutral
## References
Issues:
- [#821](https://github.com/cosmos/ibc-go/issues/821)
- [#860](https://github.com/cosmos/ibc-go/issues/860)
PRs:
- [#1031](https://github.com/cosmos/ibc-go/pull/1031)
- [#1029](https://github.com/cosmos/ibc-go/pull/1029)
- [#1056](https://github.com/cosmos/ibc-go/pull/1056)
# ADR 005: UpdateClient Events - ClientState Consensus Heights
## Changelog
- 25/04/2022: initial draft
## Status
Accepted
## Context
The `ibc-go` implementation leverages the [Cosmos-SDK's EventManager](https://github.com/cosmos/cosmos-sdk/blob/v0.45.4/docs/core/events.md#EventManager) to provide subscribers a method of reacting to application specific events.
Some IBC relayers depend on the [`consensus_height`](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/core/02-client/keeper/events.go#L33) attribute emitted as part of `UpdateClient` events in order to run `07-tendermint` misbehaviour detection by cross-checking the details of the *Header* emitted at a given consensus height against those of the *Header* from the originating chain. This includes such details as:
- The `SignedHeader` containing the commitment root.
- The `ValidatorSet` that signed the *Header*.
- The `TrustedHeight`, a height previously verified by the client that is less than or equal to the height of the *Header*.
- The last `TrustedValidatorSet` at the trusted height.
Following the refactor of the `02-client` submodule and associated `ClientState` interfaces, it will now be possible for
light client implementations to perform such actions as batch updates, inserting `N` number of `ConsensusState`s into the application state tree with a single `UpdateClient` message. This flexibility is provided in `ibc-go` by the usage of the [Protobuf `Any`](https://developers.google.com/protocol-buffers/docs/proto3#any) field contained within the [`UpdateClient`](https://github.com/cosmos/ibc-go/blob/v3.0.0/proto/ibc/core/client/v1/tx.proto#L44) message.
For example, a batched client update message serialized as a Protobuf `Any` type for the `07-tendermint` light client implementation could be defined as follows:
```protobuf
message BatchedHeaders {
repeated Header headers = 1;
}
```
To complement this flexibility, the `UpdateClient` handler will now support the submission of [client misbehaviour](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#misbehaviour) by consolidating the `Header` and `Misbehaviour` interfaces into a single `ClientMessage` interface type:
```go
// ClientMessage is an interface used to update an IBC client.
// The update may be done by a single header, a batch of headers, misbehaviour, or any type which when verified produces
// a change to state of the IBC client
type ClientMessage interface {
proto.Message
ClientType() string
ValidateBasic() error
}
```
To support this functionality the `GetHeight()` method has been omitted from the new `ClientMessage` interface.
Emission of standardised events from the `02-client` submodule now becomes problematic in two ways:
1. The `02-client` submodule previously depended upon the `GetHeight()` method of `Header` types in order to [retrieve the updated consensus height](https://github.com/cosmos/ibc-go/blob/v3.0.0/modules/core/02-client/keeper/client.go#L90).
2. Emitting a single `consensus_height` event attribute is not sufficient in the case of a batched client update containing multiple *Headers*.
## Decision
The following decisions have been made in order to provide flexibility to consumers of `UpdateClient` events in a non-breaking fashion:
1. Return a list of updated consensus heights `[]exported.Height` from the new `UpdateState` method of the `ClientState` interface.
```go
// UpdateState updates and stores as necessary any associated information for an IBC client, such as the ClientState and corresponding ConsensusState.
// Upon successful update, a list of consensus heights is returned. It assumes the ClientMessage has already been verified.
UpdateState(sdk.Context, codec.BinaryCodec, sdk.KVStore, ClientMessage) []Height
```
2. Maintain the `consensus_height` event attribute emitted from the `02-client` update handler, but mark it as deprecated for future removal. For example, with Tendermint light clients this will simply be `consensusHeights[0]` following a successful update using a single *Header*.
3. Add an additional `consensus_heights` event attribute, containing a comma separated list of updated heights. This provides flexibility for emitting a single consensus height or multiple consensus heights in the example use-case of batched header updates.
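A minimal sketch of how the comma-separated attribute value from decision (3) could be rendered, using the `{revision}-{height}` string form of ibc-go heights (the `Height` type below is a simplified stand-in, not the actual `clienttypes.Height`):

```go
package main

import (
	"fmt"
	"strings"
)

// Height mirrors the revision number/height pair used by ibc-go's
// abstract height type.
type Height struct {
	RevisionNumber uint64
	RevisionHeight uint64
}

// String renders the height in the "{revision}-{height}" form.
func (h Height) String() string {
	return fmt.Sprintf("%d-%d", h.RevisionNumber, h.RevisionHeight)
}

// consensusHeightsAttribute renders the comma-separated attribute value:
// a single update emits one height, a batched update emits all of them.
func consensusHeightsAttribute(heights []Height) string {
	strs := make([]string, len(heights))
	for i, h := range heights {
		strs[i] = h.String()
	}
	return strings.Join(strs, ",")
}

func main() {
	heights := []Height{{1, 100}, {1, 105}}
	fmt.Println(consensusHeightsAttribute(heights)) // prints "1-100,1-105"
}
```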
## Consequences
### Positive
- Subscribers of IBC core events can act upon `UpdateClient` events containing one or more consensus heights.
- Deprecation of the existing `consensus_height` attribute allows consumers to continue to process `UpdateClient` events as normal, with a path to upgrade to using the `consensus_heights` attribute moving forward.
### Negative
- Consumers of IBC core `UpdateClient` events are forced to make future code changes.
### Neutral
## References
Discussions:
- [#1208](https://github.com/cosmos/ibc-go/pull/1208#discussion_r839691927)
Issues:
- [#594](https://github.com/cosmos/ibc-go/issues/594)
PRs:
- [#1285](https://github.com/cosmos/ibc-go/pull/1285)
# ADR 006: ICS-02 client refactor
## Changelog
- 2022-08-01: Initial Draft
## Status
Accepted and applied in v7 of ibc-go
## Context
During the initial development of the 02-client submodule, each light client supported (06-solomachine, 07-tendermint, 09-localhost) was referenced through hardcoding.
Here is an example of the [code](https://github.com/cosmos/cosmos-sdk/commit/b93300288e3a04faef9c0774b75c13b24450ba1c#diff-c5f6b956947375f28d611f18d0e670cf28f8f305300a89c5a9b239b0eeec5064R83) that existed in the 02-client submodule:
```go
func (k Keeper) UpdateClient(ctx sdk.Context, clientID string, header exported.Header) (exported.ClientState, error) {
...
switch clientType {
case exported.Tendermint:
clientState, consensusState, err = tendermint.CheckValidityAndUpdateState(
clientState, header, ctx.BlockTime(),
)
case exported.Localhost:
// override client state and update the block height
clientState = localhosttypes.NewClientState(
ctx.ChainID(), // use the chain ID from context since the client is from the running chain (i.e self).
ctx.BlockHeight(),
)
default:
err = types.ErrInvalidClientType
}
```
To add additional light clients, code would need to be added directly to the 02-client submodule.
Evidently, this would likely become problematic as IBC scaled to many chains using consensus mechanisms beyond the initial supported light clients.
Issue [#6064](https://github.com/cosmos/cosmos-sdk/issues/6064) on the SDK addressed this problem by creating a more modular 02-client submodule.
The 02-client submodule would now interact with each light client via an interface.
While this change was positive, increasing the flexibility and adoptability of IBC, it also opened the door to new problems.
The difficulty of generalizing light clients became apparent once changes to those light clients were required.
Each light client represents a different consensus algorithm which may contain a host of complexity and nuances.
Here are some examples of issues which arose for light clients that are not applicable to all the light clients supported (06-solomachine, 07-tendermint, 09-localhost):
### Tendermint non-zero height upgrades
Before the launch of IBC, it was determined that the golang implementation of [tendermint](https://github.com/tendermint/tendermint) would not be capable of supporting non-zero height upgrades.
This implies that any upgrade would require changing the chain ID and resetting the height to 0.
A chain is uniquely identified by its chain ID and validator set.
Two different chain IDs can be viewed as two different chains, and thus a normal update produced by a validator set cannot change the chain ID.
To work around the lack of support for non-zero height upgrades, an abstract height type was created along with an upgrade mechanism.
This type would indicate the revision number (the number of times the chain ID has been changed) and revision height (the current height of the blockchain).
Refs:
- Issue [#439](https://github.com/cosmos/ibc/issues/439) on IBC specification repository.
- Specification changes in [#447](https://github.com/cosmos/ibc/pull/447)
- Implementation changes for the abstract height type, [SDK#7211](https://github.com/cosmos/cosmos-sdk/pull/7211)
### Tendermint requires misbehaviour detection during updates
The initial release of the IBC module and the 07-tendermint light client implementation did not support misbehaviour detection during updates, nor did it prevent overwriting of previous updates.
Despite the design of the `ClientState` interface and the development of the 07-tendermint client, even a duplicate update, which constitutes misbehaviour and should freeze the client, went undetected.
This was fixed in PR [#141](https://github.com/cosmos/ibc-go/pull/141) which required light client implementations to be aware that they must handle duplicate updates and misbehaviour detection.
Misbehaviour detection during updates is applicable to neither the solo machine nor the localhost.
It is also not obvious that `CheckHeaderAndUpdateState` should be performing this functionality.
### Localhost requires access to the entire client store
The localhost has been broken since the initial version of the IBC module.
The localhost was developed to fit underneath the 02-client interfaces without special exceptions, but this proved to be impossible.
The issues were outlined in [#27](https://github.com/cosmos/ibc-go/issues/27) and further discussed in the attempted ADR in [#75](https://github.com/cosmos/ibc-go/pull/75).
Unlike all other clients, the localhost requires access to the entire IBC store and not just the prefixed client store.
### Solomachine doesn't set consensus states
The 06-solomachine does not set the consensus states within the prefixed client store.
It has a single consensus state that is stored within the client state.
Setting the consensus state at the 02-client level therefore consumes unnecessary storage.
It also causes timeouts to fail with solo machines.
Previously, the timeout logic within IBC would obtain the consensus state at the height a timeout is being proved.
This is problematic for the solo machine as no consensus state is set.
See issue [#562](https://github.com/cosmos/ibc/issues/562) on the IBC specification repo.
### New clients may want to do batch updates
New light clients may not function in a similar fashion to 06-solomachine and 07-tendermint.
They may require setting many consensus states in a single update.
As @seunlanlege [states](https://github.com/cosmos/ibc-go/issues/284#issuecomment-1005583679):
> I'm in support of these changes for 2 reasons:
>
> - This would allow light clients to handle batch header updates in CheckHeaderAndUpdateState, for the special case of 11-beefy proving the finality for a batch of headers is much more space and time efficient than the space/time complexity of proving each individual headers in that batch, combined.
>
> - This also allows for a single light client instance of 11-beefy be used to prove finality for every parachain connected to the relay chain (Polkadot/Kusama). We achieve this by setting the appropriate ConsensusState for individual parachain headers in CheckHeaderAndUpdateState
## Decision
### Require light clients to set client and consensus states
The IBC specification states:
> If the provided header was valid, the client MUST also mutate internal state to store now-finalised consensus roots and update any necessary signature authority tracking (e.g. changes to the validator set) for future calls to the validity predicate.
The initial version of the IBC go SDK based module did not fulfill this requirement.
Instead, the 02-client submodule required each light client to return the client and consensus state which should be updated in the client prefixed store.
This decision lead to the issues "Solomachine doesn't set consensus states" and "New clients may want to do batch updates".
Each light client should be required to set its own client and consensus states on any update necessary.
The go implementation should be changed to match the specification requirements.
This will allow more flexibility for light clients to manage their own internal storage and do batch updates.
### Merge `Header`/`Misbehaviour` interface and rename to `ClientMessage`
Remove `GetHeight()` from the header interface (as light clients now set the client/consensus states).
This results in the `Header`/`Misbehaviour` interfaces being the same.
To reduce complexity of the codebase, the `Header`/`Misbehaviour` interfaces should be merged into `ClientMessage`.
`ClientMessage` will provide the client with some authenticated information which may result in regular updates, misbehaviour detection, batch updates, or other custom functionality a light client requires.
### Split `CheckHeaderAndUpdateState` into 4 functions
See [#668](https://github.com/cosmos/ibc-go/issues/668).
Split `CheckHeaderAndUpdateState` into 4 functions:
- `VerifyClientMessage`
- `CheckForMisbehaviour`
- `UpdateStateOnMisbehaviour`
- `UpdateState`
`VerifyClientMessage` checks that the structure of a `ClientMessage` is correct and that all authentication data provided is valid.
`CheckForMisbehaviour` checks to see if a `ClientMessage` is evidence of misbehaviour.
`UpdateStateOnMisbehaviour` freezes the client and updates its state accordingly.
`UpdateState` performs a regular update or a no-op on duplicate updates.
The code roughly looks like:
```go
func (k Keeper) UpdateClient(ctx sdk.Context, clientID string, clientMessage exported.ClientMessage) error {
	...
	if err := clientState.VerifyClientMessage(clientMessage); err != nil {
		return err
	}

	foundMisbehaviour := clientState.CheckForMisbehaviour(clientMessage)
	if foundMisbehaviour {
		clientState.UpdateStateOnMisbehaviour(clientMessage)
		// emit misbehaviour event
		return nil
	}

	clientState.UpdateState(clientMessage) // expects no-op on duplicate header
	// emit update event
	return nil
}
```
### Add `GetTimestampAtHeight` to the client state interface
By adding `GetTimestampAtHeight` to the ClientState interface, we allow light clients which do non-traditional consensus state/timestamp storage to process timeouts correctly.
This fixes the issues outlined for the solo machine client.
### Add generic verification functions
As the complexity and the functionality grows, new verification functions will be required for additional paths.
This was explained in [#684](https://github.com/cosmos/ibc/issues/684) on the specification repo.
These generic verification functions would be immediately useful for the new paths added in connection/channel upgradability as well as for custom paths defined by IBC applications such as Interchain Queries.
The old verification functions (`VerifyClientState`, `VerifyConnection`, etc) should be removed in favor of the generic verification functions.
## Consequences
### Positive
- Flexibility for light client implementations
- Well defined interfaces and their required functionality
- Generic verification functions
- Applies changes necessary for future client/connection/channel upgradability features
- Timeout processing for solo machines
- Reduced code complexity
### Negative
- The refactor touches on sensitive areas of the ibc-go codebase
- Changing of established naming (`Header`/`Misbehaviour` to `ClientMessage`)
### Neutral
No notable consequences
## References
Issues:
- [#284](https://github.com/cosmos/ibc-go/issues/284)
PRs:
- [#1871](https://github.com/cosmos/ibc-go/pull/1871)
# ADR 007: Solo machine sign bytes
## Changelog
- 2022-08-02: Initial draft
## Status
Accepted, applied in v7
## Context
The `06-solomachine` implementation up until ibc-go v7 constructed sign bytes using a `DataType` which described what type of data was being signed.
This design decision arose from a misunderstanding of the security implications.
It was noted that the proto definitions do not [provide uniqueness](https://github.com/cosmos/cosmos-sdk/pull/7237#discussion_r484264573) which is a necessity for ensuring two signatures over different data types can never be the same.
What was missed is that the uniqueness is not provided by the proto definition, but by the usage of the proto definition.
The path provided by core IBC will be unique and is already encoded into the signature data.
Thus two different paths with the same data values will encode differently which provides signature uniqueness.
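The uniqueness argument can be made concrete: naive concatenation of path and data admits collisions across different paths, whereas a length-prefixed encoding (as protobuf uses for `bytes` fields) cannot. A small illustration, not the actual solo machine encoding:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// naive concatenates path and data with no framing; two different
// (path, data) pairs can produce identical bytes.
func naive(path, data []byte) []byte {
	return append(append([]byte{}, path...), data...)
}

// lengthPrefixed frames each field with its uvarint length, so the
// boundary between path and data is unambiguous and collisions across
// different paths are impossible.
func lengthPrefixed(path, data []byte) []byte {
	out := binary.AppendUvarint(nil, uint64(len(path)))
	out = append(out, path...)
	out = binary.AppendUvarint(out, uint64(len(data)))
	return append(out, data...)
}

func main() {
	// Two different (path, data) pairs whose naive concatenations collide.
	a := naive([]byte("clients/A"), []byte("bc"))
	b := naive([]byte("clients/Ab"), []byte("c"))
	fmt.Println(string(a) == string(b)) // prints "true"

	c := lengthPrefixed([]byte("clients/A"), []byte("bc"))
	d := lengthPrefixed([]byte("clients/Ab"), []byte("c"))
	fmt.Println(string(c) == string(d)) // prints "false"
}
```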
Furthermore, the current construction does not support the proposed changes in the spec repo to support [Generic Verification functions](https://github.com/cosmos/ibc/issues/684).
This is because in order to verify a new path, a new `DataType` must be added for that path.
## Decision
Remove the `DataType` enum and replace the `DataType` field in `SignBytes` and `SignatureAndData` with a `Path` field.
The new `Path` field should be bytes.
Remove all `...Data` proto definitions except for `HeaderData`.
These `...Data` definitions were created previously for each `DataType`.
The proto version of the solo machine proto definitions should be bumped to `v3`.
This removes an extra layer of complexity from signature construction and allows for support of generic verification.
## Consequences
### Positive
- Simplification of solo machine signature construction
- Support for generic verification
### Negative
- Breaks existing signature construction in a non-backwards compatible way
- Solo machines must update to handle the new format
- Migration required for solo machine client and consensus states
### Neutral
No notable consequences
## References
- [#1141](https://github.com/cosmos/ibc-go/issues/1141)
# ADR 008: Callback to IBC Actors
## Changelog
- 2022-08-10: Initial Draft
- 2023-03-22: Merged
- 2023-09-13: Updated with decisions made in implementation
- 2025-02-24: RecvPacket callback error now returns error acknowledgement
## Status
Accepted, middleware implemented
## Context
IBC was designed with callbacks between core IBC and IBC applications. IBC apps would send a packet to core IBC. When the result of the packet lifecycle eventually resolved into either an acknowledgement or a timeout, core IBC called a callback on the IBC application so that the IBC application could take action on the basis of the result (e.g. unescrow tokens for ICS-20).
This setup worked well for off-chain users interacting with IBC applications.
We are now seeing the desire for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then do some actions on the basis of the packet result. Or to receive a packet from IBC and do some logic upon receipt.
Example Usecases:
- Send an ICS-20 packet, and if it is successful, then send an ICA-packet to swap tokens on LP and return funds to sender
- Execute some logic upon receipt of token transfer to a smart contract address
This requires a second layer of callbacks. The IBC application already gets the result of the packet from core IBC, but currently there is no standardized way to pass this information on to an actor module/smart contract.
## Definitions
- Actor: an actor is an on-chain module (this may be a hardcoded module in the chain binary or a smart contract) that wishes to execute custom logic whenever IBC receives a packet flow that it has either sent or received. It **must** be addressable by a string value.
## Decision
Create a middleware that can interface between IBC applications and smart contract VMs. The IBC applications and smart contract VMs will implement respective interfaces that will then be composed together by the callback middleware to allow a smart contract of any compatible VM to interact programmatically with an IBC application.
## Data structures
The `CallbackPacketData` struct will be constructed from custom callback data in the application packet. The `CallbackAddress` is the IBC actor address on which the callback should be called. The `SenderAddress` is also provided to optionally allow a VM to ensure that the sender is the same as the callback address.
The struct also defines a `CommitGasLimit` which is the maximum gas a callback is allowed to use. If the callback exceeds this limit, the callback will panic and the tx will commit without the callback's state changes.
The `ExecutionGasLimit` is the practical limit of the tx execution that is set in the context gas meter. It is the minimum of the `CommitGasLimit` and the gas left in the context gas meter which is determined by the relayer's choice of tx gas limit. If `ExecutionGasLimit < CommitGasLimit`, then an out-of-gas error will revert the entire transaction without committing anything, allowing for a different relayer to retry with a larger tx gas limit.
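The relationship between the two limits reduces to a minimum; a sketch (the function name is hypothetical):

```go
package main

import "fmt"

// executionGasLimit is the practical limit set in the callback's gas meter:
// the minimum of the user's CommitGasLimit and the gas remaining in the
// transaction gas meter (determined by the relayer's tx gas limit).
func executionGasLimit(commitGasLimit, gasRemaining uint64) uint64 {
	if gasRemaining < commitGasLimit {
		return gasRemaining
	}
	return commitGasLimit
}

func main() {
	// Relayer supplied less gas than the callback's commit limit; an
	// out-of-gas here reverts the whole tx so another relayer can retry.
	fmt.Println(executionGasLimit(1_000_000, 400_000)) // prints "400000"
	// Relayer supplied ample gas; the commit limit binds.
	fmt.Println(executionGasLimit(1_000_000, 5_000_000)) // prints "1000000"
}
```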
Any middleware targeting this interface for callback handling should define a global limit that caps the gas that a callback is allowed to take (especially on AcknowledgePacket and TimeoutPacket) so that a custom callback does not prevent the packet lifecycle from completing. However, since this is a global cap it is likely to be very large. Thus, users may specify a smaller limit to cap the amount of fees a relayer must pay in order to complete the packet lifecycle on the user's behalf.
```go
// CallbackPacketData holds the callback information extracted from an
// application packet. Packets which do not provide this data will not
// have any actor callbacks invoked.
type CallbackPacketData struct {
	CallbackAddress   string
	ExecutionGasLimit uint64
	SenderAddress     string
	CommitGasLimit    uint64
}
```
IBC applications or middleware can then invoke the IBC actor callbacks within their own callbacks.
### Callback Middleware
The CallbackMiddleware wraps an underlying IBC application along with a contractKeeper that delegates the callback to a virtual machine. This allows the Callback middleware to interface any compatible IBC application with any compatible VM (e.g. EVM, WASM) so long as the application implements the `CallbacksCompatibleModule` interface and the VM implements the `ContractKeeper` interface.
```go
// IBCMiddleware implements the ICS26 callbacks for the ibc-callbacks middleware given
// the underlying application.
type IBCMiddleware struct {
app types.CallbacksCompatibleModule
ics4Wrapper porttypes.ICS4Wrapper
contractKeeper types.ContractKeeper
// maxCallbackGas defines the maximum amount of gas that a callback actor can ask the
// relayer to pay for. If a callback fails due to insufficient gas, the entire tx
// is reverted if the relayer hadn't provided the minimum(userDefinedGas, maxCallbackGas).
// If the actor hasn't defined a gas limit, then it is assumed to be the maxCallbackGas.
maxCallbackGas uint64
}
```
### Callback-Compatible IBC Application
The `CallbacksCompatibleModule` extends `porttypes.IBCModule` to include an `UnmarshalPacketData` function that allows the middleware to request that the underlying app unmarshal the packet data. This will then allow the middleware to retrieve the callback specific data from an arbitrary set of IBC application packets.
```go
// CallbacksCompatibleModule is an interface that combines the IBCModule and PacketDataUnmarshaler
// interfaces to assert that the underlying application supports both.
type CallbacksCompatibleModule interface {
porttypes.IBCModule
porttypes.PacketDataUnmarshaler
}
// PacketDataUnmarshaler defines an optional interface which allows a middleware to
// request the packet data to be unmarshaled by the base application.
type PacketDataUnmarshaler interface {
// UnmarshalPacketData unmarshals the packet data into a concrete type
// ctx, portID, channelID are provided as arguments, so that (if needed)
// the packet data can be unmarshaled based on the channel version.
// the version of the underlying app is also returned.
UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error)
}
```
The application's packet data must additionally implement the following interfaces:
```go
// PacketData defines an optional interface which an application's packet data structure may implement.
type PacketData interface {
// GetPacketSender returns the sender address of the packet data.
// If the packet sender is unknown or undefined, an empty string should be returned.
GetPacketSender(sourcePortID string) string
}
// PacketDataProvider defines an optional interface for retrieving custom packet data stored on behalf of another application.
// An existing problem in the IBC middleware design is the inability of a middleware to define its own packet data type and insert packet sender provided information.
// A short term solution was introduced into several applications' packet data to utilize a memo field to carry this information on behalf of another application.
// This interface standardizes that behaviour. Once middlewares are able to define their own packet data types, this interface will be deprecated and removed in time.
type PacketDataProvider interface {
// GetCustomPacketData returns the packet data held on behalf of another application.
// The name the information is stored under should be provided as the key.
// If no custom packet data exists for the key, nil should be returned.
GetCustomPacketData(key string) interface{}
}
```
The callback data can be embedded in an application packet by providing the source and destination callback entries in the custom packet data under the appropriate keys.
```jsonc
// Custom Packet data embedded as a JSON object in the packet data
// src callback custom data
{
"src_callback": {
"address": "callbackAddressString",
// optional
"gas_limit": "userDefinedGasLimitString",
}
}
// dest callback custom data
{
"dest_callback": {
"address": "callbackAddressString",
// optional
"gas_limit": "userDefinedGasLimitString",
}
}
// src and dest callback custom data embedded together
{
"src_callback": {
"address": "callbackAddressString",
// optional
"gas_limit": "userDefinedGasLimitString",
},
"dest_callback": {
"address": "callbackAddressString",
// optional
"gas_limit": "userDefinedGasLimitString",
}
}
```
## ContractKeeper
The `ContractKeeper` interface must be implemented by any VM that wants to support IBC callbacks. This allows for separation of concerns
between the middleware which is handling logic intended for all VMs (e.g. setting gas meter, extracting callback data, emitting events),
while the ContractKeeper can handle the specific details of calling into the VM in question.
The `ContractKeeper` **may** impose additional checks such as ensuring that the contract address is the same as the packet sender in source callbacks.
It may also disable certain callback methods by simply performing a no-op.
```go
// ContractKeeper defines the entry points exposed to the VM module which invokes a smart contract
type ContractKeeper interface {
// IBCSendPacketCallback is called in the source chain when a PacketSend is executed. The
// packetSenderAddress is determined by the underlying module, and may be empty if the sender is
// unknown or undefined. The contract is expected to handle the callback within the user defined
// gas limit, and handle any errors, or panics gracefully.
// This entry point is called with a cached context. If an error is returned, then the changes in
// this context will not be persisted, and the error will be propagated to the underlying IBC
// application, resulting in a packet send failure.
//
// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
// validation on the origin of a given packet. It is recommended to perform the same validation
// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
//
// The version provided is the base application version for the given packet send. This allows
// contracts to determine how to unmarshal the packetData.
IBCSendPacketCallback(
cachedCtx sdk.Context,
sourcePort string,
sourceChannel string,
timeoutHeight clienttypes.Height,
timeoutTimestamp uint64,
packetData []byte,
contractAddress,
packetSenderAddress string,
version string,
) error
// IBCOnAcknowledgementPacketCallback is called in the source chain when a packet acknowledgement
// is received. The packetSenderAddress is determined by the underlying module, and may be empty if
// the sender is unknown or undefined. The contract is expected to handle the callback within the
// user defined gas limit, and handle any errors, or panics gracefully.
// This entry point is called with a cached context. If an error is returned, then the changes in
// this context will not be persisted, but the packet lifecycle will not be blocked.
//
// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
// validation on the origin of a given packet. It is recommended to perform the same validation
// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
//
// The version provided is the base application version for the given packet send. This allows
// contracts to determine how to unmarshal the packetData.
IBCOnAcknowledgementPacketCallback(
cachedCtx sdk.Context,
packet channeltypes.Packet,
acknowledgement []byte,
relayer sdk.AccAddress,
contractAddress,
packetSenderAddress string,
version string,
) error
// IBCOnTimeoutPacketCallback is called in the source chain when a packet is not received before
// the timeout height. The packetSenderAddress is determined by the underlying module, and may be
// empty if the sender is unknown or undefined. The contract is expected to handle the callback
// within the user defined gas limit, and handle any error, out of gas, or panics gracefully.
// This entry point is called with a cached context. If an error is returned, then the changes in
// this context will not be persisted, but the packet lifecycle will not be blocked.
//
// Implementations are provided with the packetSenderAddress and MAY choose to use this to perform
// validation on the origin of a given packet. It is recommended to perform the same validation
// on all source chain callbacks (SendPacket, AcknowledgementPacket, TimeoutPacket). This
// defensively guards against exploits due to incorrectly wired SendPacket ordering in IBC stacks.
//
// The version provided is the base application version for the given packet send. This allows
// contracts to determine how to unmarshal the packetData.
IBCOnTimeoutPacketCallback(
cachedCtx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
contractAddress,
packetSenderAddress string,
version string,
) error
// IBCReceivePacketCallback is called in the destination chain when a packet acknowledgement is written.
// The contract is expected to handle the callback within the user defined gas limit.
// This entry point is called with a cached context. If an error is returned, then the error
// will be written as an error acknowledgement. This will cause the context changes made by the contract
// to be reverted along with any state changes made by the underlying application.
// The error acknowledgement will then be relayed to the sending application which can perform
// its error acknowledgement logic (e.g. refunding tokens back to user)
//
// The version provided is the base application version for the given packet send. This allows
// contracts to determine how to unmarshal the packetData.
IBCReceivePacketCallback(
cachedCtx sdk.Context,
packet ibcexported.PacketI,
ack ibcexported.Acknowledgement,
contractAddress string,
version string,
) error
}
```
### PacketCallbacks
The packet callbacks implemented in the middleware will first call the underlying application and then route to the IBC actor callback in the post-processing step.
It will extract the callback data from the application packet and set the callback gas meter depending on the global limit, the user limit, and the gas left in the transaction gas meter.
The callback will then be routed through the callback keeper, which will either panic or return a result (success or failure). In the event of an error or a panic that is not out-of-gas, the callback state changes
are discarded and the transaction is committed.
If the relayer-defined gas limit is exceeded before the user-defined or global callback gas limit, then the entire transaction is reverted to allow for resubmission. If the chain-defined or user-defined gas limit is reached,
the callback state changes are reverted and the transaction is committed.
For the `SendPacket` callback, we will revert the entire transaction on any kind of error or panic. This is because the packet lifecycle has not yet started, so we can revert completely to avoid starting the packet lifecycle if the callback is not successful.
```go
// SendPacket implements source callbacks for sending packets.
// It defers to the underlying application and then calls the contract callback.
// If the contract callback returns an error, panics, or runs out of gas, then
// the packet send is rejected.
func (im IBCMiddleware) SendPacket(
ctx sdk.Context,
chanCap *capabilitytypes.Capability,
sourcePort string,
sourceChannel string,
timeoutHeight clienttypes.Height,
timeoutTimestamp uint64,
data []byte,
) (uint64, error) {
// run underlying app logic first
// IBCActor logic will postprocess
seq, err := im.ics4Wrapper.SendPacket(ctx, chanCap, sourcePort, sourceChannel, timeoutHeight, timeoutTimestamp, data)
if err != nil {
return 0, err
}
// use underlying app to get source callback information from packet data
callbackData, err := types.GetSourceCallbackData(im.app, data, sourcePort, ctx.GasMeter().GasRemaining(), im.maxCallbackGas)
// SendPacket is not blocked if the packet does not opt-in to callbacks
if err != nil {
return seq, nil
}
callbackExecutor := func(cachedCtx sdk.Context) error {
return im.contractKeeper.IBCSendPacketCallback(
cachedCtx, sourcePort, sourceChannel, timeoutHeight, timeoutTimestamp, data, callbackData.CallbackAddress, callbackData.SenderAddress,
)
}
err = im.processCallback(ctx, types.CallbackTypeSendPacket, callbackData, callbackExecutor)
// contract keeper is allowed to reject the packet send.
if err != nil {
return 0, err
}
types.EmitCallbackEvent(ctx, sourcePort, sourceChannel, seq, types.CallbackTypeSendPacket, callbackData, nil)
return seq, nil
}
// WriteAcknowledgement implements the ReceivePacket destination callbacks for the ibc-callbacks middleware
// during asynchronous packet acknowledgement.
// It defers to the underlying application and then calls the contract callback.
// If the contract callback runs out of gas and may be retried with a higher gas limit then the state changes are
// reverted via a panic.
func (im IBCMiddleware) WriteAcknowledgement(
ctx sdk.Context,
chanCap *capabilitytypes.Capability,
packet ibcexported.PacketI,
ack ibcexported.Acknowledgement,
) error {
// run underlying app logic first
// IBCActor logic will postprocess
err := im.ics4Wrapper.WriteAcknowledgement(ctx, chanCap, packet, ack)
if err != nil {
return err
}
// use underlying app to get destination callback information from packet data
callbackData, err := types.GetDestCallbackData(
im.app, packet.GetData(), packet.GetSourcePort(), ctx.GasMeter().GasRemaining(), im.maxCallbackGas,
)
// WriteAcknowledgement is not blocked if the packet does not opt-in to callbacks
if err != nil {
return nil
}
callbackExecutor := func(cachedCtx sdk.Context) error {
return im.contractKeeper.IBCReceivePacketCallback(cachedCtx, packet, ack, callbackData.CallbackAddress, callbackData.ApplicationVersion)
}
// callback execution errors are not allowed to block the packet lifecycle, they are only used in event emissions
err = im.processCallback(ctx, types.CallbackTypeReceivePacket, callbackData, callbackExecutor)
// emit events
types.EmitCallbackEvent(
ctx, packet.GetDestPort(), packet.GetDestChannel(), packet.GetSequence(),
types.CallbackTypeReceivePacket, callbackData, err,
)
return nil
}
// Call the IBCActor recvPacket callback after processing the packet
// if the recvPacket callback exists. If the callback returns an error
// then return an error ack to revert all packet data processing.
func (im IBCMiddleware) OnRecvPacket(
ctx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
) ibcexported.Acknowledgement {
// run underlying app logic first
// IBCActor logic will postprocess
ack := im.app.OnRecvPacket(ctx, packet, relayer)
// if ack is nil (asynchronous acknowledgements), then the callback will be handled in WriteAcknowledgement
// if ack is not successful, all state changes are reverted. If a packet cannot be received, then there is
// no need to execute a callback on the receiving chain.
if ack == nil || !ack.Success() {
return ack
}
// use underlying app to get destination callback information from packet data
callbackData, err := types.GetDestCallbackData(
im.app, packet.GetData(), packet.GetSourcePort(), ctx.GasMeter().GasRemaining(), im.maxCallbackGas,
)
// OnRecvPacket is not blocked if the packet does not opt-in to callbacks
if err != nil {
return ack
}
callbackExecutor := func(cachedCtx sdk.Context) error {
return im.contractKeeper.IBCReceivePacketCallback(cachedCtx, packet, ack, callbackData.CallbackAddress, callbackData.ApplicationVersion)
}
// callback execution errors are not allowed to block the packet lifecycle, they are only used in event emissions
err = im.processCallback(ctx, types.CallbackTypeReceivePacket, callbackData, callbackExecutor)
types.EmitCallbackEvent(
ctx, packet.GetDestPort(), packet.GetDestChannel(), packet.GetSequence(),
types.CallbackTypeReceivePacket, callbackData, err,
)
if err != nil {
return channeltypes.NewErrorAcknowledgement(err)
}
return ack
}
// Call the IBCActor acknowledgementPacket callback after processing the packet,
// if the ackPacket callback exists. If the callback returns an error,
// DO NOT return the error upstream. The acknowledgement must complete for the packet
// lifecycle to end, so the custom callback cannot block completion.
// Instead we emit error events and set the error in state
// so that users and on-chain logic can handle this appropriately
func (im IBCModule) OnAcknowledgementPacket(
ctx sdk.Context,
packet channeltypes.Packet,
acknowledgement []byte,
relayer sdk.AccAddress,
) error {
// we first call the underlying app to handle the acknowledgement
// IBCActor logic will postprocess
err := im.app.OnAcknowledgementPacket(ctx, packet, acknowledgement, relayer)
if err != nil {
return err
}
// use underlying app to get source callback information from packet data
callbackData, err := types.GetSourceCallbackData(
im.app, packet.GetData(), packet.GetSourcePort(), ctx.GasMeter().GasRemaining(), im.maxCallbackGas,
)
// OnAcknowledgementPacket is not blocked if the packet does not opt-in to callbacks
if err != nil {
return nil
}
callbackExecutor := func(cachedCtx sdk.Context) error {
return im.contractKeeper.IBCOnAcknowledgementPacketCallback(
cachedCtx, packet, acknowledgement, relayer, callbackData.CallbackAddress, callbackData.SenderAddress, callbackData.ApplicationVersion,
)
}
// callback execution errors are not allowed to block the packet lifecycle, they are only used in event emissions
err = im.processCallback(ctx, types.CallbackTypeAcknowledgementPacket, callbackData, callbackExecutor)
types.EmitCallbackEvent(
ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetSequence(),
types.CallbackTypeAcknowledgementPacket, callbackData, err,
)
return nil
}
// Call the IBCActor timeoutPacket callback after processing the packet,
// if the timeoutPacket callback exists. If the callback returns an error,
// DO NOT return the error upstream. The timeout must complete for the packet
// lifecycle to end, so the custom callback cannot block completion.
// Instead we emit error events and set the error in state
// so that users and on-chain logic can handle this appropriately
func (im IBCModule) OnTimeoutPacket(
ctx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
) error {
// application-specific onTimeoutPacket logic
err := im.app.OnTimeoutPacket(ctx, packet, relayer)
if err != nil {
return err
}
// use underlying app to get source callback information from packet data
callbackData, err := types.GetSourceCallbackData(
im.app, packet.GetData(), packet.GetSourcePort(), ctx.GasMeter().GasRemaining(), im.maxCallbackGas,
)
// OnTimeoutPacket is not blocked if the packet does not opt-in to callbacks
if err != nil {
return nil
}
callbackExecutor := func(cachedCtx sdk.Context) error {
return im.contractKeeper.IBCOnTimeoutPacketCallback(cachedCtx, packet, relayer, callbackData.CallbackAddress, callbackData.SenderAddress, callbackData.ApplicationVersion)
}
// callback execution errors are not allowed to block the packet lifecycle, they are only used in event emissions
err = im.processCallback(ctx, types.CallbackTypeTimeoutPacket, callbackData, callbackExecutor)
types.EmitCallbackEvent(
ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetSequence(),
types.CallbackTypeTimeoutPacket, callbackData, err,
)
return nil
}
// processCallback executes the callbackExecutor and reverts contract changes if the callbackExecutor fails.
//
// Error Precedence and Returns:
// - oogErr: Takes the highest precedence. If the callback runs out of gas, an error wrapped with types.ErrCallbackOutOfGas is returned.
// - panicErr: Takes the second-highest precedence. If a panic occurs and it is not propagated, an error wrapped with types.ErrCallbackPanic is returned.
// - callbackErr: If the callbackExecutor returns an error, it is returned as-is.
//
// panics if
// - the contractExecutor panics for any reason, and the callbackType is SendPacket, or
// - the contractExecutor runs out of gas and the relayer has not reserved gas greater than or equal to
// CommitGasLimit.
func (IBCMiddleware) processCallback(
ctx sdk.Context, callbackType types.CallbackType,
callbackData types.CallbackData, callbackExecutor func(sdk.Context) error,
) (err error) {
cachedCtx, writeFn := ctx.CacheContext()
cachedCtx = cachedCtx.WithGasMeter(storetypes.NewGasMeter(callbackData.ExecutionGasLimit))
defer func() {
// consume the minimum of g.consumed and g.limit
ctx.GasMeter().ConsumeGas(cachedCtx.GasMeter().GasConsumedToLimit(), fmt.Sprintf("ibc %s callback", callbackType))
// recover from all panics except during SendPacket callbacks
if r := recover(); r != nil {
if callbackType == types.CallbackTypeSendPacket {
panic(r)
}
err = errorsmod.Wrapf(types.ErrCallbackPanic, "ibc %s callback panicked with: %v", callbackType, r)
}
// if the callback ran out of gas and the relayer has not reserved enough gas, then revert the state
if cachedCtx.GasMeter().IsPastLimit() {
if callbackData.AllowRetry() {
panic(storetypes.ErrorOutOfGas{Descriptor: fmt.Sprintf("ibc %s callback out of gas; commitGasLimit: %d", callbackType, callbackData.CommitGasLimit)})
}
err = errorsmod.Wrapf(types.ErrCallbackOutOfGas, "ibc %s callback out of gas", callbackType)
}
// allow the transaction to be committed, continuing the packet lifecycle
}()
err = callbackExecutor(cachedCtx)
if err == nil {
writeFn()
}
return err
}
```
Chains are expected to specify a `maxCallbackGas` to ensure that callbacks do not consume an arbitrary amount of gas. Thus, it should always be possible for a relayer to complete the packet lifecycle even if the actor callbacks cannot run successfully.
## Consequences
### Positive
- IBC Actors can now programmatically execute logic that involves sending a packet and then performing some additional logic once the packet lifecycle is complete
- Middleware implementing ADR-8 can be generally used for any application
- Leverages a similar callback architecture to the one used between core IBC and IBC applications
### Negative
- Callbacks may now have unbounded gas consumption since the actor may execute arbitrary logic. Chains implementing this feature should take care to place limitations on how much gas an actor callback can consume.
- The relayer pays for the callback gas instead of the IBCActor
### Neutral
- Application packets that want to support ADR-8 must additionally have their packet data implement `PacketDataProvider` and `PacketData` interfaces.
- Applications must implement `PacketDataUnmarshaler` interface
- Callback receiving module must implement the `ContractKeeper` interface
## References
- [Original issue](https://github.com/cosmos/ibc-go/issues/1660)
- [CallbackPacketData interface implementation](https://github.com/cosmos/ibc-go/pull/3287)
- [ICS 20, ICS 27 implementations of the CallbackPacketData interface](https://github.com/cosmos/ibc-go/pull/3287)

# ADR 009: ICS27 message server addition
## Changelog
- 2022/09/07: Initial draft
## Status
Accepted, implemented in v6 of ibc-go
## Context
ICS 27 (Interchain Accounts) brought a cross-chain account management protocol built upon IBC.
It enabled chains to programmatically create accounts on behalf of counterparty chains which may enable a variety of authentication methods for this interchain account.
The initial release of ICS 27 focused on enabling authentication schemes that may not require signing with a private key, such as via on-chain mechanisms like governance.
Following the initial release of ICS 27 it became evident that:
- a default authentication module would enable more usage of ICS 27
- generic authentication modules should be capable of authenticating an interchain account registration
- application logic which wraps ICS 27 packet sends does not need to be associated with the authentication logic
## Decision
The controller module should be simplified to remove the correlation between the authentication logic for an interchain account and the application logic for an interchain account.
To minimize disruption to developers working on the original design of the ICS 27 controller module, all changes will be made in a backwards compatible fashion.
### Msg server
To achieve this, as stated by [@damiannolan](https://github.com/cosmos/ibc-go/issues/2026#issue-1341640594), it was proposed to:
> Add a new `MsgServer` to `27-interchain-accounts` which exposes two distinct rpc endpoints:
>
> - `RegisterInterchainAccount`
> - `SendTx`
This will enable any SDK (authentication) module to register interchain accounts and send transactions on their behalf.
Examples of existing SDK modules which would benefit from this change include:
- x/auth
- x/gov
- x/group
The existing go functions: `RegisterInterchainAccount()` and `SendTx()` will remain to operate as they did in previous release versions.
This will be possible for SDK v0.46.x and above.
### Allow `nil` underlying applications
Authentication modules should interact with the controller module via the message server and should not be associated with application logic.
For now, it will be allowed to set a `nil` underlying application.
A future version may remove the underlying application entirely.
See issue [#2040](https://github.com/cosmos/ibc-go/issues/2040)
### Channel capability claiming
The controller module will now claim the channel capability in `OnChanOpenInit`.
Underlying applications will be passed a `nil` capability in `OnChanOpenInit`.
Channel capability migrations will be added in two steps:
- Upgrade handler migration which modifies the channel capability owner from the underlying app to the controller module
- ICS 27 module automatic migration which asserts the upgrade handler channel capability migration has been performed successfully
See issue [#2033](https://github.com/cosmos/ibc-go/issues/2033)
### Middleware enabled channels
In order to maintain backwards compatibility and avoid requiring underlying application developers to account for interchain accounts they did not register, a boolean mapping has been added to track the behaviour of how an account was created.
If the account was created via the legacy API, then the underlying application callbacks will be executed.
If the account was created with the new API (message server), then the underlying application callbacks will not be executed.
See issue [#2145](https://github.com/cosmos/ibc-go/issues/2145)
### Future considerations
[ADR 008](https://github.com/cosmos/ibc-go/pull/1976) proposes the creation of a middleware which enables callers of an IBC packet send to perform application logic in conjunction with the IBC application.
The underlying application can be removed once such a middleware is available, as that will be the preferred method for executing application logic upon an ICS 27 packet send.
### Miscellaneous
In order to avoid import cycles, the genesis types have been moved to their own directory.
A new protobuf package has been created for the genesis types.
See PR [#2133](https://github.com/cosmos/ibc-go/pull/2133)
An additional field has been added to the `ActiveChannel` type to store the `IsMiddlewareEnabled` field upon genesis import/export.
See issue [#2165](https://github.com/cosmos/ibc-go/issues/2165)
## Consequences
### Positive
- default authentication modules are provided (x/auth, x/group, x/gov)
- any SDK authentication module may now be used with ICS 27
- separation of authentication from application logic in relation to ICS 27
- minimized disruption to existing development around ICS 27 controller module
- underlying applications no longer have to handle capabilities
- removal of the underlying application upon the creation of ADR 008 may be done in a minimally disruptive fashion
- only underlying applications which registered the interchain account will perform application logic for that account (underlying applications do not need to be aware of accounts they did not register)
### Negative
- the security model has been reduced to that of the SDK. SDK modules may send packets for any interchain account.
- additional maintenance of the messages added and the middleware enabled flag
- underlying applications which will become ADR 008 modules are not required to be aware of accounts they did not register
- calling legacy API vs the new API results in different behaviour for ICS 27 application stacks which have an underlying application
### Neutral
- A major release is required

# ADR 010: IBC light clients as SDK modules
## Changelog
- 12/12/2022: initial draft
## Status
Proposed
## Context
ibc-go has 3 main consumers:
- IBC light clients
- IBC applications
- relayers
Relayers listen and respond to events emitted by ibc-go while IBC light clients and applications are invoked by core IBC.
Currently there exist two different approaches to callbacks being invoked by core IBC.
IBC light clients are currently invoked via the `ClientState` and `ConsensusState` interfaces defined by [core IBC](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/exported/client.go#L36).
The 02-client submodule will retrieve the `ClientState` or `ConsensusState` from the IBC store in order to perform callbacks to the light client.
This design requires that all information the light client needs to function be stored in the `ClientState` or `ConsensusState`, or potentially under metadata keys for a specific client instance.
Additional information may be provided by core IBC via the defined interface arguments if that information is generic enough to be useful to all IBC light clients.
This constraint has proved problematic as pass through clients (such as wasm) cannot maintain easy access to a VM instance.
In addition, without increasing the size of the defined `ClientState` interface, light clients are unable to take advantage of basic built-in SDK functionality such as genesis import/export and migrations.
The other approach used to perform callback logic is via registered SDK modules.
This approach is used by core IBC to interact with IBC applications.
IBC applications will register their callbacks on the IBC router at compile time.
When a packet comes in, core IBC will use the IBC router to lookup the registered callback functions for the provided packet.
The benefit of registered callbacks opposed to interface functions is that additional information may be accessed via external keepers.
Because the IBC applications are also SDK modules, they additionally get access to a host of functionality provided by the SDK.
This includes: genesis import/export, migrations, query/transaction CLI commands, type registration, gRPC query registration, and message server registration.
As described in [ADR 006](./adr-006-02-client-refactor.md), generalizing light client behaviour is difficult.
IBC light clients will obtain greater flexibility and control via the registered SDK module approach.
## Decision
Instead of using two different approaches to invoking callbacks, IBC light clients should be invoked as SDK modules.
Over time and as necessary, core IBC should adjust its interactions with light clients such that they are SDK modules as opposed to interfaces.
One immediate decision that has already been applied is to formalize light client type registration via the inclusion of an `AppModuleBasic` within the `ModuleManager` for a chain.
The [tendermint](https://github.com/cosmos/ibc-go/pull/2825) and [solo machine](https://github.com/cosmos/ibc-go/pull/2826) clients were refactored to include this `AppModuleBasic` implementation and core IBC will no longer include either type as registered by default.
Longer term solutions include using internal module communication as described in [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-033-protobuf-inter-module-comm.md) on the SDK.
The following functions should become callbacks invoked via intermodule communication:
- `Status`
- `GetTimestampAtHeight`
- `VerifyMembership`
- `VerifyNonMembership`
- `Initialize`
- `VerifyClientMessage`
- `CheckForMisbehaviour`
- `UpdateStateOnMisbehaviour`
- `UpdateState`
- `CheckSubstituteAndUpdateState`
- `VerifyUpgradeAndUpdateState`
The ClientState interface should eventually be trimmed down to something along the lines of:
```go
type ClientState interface {
proto.Message
ClientType() string
GetLatestHeight() Height
Validate() error
ZeroCustomFields() ClientState
// ADDITION
Route() string // route used for intermodule communication
}
```
For the most part, any functions which require access to the client store should likely not be an interface function of the `ClientState`.
`ExportMetadata` should eventually be replaced by a light client's ability to import/export its own genesis information.
### Intermodule communication
To keep the transition from interface callbacks to SDK module callbacks as simple as possible, intermodule communication (when available) should be used to route to light client modules.
Without intermodule communication, a routing system would need to be developed/maintained to register callbacks.
This functionality of routing to another SDK module should and will be provided by the SDK.
Once it is possible to route to SDK modules, a `ClientState` type could expose the function `Route` which returns the callback route used to call the light client module.
## Consequences
### Positive
- use a single approach for interacting with callbacks
- greater flexibility and control for IBC light clients
- does not require developing another routing system
### Negative
- requires breaking changes
- requires waiting for intermodule communication
### Neutral
N/A

# ADR 011: ICS-20 transfer state entry for total amount of tokens in escrow
## Changelog
- 2023-05-24: Initial draft
## Status
Accepted and applied in v7.1 of ibc-go
## Context
Every ICS-20 transfer channel has its own escrow bank account. This account is used to lock tokens that are transferred out of a chain that acts as the source of the tokens (i.e. when the tokens being transferred have not returned to the originating chain). This design makes it easy to query the balance of the escrow accounts and find out the total amount of tokens in escrow in a particular channel. However, there are use cases where it would be useful to determine the total escrowed amount of a given denomination across all channels where those tokens have been transferred out.
For example: assuming that there are three channels between Cosmos Hub to Osmosis and 10 ATOM have been transferred from the Cosmos Hub to Osmosis on each of those channels, then we would like to know that 30 ATOM have been transferred (i.e. are locked in the escrow accounts of each channel) without needing to iterate over each escrow account to add up the balances of each.
For a sample use case where this feature would be useful, please refer to Osmosis' rate limiting use case described in [#2664](https://github.com/cosmos/ibc-go/issues/2664).
## Decision
### State entry denom -> amount
The total amount of tokens in escrow (across all transfer channels) for a given denomination is stored in state in an entry keyed by the denomination: `totalEscrowForDenom/{denom}`.
### Panic if amount is negative
If a negative amount is ever attempted to be stored, then the keeper function will panic:
```go
if coin.Amount.IsNegative() {
panic(fmt.Sprintf("amount cannot be negative: %s", coin.Amount))
}
```
### Delete state entry if amount is zero
When setting the amount for a particular denomination, the value might be zero if all tokens that were transferred out of the chain have been transferred back. If this happens, then the state entry for this particular denomination will be deleted, since Cosmos SDK's `x/bank` module likewise prunes zero balances:
```go
if coin.Amount.IsZero() {
store.Delete(key) // delete the key since the Cosmos SDK x/bank module prunes zero balances
return
}
```
### Bundle escrow/unescrow with setting state entry
Two new functions are implemented that bundle together the operations of escrowing/unescrowing and setting the total escrow amount in state, since these operations need to be executed together.
For escrowing tokens:
```go
// escrowToken will send the given token from the provided sender to the escrow address. It will also
// update the total escrowed amount by adding the escrowed token to the current total escrow.
func (k Keeper) escrowToken(ctx sdk.Context, sender, escrowAddress sdk.AccAddress, token sdk.Coin) error {
if err := k.bankKeeper.SendCoins(ctx, sender, escrowAddress, sdk.NewCoins(token)); err != nil {
// failure is expected for insufficient balances
return err
}
// track the total amount in escrow keyed by denomination to allow for efficient iteration
currentTotalEscrow := k.GetTotalEscrowForDenom(ctx, token.GetDenom())
newTotalEscrow := currentTotalEscrow.Add(token)
k.SetTotalEscrowForDenom(ctx, newTotalEscrow)
return nil
}
```
For unescrowing tokens:
```go
// unescrowToken will send the given token from the escrow address to the provided receiver. It will also
// update the total escrow by deducting the unescrowed token from the current total escrow.
func (k Keeper) unescrowToken(ctx sdk.Context, escrowAddress, receiver sdk.AccAddress, token sdk.Coin) error {
if err := k.bankKeeper.SendCoins(ctx, escrowAddress, receiver, sdk.NewCoins(token)); err != nil {
// NOTE: this error is only expected to occur given an unexpected bug or a malicious
// counterparty module. The bug may occur in bank or any part of the code that allows
// the escrow address to be drained. A malicious counterparty module could drain the
// escrow address by allowing more tokens to be sent back than were escrowed.
return errorsmod.Wrap(err, "unable to unescrow tokens, this may be caused by a malicious counterparty module or a bug: please open an issue on counterparty module")
}
// track the total amount in escrow keyed by denomination to allow for efficient iteration
currentTotalEscrow := k.GetTotalEscrowForDenom(ctx, token.GetDenom())
newTotalEscrow := currentTotalEscrow.Sub(token)
k.SetTotalEscrowForDenom(ctx, newTotalEscrow)
return nil
}
```
When tokens need to be escrowed in `sendTransfer`, then `escrowToken` is called; when tokens need to be unescrowed on execution of the `OnRecvPacket`, `OnAcknowledgementPacket` or `OnTimeoutPacket` callbacks, then `unescrowToken` is called.
### gRPC query endpoint and CLI to retrieve amount
A gRPC query endpoint is added so that it is possible to retrieve the total amount for a given denomination:
```proto
// TotalEscrowForDenom returns the total amount of tokens in escrow based on the denom.
rpc TotalEscrowForDenom(QueryTotalEscrowForDenomRequest) returns (QueryTotalEscrowForDenomResponse) {
option (google.api.http).get = "/ibc/apps/transfer/v1/denoms/{denom=**}/total_escrow";
}
// QueryTotalEscrowForDenomRequest is the request type for TotalEscrowForDenom RPC method.
message QueryTotalEscrowForDenomRequest {
string denom = 1;
}
// QueryTotalEscrowForDenomResponse is the response type for TotalEscrowForDenom RPC method.
message QueryTotalEscrowForDenomResponse {
cosmos.base.v1beta1.Coin amount = 1 [(gogoproto.nullable) = false];
}
```
And a CLI query is also available to retrieve the total amount via the command line:
```shell
query ibc-transfer total-escrow [denom]
```
## Consequences
### Positive
- Possibility to retrieve the total amount of a particular denomination in escrow across all transfer channels without iteration.
### Negative
No notable consequences
### Neutral
- A new entry is added to state for every denomination that is transferred out of the chain.
## References
Issues:
- [#2664](https://github.com/cosmos/ibc-go/issues/2664)
PRs:
- [#3019](https://github.com/cosmos/ibc-go/pull/3019)
- [#3558](https://github.com/cosmos/ibc-go/pull/3558)

# ADR 015: IBC Packet Receiver
## Changelog
- 2019 Oct 22: Initial Draft
## Context
[ICS 26 - Routing Module](https://github.com/cosmos/ibc/tree/master/spec/core/ics-026-routing-module) defines a function [`handlePacketRecv`](https://github.com/cosmos/ibc/tree/master/spec/core/ics-026-routing-module#packet-relay).
In ICS 26, the routing module is defined as a layer above each application module
which verifies and routes messages to the destination modules. It is possible to
implement it as a separate module; however, the baseapp already has the functionality
to route messages based on their destination identifiers. This ADR suggests
utilizing the existing `baseapp.router` to route packets to application modules.
Generally, routing module callbacks have two separate steps in them,
verification and execution. This corresponds to the `AnteHandler`-`Handler`
model inside the SDK. We can do the verification inside the `AnteHandler`
in order to increase developer ergonomics by reducing boilerplate
verification code.
For atomic multi-message transactions, we want the IBC-related
state modifications to be preserved even if the application-side state changes
revert. One example is an IBC token transfer message followed by a
stake delegation that uses the tokens received by the previous packet message.
If receiving the tokens fails for any reason, we might not want to keep
executing the transaction, but we also don't want to abort it entirely,
or the sequence and commitment will be reverted and the channel will be stuck.
This ADR suggests a new `CodeType`, `CodeTxBreak`, to fix this problem.
## Decision
`PortKeeper` will have the capability key that is able to access only the
channels bound to the port. Entities that hold a `PortKeeper` will be
able to call the methods on it which are corresponding with the methods with
the same names on the `ChannelKeeper`, but only with the
allowed port. `ChannelKeeper.Port(string, ChannelChecker)` will be defined to
easily construct a capability-safe `PortKeeper`. This will be addressed in
another ADR; for now we will use the insecure `ChannelKeeper`.
`baseapp.runMsgs` will break the loop over the messages if one of the handlers
returns `!Result.IsOK()`. However, the outer logic will write the cached
store if `Result.IsOK() || Result.Code.IsBreak()`. `Result.Code.IsBreak()` if
`Result.Code == CodeTxBreak`.
```go
func (app *BaseApp) runTx(tx Tx) (result Result) {
	// (setup of ctx and the root multistore ms is elided)
	msgs := tx.GetMsgs()

	// AnteHandler
	if app.anteHandler != nil {
		anteCtx, msCache := app.cacheTxContext(ctx)
		newCtx, err := app.anteHandler(anteCtx, tx)
		if !newCtx.IsZero() {
			ctx = newCtx.WithMultiStore(ms)
		}

		if err != nil {
			// error handling logic
			return res
		}

		msCache.Write()
	}

	// Main Handler
	runMsgCtx, msCache := app.cacheTxContext(ctx)
	result = app.runMsgs(runMsgCtx, msgs)
	// BEGIN modification made in this ADR
	if result.IsOK() || result.IsBreak() {
		// END
		msCache.Write()
	}

	return result
}
```
The Cosmos SDK will define an `AnteDecorator` for IBC packet receiving. The
`AnteDecorator` will iterate over the messages included in the transaction, type
`switch` to check whether the message contains an incoming IBC packet, and if so
verify the Merkle proof.
```go
type ProofVerificationDecorator struct {
	clientKeeper  ClientKeeper
	channelKeeper ChannelKeeper
}

func (pvr ProofVerificationDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (Context, error) {
	for _, msg := range tx.GetMsgs() {
		var err error
		switch msg := msg.(type) {
		case client.MsgUpdateClient:
			err = pvr.clientKeeper.UpdateClient(msg.ClientID, msg.Header)
		case channel.MsgPacket:
			err = pvr.channelKeeper.RecvPacket(msg.Packet, msg.Proofs, msg.ProofHeight)
		case channel.MsgAcknowledgement:
			err = pvr.channelKeeper.AcknowledgementPacket(msg.Acknowledgement, msg.Proof, msg.ProofHeight)
		case channel.MsgTimeoutPacket:
			err = pvr.channelKeeper.TimeoutPacket(msg.Packet, msg.Proof, msg.ProofHeight, msg.NextSequenceRecv)
		case channel.MsgChannelOpenInit:
			err = pvr.channelKeeper.CheckOpen(msg.PortID, msg.ChannelID, msg.Channel)
		default:
			continue
		}

		if err != nil {
			return ctx, err
		}
	}

	return next(ctx, tx, simulate)
}
```
Here `MsgUpdateClient`, `MsgPacket`, `MsgAcknowledgement`, and `MsgTimeoutPacket`
are the `sdk.Msg` types corresponding to `handleUpdateClient`, `handleRecvPacket`,
`handleAcknowledgementPacket`, and `handleTimeoutPacket` of the routing module,
respectively.
The side effects of `RecvPacket`, `VerifyAcknowledgement`, and
`VerifyTimeout` will be extracted into separate functions,
`WriteAcknowledgement`, `DeleteCommitment`, and `DeleteCommitmentTimeout` respectively,
which will be called by the application handlers after execution.
`WriteAcknowledgement` writes the acknowledgement to state so that it can be
verified by the counterparty chain, and increments the sequence to prevent
double execution. `DeleteCommitment` deletes the stored commitment, and
`DeleteCommitmentTimeout` deletes the commitment and also closes the channel in
the case of an ordered channel.
```go
func (keeper ChannelKeeper) WriteAcknowledgement(ctx Context, packet Packet, ack []byte) {
	keeper.SetPacketAcknowledgement(ctx, packet.GetDestPort(), packet.GetDestChannel(), packet.GetSequence(), ack)
	keeper.SetNextSequenceRecv(ctx, packet.GetDestPort(), packet.GetDestChannel(), packet.GetSequence())
}

func (keeper ChannelKeeper) DeleteCommitment(ctx Context, packet Packet) {
	keeper.deletePacketCommitment(ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetSequence())
}

func (keeper ChannelKeeper) DeleteCommitmentTimeout(ctx Context, packet Packet) {
	keeper.deletePacketCommitment(ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetSequence())
	channel, _ := keeper.GetChannel(ctx, packet.GetSourcePort(), packet.GetSourceChannel())
	if channel.Ordering == types.ORDERED {
		channel.State = types.CLOSED
		keeper.SetChannel(ctx, packet.GetSourcePort(), packet.GetSourceChannel(), channel)
	}
}
```
Each application handler should call the respective finalization method on the `PortKeeper`
in order to increment the sequence (in the case of a packet) or remove the commitment
(in the case of an acknowledgement or timeout).
Calling those functions implies that the application logic has executed successfully.
However, a handler can return a `Result` with `CodeTxBreak` after calling those methods,
which will persist the state changes already made but prevent any further
messages from being executed, in the case of a semantically invalid packet. This keeps the
sequence incremented for the previous IBC packets (thus preventing double execution) without
proceeding to the following messages.
In any case, application modules should never return a state-reverting result,
which would make the channel unable to proceed.
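The gating behaviour described above can be sketched in isolation. In the sketch below, the `Result` type and code values are simplified stand-ins for illustration, not the actual baseapp types, and the ADR does not fix a numeric value for `CodeTxBreak`:

```go
package main

import "fmt"

// Code is a simplified stand-in for an SDK result code.
type Code uint32

const (
	CodeOK      Code = 0
	CodeTxBreak Code = 1 // hypothetical value; the ADR does not fix a number
	CodeErr     Code = 2
)

// Result is a reduced stand-in for sdk.Result.
type Result struct{ Code Code }

func (r Result) IsOK() bool    { return r.Code == CodeOK }
func (r Result) IsBreak() bool { return r.Code == CodeTxBreak }

// shouldWriteCache mirrors the modified check in baseapp.runTx:
// state is persisted when execution succeeded OR when the handler asked
// to break the message loop while keeping its state changes.
func shouldWriteCache(r Result) bool {
	return r.IsOK() || r.IsBreak()
}

func main() {
	fmt.Println(shouldWriteCache(Result{Code: CodeOK}))      // true
	fmt.Println(shouldWriteCache(Result{Code: CodeTxBreak})) // true: IBC state kept
	fmt.Println(shouldWriteCache(Result{Code: CodeErr}))     // false: everything reverted
}
```

The key point the sketch makes is that a break result is treated like success for the purposes of persisting the cached store, even though it stops the message loop.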
A `ChannelKeeper.CheckOpen` method will be introduced. It will replace the `onChanOpen*` callbacks defined
in the routing-module specification. Instead of defining each channel handshake callback
function, application modules can provide a `ChannelChecker` function with the `AppModule`,
which will be injected into `ChannelKeeper.Port()` at the top-level application.
`CheckOpen` will find the correct `ChannelChecker` using the
`PortID` and call it, and the checker will return an error if the channel is unacceptable to the application.
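A minimal sketch of the dispatch `CheckOpen` performs, assuming a simple registry keyed by `PortID`. The function names and signatures here are illustrative, not the final keeper API:

```go
package main

import "fmt"

// ChannelChecker is the per-application validation function an AppModule
// provides; the real signature would also take the full Channel struct.
type ChannelChecker func(portID, channelID string) error

// checkers maps PortID to the checker injected via ChannelKeeper.Port().
var checkers = map[string]ChannelChecker{}

// RegisterPort records the checker for a port, as the top-level
// application would when wiring modules together.
func RegisterPort(portID string, c ChannelChecker) { checkers[portID] = c }

// CheckOpen finds the correct ChannelChecker by PortID and calls it,
// returning an error when no module is bound to the port.
func CheckOpen(portID, channelID string) error {
	checker, ok := checkers[portID]
	if !ok {
		return fmt.Errorf("no module bound to port %q", portID)
	}
	return checker(portID, channelID)
}

func main() {
	RegisterPort("transfer", func(portID, channelID string) error { return nil })
	fmt.Println(CheckOpen("transfer", "channel-0")) // <nil>
	fmt.Println(CheckOpen("unknown", "channel-0"))  // error: no module bound
}
```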
The `ProofVerificationDecorator` will be inserted into the top-level application.
It is not safe to make each module responsible for calling the proof verification
logic, as an application could misbehave (in terms of the IBC protocol) by
mistake.
The `ProofVerificationDecorator` should come right after the default Sybil-attack-resistant
layer from the current `auth.NewAnteHandler`:
```go
// add the IBC ProofVerificationDecorator to the chain of AnteDecorators
func NewAnteHandler(
	ak keeper.AccountKeeper, supplyKeeper types.SupplyKeeper, ibcKeeper ibc.Keeper,
	sigGasConsumer SignatureVerificationGasConsumer) sdk.AnteHandler {
	return sdk.ChainAnteDecorators(
		NewSetUpContextDecorator(), // outermost AnteDecorator. SetUpContext must be called first
		...
		NewIncrementSequenceDecorator(ak),
		ibcante.ProofVerificationDecorator(ibcKeeper.ClientKeeper, ibcKeeper.ChannelKeeper), // innermost AnteDecorator
	)
}
```
The implementation of this ADR will also add a `Data` field of type `[]byte` to the `Packet`, which the receiving module can deserialize into its own private type. It is up to the application modules to interpret this data according to their own logic, not the IBC keeper. This is crucial for dynamic IBC.
Example application-side usage:
```go
type AppModule struct{}

// CheckChannel will be provided to the ChannelKeeper as ChannelKeeper.Port(module.CheckChannel)
func (module AppModule) CheckChannel(portID, channelID string, channel Channel) error {
	if channel.Ordering != UNORDERED {
		return ErrUncompatibleOrdering()
	}
	if channel.CounterpartyPort != "bank" {
		return ErrUncompatiblePort()
	}
	if channel.Version != "" {
		return ErrUncompatibleVersion()
	}
	return nil
}

func NewHandler(k Keeper) Handler {
	return func(ctx Context, msg Msg) Result {
		switch msg := msg.(type) {
		case MsgTransfer:
			return handleMsgTransfer(ctx, k, msg)
		case ibc.MsgPacket:
			var data PacketDataTransfer
			if err := types.ModuleCodec.UnmarshalBinaryBare(msg.GetData(), &data); err != nil {
				return sdk.ResultFromError(err)
			}
			return handlePacketDataTransfer(ctx, k, msg, data)
		case ibc.MsgTimeoutPacket:
			var data PacketDataTransfer
			if err := types.ModuleCodec.UnmarshalBinaryBare(msg.GetData(), &data); err != nil {
				return sdk.ResultFromError(err)
			}
			return handleTimeoutPacketDataTransfer(ctx, k, msg, data)
		// interface { PortID() string; ChannelID() string; Channel() ibc.Channel }
		// MsgChanInit and MsgChanTry implement ibc.MsgChannelOpen
		case ibc.MsgChannelOpen:
			return handleMsgChannelOpen(ctx, k, msg)
		default:
			return sdk.ErrUnknownRequest("unrecognized message type").Result()
		}
	}
}

func handleMsgTransfer(ctx Context, k Keeper, msg MsgTransfer) Result {
	err := k.SendTransfer(ctx, msg.PortID, msg.ChannelID, msg.Amount, msg.Sender, msg.Receiver)
	if err != nil {
		return sdk.ResultFromError(err)
	}
	return sdk.Result{}
}

func handlePacketDataTransfer(ctx Context, k Keeper, packet Packet, data PacketDataTransfer) Result {
	err := k.ReceiveTransfer(ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetDestinationPort(), packet.GetDestinationChannel(), data)
	if err != nil {
		// TODO: Source chain sent invalid packet, shutdown channel
	}
	k.ChannelKeeper.WriteAcknowledgement(ctx, packet, []byte{0x00}) // WriteAcknowledgement increments the sequence, preventing double spending
	return sdk.Result{}
}

func handleTimeoutPacketDataTransfer(ctx Context, k Keeper, packet Packet, data PacketDataTransfer) Result {
	err := k.RecoverTransfer(ctx, packet.GetSourcePort(), packet.GetSourceChannel(), packet.GetDestinationPort(), packet.GetDestinationChannel(), data)
	if err != nil {
		// This chain sent an invalid packet or cannot recover the funds
		panic(err)
	}
	k.ChannelKeeper.DeleteCommitmentTimeout(ctx, packet)
	// packet timeout should not fail
	return sdk.Result{}
}

func handleMsgChannelOpen(ctx Context, k Keeper, msg ibc.MsgChannelOpen) Result {
	k.AllocateEscrowAddress(ctx, msg.ChannelID())
	return sdk.Result{}
}
```
## Status
Proposed
## Consequences
### Positive
- Intuitive interface for developers - IBC handlers do not need to care about IBC authentication
- State change commitment logic is embedded into `baseapp.runTx` logic
### Negative
- Cannot support dynamic ports, routing is tied to the baseapp router
### Neutral
- Introduces new `AnteHandler` decorator.
- Dynamic ports can be supported using hierarchical port identifier, see #5290 for detail
## References
- Relevant comment: [cosmos/ics#289](https://github.com/cosmos/ibc/issues/289#issuecomment-544533583)
- [ICS26 - Routing Module](https://github.com/cosmos/ibc/tree/master/spec/core/ics-026-routing-module)

# ADR 025: IBC Passive Channels
## Changelog
- 2021-04-23: Change status to "deprecated"
- 2020-05-23: Provide sample Go code and more details
- 2020-05-18: Initial Draft
## Status
*deprecated*
## Context
The current "naive" IBC relayer strategy establishes a single predetermined IBC channel atop a single connection between two clients (each potentially of a different chain). This strategy then detects packets to be relayed by watching for `send_packet` and `recv_packet` events matching that channel, and sends the necessary transactions to relay those packets.
We wish to expand this "naive" strategy to a "passive" one which detects and relays both channel handshake messages and packets on a given connection, without the need to know each channel in advance of relaying it.
In order to accomplish this, we propose adding more comprehensive events to expose channel metadata for each transaction sent from the `x/ibc/core/04-channel/keeper/handshake.go` and `x/ibc/core/04-channel/keeper/packet.go` modules.
Here is an example of what would be in `ChanOpenInit`:
```go
const (
	EventTypeChannelMeta      = "channel_meta"
	AttributeKeyAction        = "action"
	AttributeKeySrcConnection = "src_connection"
	AttributeKeyHops          = "hops"
	AttributeKeyOrder         = "order"
	AttributeKeySrcPort       = "src_port"
	AttributeKeySrcChannel    = "src_channel"
	AttributeKeySrcVersion    = "src_version"
	AttributeKeyDstPort       = "dst_port"
	AttributeKeyDstChannel    = "dst_channel"
	AttributeKeyDstVersion    = "dst_version"
)

// ...
// Emit Event with Channel metadata for the relayer to pick up and
// relay to the other chain
// This appears immediately before the successful return statement.
ctx.EventManager().EmitEvents(sdk.Events{
	sdk.NewEvent(
		types.EventTypeChannelMeta,
		sdk.NewAttribute(types.AttributeKeyAction, "open_init"),
		sdk.NewAttribute(types.AttributeKeySrcConnection, connectionHops[0]),
		sdk.NewAttribute(types.AttributeKeyHops, strings.Join(connectionHops, ",")),
		sdk.NewAttribute(types.AttributeKeyOrder, order.String()),
		sdk.NewAttribute(types.AttributeKeySrcPort, portID),
		sdk.NewAttribute(types.AttributeKeySrcChannel, channelID),
		sdk.NewAttribute(types.AttributeKeySrcVersion, version),
		sdk.NewAttribute(types.AttributeKeyDstPort, counterparty.GetPortID()),
		sdk.NewAttribute(types.AttributeKeyDstChannel, counterparty.GetChannelID()),
		// The destination version is not yet known, but a value is necessary to pad
		// the event attribute offsets
		sdk.NewAttribute(types.AttributeKeyDstVersion, ""),
	),
})
```
These metadata events capture all the "header" information needed to route IBC channel handshake transactions without requiring the client to query any data except that of the connection ID that it is willing to relay. It is intended that `channel_meta.src_connection` is the only event key that needs to be indexed for a passive relayer to function.
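As an illustration of that single-key indexing claim, a passive relayer's client-side filtering step might look like the following sketch. The event shape is reduced to a plain map for illustration; a real relayer would subscribe over Tendermint RPC with a query on `channel_meta.src_connection` instead of filtering locally:

```go
package main

import "fmt"

// Event is a reduced stand-in for an ABCI event: a type plus attributes.
type Event struct {
	Type  string
	Attrs map[string]string
}

// FilterByConnection keeps only channel_meta events whose src_connection
// matches the connection the relayer is willing to relay.
func FilterByConnection(events []Event, connID string) []Event {
	var out []Event
	for _, ev := range events {
		if ev.Type == "channel_meta" && ev.Attrs["src_connection"] == connID {
			out = append(out, ev)
		}
	}
	return out
}

func main() {
	evs := []Event{
		{Type: "channel_meta", Attrs: map[string]string{"src_connection": "connection-0", "action": "open_init"}},
		{Type: "channel_meta", Attrs: map[string]string{"src_connection": "connection-7", "action": "open_init"}},
		{Type: "transfer", Attrs: map[string]string{"src_connection": "connection-0"}},
	}
	fmt.Println(len(FilterByConnection(evs, "connection-0"))) // 1
}
```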
### Handling Channel Open Attempts
In the case of the passive relayer, when one chain sends a `ChanOpenInit`, the relayer should inform the other chain of this open attempt and allow that chain to decide how (and if) it continues the handshake. Once both chains have actively approved the channel opening, then the rest of the handshake can happen as it does with the current "naive" relayer.
To implement this behavior, we propose replacing the `cbs.OnChanOpenTry` callback with a new `cbs.OnAttemptChanOpenTry` callback which explicitly handles the `MsgChannelOpenTry`, usually by resulting in a call to `keeper.ChanOpenTry`. The typical implementation, in `x/ibc-transfer/module.go` would be compatible with the current "naive" relayer, as follows:
```go
func (am AppModule) OnAttemptChanOpenTry(
	ctx sdk.Context,
	chanKeeper channel.Keeper,
	portCap *capability.Capability,
	msg channel.MsgChannelOpenTry,
) (*sdk.Result, error) {
	// Require portID is the portID transfer module is bound to
	boundPort := am.keeper.GetPort(ctx)
	if boundPort != msg.PortID {
		return nil, sdkerrors.Wrapf(porttypes.ErrInvalidPort, "invalid port: %s, expected %s", msg.PortID, boundPort)
	}

	// BEGIN NEW CODE
	// Assert our protocol version, overriding the relayer's suggestion.
	msg.Version = types.Version
	// Continue the ChanOpenTry.
	res, chanCap, err := channel.HandleMsgChannelOpenTry(ctx, chanKeeper, portCap, msg)
	if err != nil {
		return nil, err
	}
	// END OF NEW CODE

	// ... the rest of the callback is similar to the existing OnChanOpenTry
	// but uses msg.* directly.
```
Here is how this callback would be used, in the implementation of `x/ibc/handler.go`:
```go
// ...
case channel.MsgChannelOpenTry:
	// Lookup module by port capability
	module, portCap, err := k.PortKeeper.LookupModuleByPort(ctx, msg.PortID)
	if err != nil {
		return nil, sdkerrors.Wrap(err, "could not retrieve module from port-id")
	}

	// Retrieve callbacks from router
	cbs, ok := k.Router.GetRoute(module)
	if !ok {
		return nil, sdkerrors.Wrapf(port.ErrInvalidRoute, "route not found to module: %s", module)
	}

	// Delegate to the module's OnAttemptChanOpenTry.
	return cbs.OnAttemptChanOpenTry(ctx, k.ChannelKeeper, portCap, msg)
```
The reason we do not have a more structured interaction between `x/ibc/handler.go` and the port's module (to explicitly negotiate versions, etc) is that we do not wish to constrain the app module to have to finish handling the `MsgChannelOpenTry` during this transaction or even this block.
## Decision
- Expose events to allow "passive" connection relayers.
- Enable application-initiated channels via such passive relayers.
- Allow port modules to control how to handle open-try messages.
## Consequences
### Positive
Makes channels into a complete application-level abstraction.
Applications have full control over initiating and accepting channels, rather than expecting a relayer to tell them when to do so.
A passive relayer does not have to know what kind of channel (version string, ordering constraints, firewalling logic) the application supports. These are negotiated directly between applications.
### Negative
Increased event size for IBC messages.
### Neutral
More IBC events are exposed.
## References
- The Agoric VM's IBC handler currently [accommodates `attemptChanOpenTry`](https://github.com/Agoric/agoric-sdk/blob/904b3a0423222a1b32893453e44bbde598473960/packages/cosmic-swingset/lib/ag-solo/vats/ibc.js#L546)

# ADR 026: IBC Client Recovery Mechanisms
## Changelog
- 2020/06/23: Initial version
- 2020/08/06: Revisions per review & to reference version
- 2021/01/15: Revision to support substitute clients for unfreezing
- 2021/05/20: Revision to simplify consensus state copying, remove initial height
- 2022/04/08: Revision to deprecate AllowUpdateAfterExpiry and AllowUpdateAfterMisbehaviour
- 2022/07/15: Revision to allow updating of TrustingPeriod
- 2023/09/05: Revision to migrate from gov v1beta1 to gov v1
## Status
*Accepted*
## Context
### Summary
At launch, IBC will be a novel protocol, without an experienced user-base. At the protocol layer, it is not possible to distinguish between client expiry or misbehaviour due to genuine faults (Byzantine behaviour) and client expiry or misbehaviour due to user mistakes (failing to update a client, or accidentally double-signing). In the base IBC protocol and ICS 20 fungible token transfer implementation, if a client can no longer be updated, funds in that channel will be permanently locked and can no longer be transferred. To the degree that it is safe to do so, it would be preferable to provide users with a recovery mechanism which can be utilised in these exceptional cases.
### Exceptional cases
The state of concern is where a client associated with connection(s) and channel(s) can no longer be updated. This can happen for several reasons:
1. The chain which the client is following has halted and is no longer producing blocks/headers, so no updates can be made to the client
1. The chain which the client is following has continued to operate, but no relayer has submitted a new header within the unbonding period, and the client has expired
1. This could be due to real misbehaviour (intentional Byzantine behaviour) or merely a mistake by validators, but the client cannot distinguish these two cases
1. The chain which the client is following has experienced a misbehaviour event, and the client has been frozen & thus can no longer be updated
### Security model
Two-thirds of the validator set (the quorum for governance, module participation) can already sign arbitrary data, so allowing governance to manually force-update a client with a new header after a delay period does not substantially alter the security model.
## Decision
We elect not to deal with chains which have actually halted, which is necessarily Byzantine behaviour and in which case token recovery is not likely possible anyway (in-flight packets cannot be timed out, but the relative impact of that is minor).
1. Require Tendermint light clients (ICS 07) to be created with the following additional flags
1. `allow_update_after_expiry` (boolean, default true). Note that this flag has been deprecated, it remains to signal intent but checks against this value will not be enforced.
1. Require Tendermint light clients (ICS 07) to expose the following additional internal query functions
1. `Expired() boolean`, which returns whether or not the client has passed the trusting period since the last update (in which case no headers can be validated)
1. Require Tendermint light clients (ICS 07) & solo machine clients (ICS 06) to be created with the following additional flags
1. `allow_update_after_misbehaviour` (boolean, default true). Note that this flag has been deprecated, it remains to signal intent but checks against this value will not be enforced.
1. Require Tendermint light clients (ICS 07) to expose the following additional state mutation functions
1. `Unfreeze()`, which unfreezes a light client after misbehaviour and clears any frozen height previously set
1. Add a new governance proposal with `MsgRecoverClient`.
1. Create a new Msg with two client identifiers (`string`) and a signer.
1. The first client identifier is the proposed client to be updated. This client must be either frozen or expired.
1. The second client is a substitute client. It carries all the state for the client which may be updated. It must have identical client and chain parameters to the client which may be updated (except for latest height, frozen height, and chain-id). It should be continually updated during the voting period.
1. If this governance proposal passes, the client on trial will be updated to the latest state of the substitute.
1. The signer must be the authority set for the ibc module.
Previously, `AllowUpdateAfterExpiry` and `AllowUpdateAfterMisbehaviour` were used to signal the recovery options for an expired or frozen client, and governance proposals were not allowed to overwrite the client if these parameters were set to false. However, this has now been deprecated because a code migration can overwrite the client and consensus states regardless of the value of these parameters. If governance would vote to overwrite a client or consensus state, it is likely that governance would also be willing to perform a code migration to do the same.
In addition, `TrustingPeriod` was initially not allowed to be updated by a client upgrade proposal. However, given the number of situations experienced in production where the `TrustingPeriod` of a client should be allowed to be updated, e.g. because of an initial misconfiguration for a canonical channel, governance should be allowed to update this client parameter.
In versions older than ibc-go v8, `MsgRecoverClient` was a governance proposal type `ClientUpdateProposal`. It has been removed and replaced by `MsgRecoverClient` in the migration from governance v1beta1 to governance v1.
Note that this should NOT be lightly updated, as there may be a gap in time between when misbehaviour has occurred and when the evidence of misbehaviour is submitted. For example, if the `UnbondingPeriod` is 2 weeks and the `TrustingPeriod` has also been set to two weeks, a validator could wait until right before `UnbondingPeriod` finishes, submit false information, then unbond and exit without being slashed for misbehaviour. Therefore, we recommend that the trusting period for the 07-tendermint client be set to 2/3 of the `UnbondingPeriod`.
Note that clients frozen due to misbehaviour must wait for the evidence to expire to avoid becoming refrozen.
This ADR does not address planned upgrades, which are handled separately as per the [specification](https://github.com/cosmos/ibc/tree/master/spec/client/ics-007-tendermint-client#upgrades).
## Consequences
### Positive
- Establishes a mechanism for client recovery in the case of expiry
- Establishes a mechanism for client recovery in the case of misbehaviour
- Constructing a ClientUpdate proposal is as difficult as creating a new client
### Negative
- Additional complexity in client creation which must be understood by the user
- Copying the state of the substitute adds complexity
- Governance participants must vote on a substitute client
### Neutral
No neutral consequences.
## References
- [Prior discussion](https://github.com/cosmos/ibc/issues/421)
- [Epoch number discussion](https://github.com/cosmos/ibc/issues/439)
- [Upgrade plan discussion](https://github.com/cosmos/ibc/issues/445)
- [Migration from gov v1beta1 to gov v1](https://github.com/cosmos/ibc-go/issues/3672)

# ADR 27: Add support for Wasm based light client
## Changelog
- 26/11/2020: Initial Draft
- 26/05/2023: Update after 02-client refactor and re-implementation by Strangelove
- 13/12/2023: Update after upstreaming of module to ibc-go
## Status
*Accepted and applied in v0.1.0 of 08-wasm*
## Abstract
In the Cosmos SDK, light clients are currently hardcoded in Go. This makes upgrading existing IBC light clients or
adding support for new light clients a multi-step process involving on-chain governance, which is time-consuming.
To remedy this, we are proposing a Wasm VM to host light client bytecode, which allows easier upgrading of
existing IBC light clients as well as adding support for new IBC light clients without requiring a code release and
corresponding hard-fork event.
## Context
Currently in ibc-go light clients are defined as part of the codebase and are implemented as modules under
`modules/light-clients`. Adding support for new light clients or updating an existing light client in the event
of a security issue or consensus update is a multi-step process which is both time-consuming and error-prone.
In order to enable a new IBC light client implementation it is necessary to modify the codebase of ibc-go (if the light
client is part of its codebase), re-build chains' binaries, pass a governance proposal, and have validators upgrade their nodes.
Another problem stemming from the above process is that if a chain wants to upgrade its own consensus, it will
need to convince every chain or hub connected to it to upgrade its light client in order to stay connected. Due
to the time-consuming process required to upgrade a light client, a chain with lots of connections needs to be
disconnected for quite some time after upgrading its consensus, which can be very expensive in terms of time and effort.
We are proposing simplifying this workflow by integrating a Wasm light client module that makes adding support for
new light clients a simple governance-gated transaction. The light client bytecode, written in Wasm-compilable Rust,
runs inside a Wasm VM. The Wasm light client submodule exposes a proxy light client interface that routes incoming
messages to the appropriate handler function, inside the Wasm VM for execution.
With the Wasm light client module, anybody can add a new IBC light client in the form of Wasm bytecode (provided they are
able to submit the governance proposal transaction and that it passes) as well as instantiate clients of any created
client type. This allows any chain to update its own light client in other chains without going through the steps outlined above.
## Decision
We decided to implement the Wasm light client module as a light client proxy that will interface with the actual light client
uploaded as Wasm bytecode. To enable usage of the Wasm light client module, users need to add it to the list of allowed clients
by updating the `AllowedClients` parameter in the 02-client submodule of core IBC.
```go
params := clientKeeper.GetParams(ctx)
params.AllowedClients = append(params.AllowedClients, exported.Wasm)
clientKeeper.SetParams(ctx, params)
```
Adding a new light client contract is governance-gated. To upload a new light client users need to submit
a [governance v1 proposal](https://docs.cosmos.network/main/modules/gov#proposals) that contains the `sdk.Msg` for storing
the Wasm contract's bytecode. The required message is `MsgStoreCode` and the bytecode is provided in the field `wasm_byte_code`:
```proto
// MsgStoreCode defines the request type for the StoreCode rpc.
message MsgStoreCode {
  // signer address
  string signer = 1;
  // wasm byte code of light client contract. It can be raw or gzip compressed
  bytes wasm_byte_code = 2;
}
```
The RPC handler processing `MsgStoreCode` will make sure that the signer of the message matches the address of authority allowed to
submit this message (which is normally the address of the governance module).
```go
// StoreCode defines an RPC handler method for MsgStoreCode
func (k Keeper) StoreCode(goCtx context.Context, msg *types.MsgStoreCode) (*types.MsgStoreCodeResponse, error) {
	if k.GetAuthority() != msg.Signer {
		return nil, errorsmod.Wrapf(ibcerrors.ErrUnauthorized, "expected %s, got %s", k.GetAuthority(), msg.Signer)
	}

	ctx := sdk.UnwrapSDKContext(goCtx)
	checksum, err := k.storeWasmCode(ctx, msg.WasmByteCode, ibcwasm.GetVM().StoreCode)
	if err != nil {
		return nil, errorsmod.Wrap(err, "failed to store wasm bytecode")
	}

	emitStoreWasmCodeEvent(ctx, checksum)

	return &types.MsgStoreCodeResponse{
		Checksum: checksum,
	}, nil
}
```
The contract's bytecode is not stored in state (it is actually unnecessary and wasteful to store it, since
the Wasm VM already stores it and it can be queried back if needed). The checksum is simply the hash of the contract's
bytecode, and it is stored in state in an entry with key `checksums` that contains the checksums of all bytecodes that have been stored.
### How does the light client proxy work?
The light client proxy behind the scenes will call a CosmWasm smart contract instance with incoming arguments serialized
in JSON format with appropriate environment information. Data returned by the smart contract is deserialized and
returned to the caller.
Consider the example of the `VerifyClientMessage` function of `ClientState` interface. Incoming arguments are
packaged inside a payload object that is then JSON serialized and passed to `queryContract`, which executes `WasmVm.Query`
and returns the slice of bytes returned by the smart contract. This data is deserialized and passed as return argument.
```go
type QueryMsg struct {
	Status               *StatusMsg               `json:"status,omitempty"`
	ExportMetadata       *ExportMetadataMsg       `json:"export_metadata,omitempty"`
	TimestampAtHeight    *TimestampAtHeightMsg    `json:"timestamp_at_height,omitempty"`
	VerifyClientMessage  *VerifyClientMessageMsg  `json:"verify_client_message,omitempty"`
	CheckForMisbehaviour *CheckForMisbehaviourMsg `json:"check_for_misbehaviour,omitempty"`
}

type VerifyClientMessageMsg struct {
	ClientMessage *ClientMessage `json:"client_message"`
}
// VerifyClientMessage must verify a ClientMessage.
// A ClientMessage could be a Header, Misbehaviour, or batch update.
// It must handle each type of ClientMessage appropriately.
// Calls to CheckForMisbehaviour, UpdateState, and UpdateStateOnMisbehaviour
// will assume that the content of the ClientMessage has been verified
// and can be trusted. An error should be returned
// if the ClientMessage fails to verify.
func (cs ClientState) VerifyClientMessage(
	ctx sdk.Context,
	_ codec.BinaryCodec,
	clientStore storetypes.KVStore,
	clientMsg exported.ClientMessage,
) error {
	clientMessage, ok := clientMsg.(*ClientMessage)
	if !ok {
		return errorsmod.Wrapf(ibcerrors.ErrInvalidType, "expected type: %T, got: %T", &ClientMessage{}, clientMsg)
	}

	payload := QueryMsg{
		VerifyClientMessage: &VerifyClientMessageMsg{ClientMessage: clientMessage.Data},
	}

	_, err := wasmQuery[EmptyResult](ctx, clientStore, &cs, payload)
	return err
}
```
### Global Wasm VM variable
The 08-wasm keeper structure keeps a reference to the Wasm VM instantiated in the keeper constructor function. The keeper uses
the Wasm VM to store the bytecode of light client contracts. However, the Wasm VM is also needed in the 08-wasm implementations of
some of the `ClientState` interface functions to initialise a contract, execute calls on the contract and query the contract. Since
the `ClientState` functions do not have access to the 08-wasm keeper, then it has been decided to keep a global pointer variable that
points to the same instance as the one in the 08-wasm keeper. This global pointer variable is then used in the implementations of
the `ClientState` functions.
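The global-pointer arrangement is the ordinary package-level variable idiom in Go. A reduced sketch of the pattern follows; the real `ibcwasm` package interface is richer than this:

```go
package main

import "fmt"

// WasmEngine stands in for the VM interface that both the keeper and the
// ClientState methods need.
type WasmEngine interface {
	Query(payload []byte) ([]byte, error)
}

// vm is the package-level pointer shared between the keeper constructor
// and the ClientState implementations.
var vm WasmEngine

// SetVM is called once by the keeper constructor at startup.
func SetVM(e WasmEngine) { vm = e }

// GetVM is called from ClientState methods, which have no keeper access.
func GetVM() WasmEngine { return vm }

// echoVM is a trivial engine used only to exercise the pattern.
type echoVM struct{}

func (echoVM) Query(p []byte) ([]byte, error) { return p, nil }

func main() {
	SetVM(echoVM{})
	out, _ := GetVM().Query([]byte("status"))
	fmt.Println(string(out)) // status
}
```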
## Consequences
### Positive
- Adding support for a new light client or upgrading an existing one is much easier than before and only requires a single transaction instead of a hard fork.
- Improves maintainability of ibc-go, since no change to the codebase is required to support a new client or upgrade an existing one.
- Enables the use of Rust dependencies in light clients that may not exist in Go.
### Negative
- Light clients written in Rust need to be written in a subset of Rust that can compile to Wasm.
- Introspecting light client code is difficult, as only the compiled bytecode exists on the blockchain.

# ADR \{ADR-NUMBER\}: \{TITLE\}
## Changelog
- \{date\}: \{changelog\}
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
\{Deprecated|Proposed|Accepted\}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high-level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
It should also describe effects / corollary items that may need to be changed as a part of this.
If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
(e.g. the optimal split of things to do between separate PR's)
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- \{reference link\}


docs/babel.config.js
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

docs/client/config.json Normal file

@ -0,0 +1,85 @@
{
"swagger": "2.0",
"info": {
"title": "IBC-GO - gRPC Gateway docs",
"description": "A REST interface for state queries",
"version": "1.0.0"
},
"apis": [
{
"url": "./tmp-swagger-gen/ibc/applications/transfer/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "TransferParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/applications/interchain_accounts/controller/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "InterchainAccountsControllerParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/applications/interchain_accounts/host/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "InterchainAccountsHostParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/core/client/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "ClientParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/core/connection/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "ConnectionParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/core/channel/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "ChannelParams"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/core/channel/v2/query.swagger.json",
"operationIds": {
"rename": {
"Params": "ChannelV2Params",
"Channel": "ChannelV2",
"ChannelClientState": "ChannelClientStateV2",
"ChannelConsensusState": "ChannelConsensusStateV2",
"NextSequenceSend": "NextSequenceSendV2",
"PacketAcknowledgement": "PacketAcknowledgementV2",
"PacketAcknowledgements": "PacketAcknowledgementsV2",
"PacketCommitment": "PacketCommitmentV2",
"PacketCommitments": "PacketCommitmentsV2",
"PacketReceipt": "PacketReceiptV2",
"UnreceivedAcks": "UnreceivedAcksV2",
        "UnreceivedPackets": "UnreceivedPacketsV2"
}
}
},
{
"url": "./tmp-swagger-gen/ibc/lightclients/wasm/v1/query.swagger.json",
"operationIds": {
"rename": {
"Params": "WasmParams"
}
}
}
]
}

File diff suppressed because it is too large


@ -0,0 +1,60 @@
# Development setup
## Dependencies
We use [Go Modules](https://go.dev/wiki/Modules) to manage dependency versions.
The main branch of every Cosmos repository should build with `go get`, which means it should be kept up to date with its dependencies so that users can simply `go get` our software.
Since some dependencies are not under our control, a third party may break our build, in which case we can fall back on `go mod tidy -v`.
Other helpful commands:
- `go get` to add a new Go module (including when an existing Go module is bumped to a new major version, e.g. `my/module/v1` -> `my/module/v2`).
- `go get -u` to update an existing dependency.
- `go mod tidy` to update dependencies in `go.sum`.
## Protobuf
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [buf](https://docs.buf.build/introduction) and [gogoproto](https://github.com/gogo/protobuf) to generate code for use in ibc-go.
For deterministic behavior around protobuf tooling, everything is containerized using Docker. Make sure to have Docker installed on your machine, or head to [Docker's website](https://docs.docker.com/get-docker/) to install it.
For formatting code in `.proto` files, you can run the `make proto-format` command.
For linting and checking breaking changes, we also use [buf](https://buf.build/). You can use the commands `make proto-lint` and `make proto-check-breaking` to respectively lint your proto files and check for breaking changes.
To generate the protobuf stubs, you can run `make proto-gen`.
We also added the `make proto-all` command to run the above commands (`proto-format`, `proto-lint` and `proto-gen`) sequentially.
To update third-party protobuf dependencies, you can run `make proto-update-deps`. This requires `buf` to be installed in the local development environment (see [`buf`'s installation documentation](https://docs.buf.build/installation) for more details).
For generating or updating the swagger file that documents the URLs of the RESTful API that exposes the gRPC endpoints over HTTP, you can run the `proto-swagger-gen` command.
It reads protobuf service definitions and generates a reverse-proxy server which translates a RESTful HTTP API into gRPC.
## Developing and testing
- The latest state of development is on `main`.
- Build the `simd` test chain binary with `make build`.
- `main` must never fail `make test`.
- No `--force` onto `main` (except when reverting a broken commit, which should seldom happen).
- Create a development branch either on `github.com/cosmos/ibc-go`, or your fork (using `git remote add fork`).
- Before submitting a pull request, begin `git rebase` on top of `main`.
- Ensure you are using the pre-commit hooks by running `make setup-pre-commit`.
All Go tests in ibc-go can be run with `make test`.
Please make sure to run `make format` before every commit; the easiest way to do this is to have your editor run it for you upon saving a file. Additionally, please ensure that your code is lint compliant by running `make lint-fix` (requires `golangci-lint`).
When testing a function under a variety of different inputs, we prefer to use [table driven tests](https://github.com/golang/go/wiki/TableDrivenTests).
All unit tests should use the testing package. Please see the testing package [README](../../testing/README.md) for more information.
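A minimal sketch of that table-driven style, using a hypothetical `clampPositive` helper purely for illustration (real ibc-go tests run each case under `t.Run` or the testify suite's `Run` with keeper methods instead):

```go
package main

import "fmt"

// clampPositive is a hypothetical helper used only to illustrate the pattern.
func clampPositive(n int) int {
	if n < 0 {
		return 0
	}
	return n
}

func main() {
	// each case gets a name, inputs, and the expected result, exactly as in
	// a func TestXxx(t *testing.T) body where the loop runs under t.Run(tc.name, ...)
	testCases := []struct {
		name     string
		input    int
		expected int
	}{
		{"negative input is clamped", -5, 0},
		{"zero passes through", 0, 0},
		{"positive input passes through", 7, 7},
	}

	for _, tc := range testCases {
		if got := clampPositive(tc.input); got != tc.expected {
			fmt.Printf("case %q failed: got %d, expected %d\n", tc.name, got, tc.expected)
			return
		}
	}
	fmt.Println("all cases passed")
}
```

Adding a new case is then a one-line change, which is why this layout scales well for tests with many input permutations.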
## Documentation
- If you open a PR on ibc-go, it is mandatory to update the relevant documentation in `/docs`.
- We lint the markdown files for documentation with [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli). Please run `make docs-lint` before pushing changes in the markdown files (you will need to have `markdownlint-cli` installed, so please follow the [installation instructions](https://github.com/igorshubovych/markdownlint-cli#installation)).
- Generate the folder `docs/.vuepress/dist` with all the static files for the documentation site with `make build-docs`.
- Run the documentation site locally with `make view-docs`.

docs/dev/go-style-guide.md Normal file

@ -0,0 +1,119 @@
# Go style guide
In order to keep our code looking good with lots of programmers working on it, it helps to have a "style guide", so all the code generally looks quite similar. This doesn't mean there is only one "right way" to write code, or even that this standard is better than your style. But if we agree to a number of stylistic practices, it makes it much easier to read and modify new code. Please feel free to make suggestions if there's something you would like to add or modify.
We expect all contributors to be familiar with [Effective Go](https://golang.org/doc/effective_go.html) (it is recommended reading for all Go programmers anyway). Additionally, we generally agree with the suggestions in [Uber's style guide](https://github.com/uber-go/guide/blob/master/style.md) and use that as a starting point.
## Code Structure
Perhaps more key for code readability than good commenting is having the right structure. As a rule of thumb, try to write in a logical order of importance, taking a little time to think how to order and divide the code such that someone could scroll down and understand the functionality of it just as well as you do. A loose example of such order would be:
- Constants, global and package-level variables.
- Main struct definition.
- Options (only if they are seen as critical to the struct else they should be placed in another file).
- Initialization/start and stop of the service functions.
- Public functions (in order of importance).
- Private/helper functions.
- Auxiliary structs and functions (these can also go above the private functions or in a separate file).
## General
- Use `gofumpt` to format all code upon saving it (or run `make format`).
- Think about documentation, and try to leave godoc comments, when it will help new developers.
- Every package should have a high-level `doc.go` file that describes the purpose of the package, its main functions, and any other relevant information.
- Applications (e.g. CLIs/servers) should panic on unexpected unrecoverable errors and print a stack trace.
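For example, a `doc.go` might look like this (contents illustrative, not copied from an actual ibc-go package):

```go
/*
Package transfer implements the ICS-20 fungible token transfer application.

It exposes a keeper for sending and receiving transfer packets, the
IBCModule callbacks wired into core IBC, and the query services.
*/
package transfer
```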
## Comments
- Use a space after the comment delimiter (e.g. `// your comment`).
- Many comments are not sentences; these should begin with a lowercase letter and end without a period.
- Conversely, sentences in comments should be sentence-cased and end with a period.
- Comments should explain *why* something is being done rather than *what* the code is doing. For example:
The comments in
```go
// assign a variable foo
f := foo
// assign f to b
b := f
```
have little value, but the following is more useful:
```go
f := foo
// we copy the variable f because we want to preserve the state at time of initialization
b := f
```
## Linting
- Run `make lint-fix` to fix any linting errors.
## Various
- Functions that return functions should have the suffix `Fn`.
- Names should not [stutter](https://blog.golang.org/package-names). For example, a struct generally shouldn't have a field named after itself; e.g., this shouldn't occur:
```go
type middleware struct {
middleware Middleware
}
```
- Acronyms are fully capitalized: "RPC", "gRPC", "API"; use "MyID" rather than "MyId".
- Whenever it is safe to use Go's built-in `error` instantiation functions (as opposed to Cosmos SDK's error instantiation functions), prefer `errors.New()` instead of `fmt.Errorf()` unless you're actually using the format feature with arguments.
## Importing libraries
- Use [goimports](https://godoc.org/golang.org/x/tools/cmd/goimports).
- Separate imports into blocks. For example:
```go
import (
// standard library imports
"fmt"
"testing"
// external library imports
"github.com/stretchr/testify/require"
// Cosmos-SDK imports
abci "github.com/cometbft/cometbft/abci/types"
// ibc-go library imports
"github.com/cosmos/ibc-go/modules/core/23-commitment/types"
)
```
Run `make lint-fix` to get the imports ordered and grouped automatically.
## Dependencies
- Dependencies should be pinned by a release tag, or specific commit, to avoid breaking `go get` when external dependencies are updated.
- Refer to the [contributing](./development-setup.md#dependencies) document for more details.
## Testing
- Make use of table driven testing where possible and not-cumbersome. Read [this blog post](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go) for more information. See the [tests](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/transfer/keeper/msg_server_test.go#L11) for [`Transfer`](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/transfer/keeper/msg_server.go#L15) for an example.
- Make use of Testify [assert](https://godoc.org/github.com/stretchr/testify/assert) and [require](https://godoc.org/github.com/stretchr/testify/require).
- When using mocks, it is recommended to use Testify [mock](https://pkg.go.dev/github.com/stretchr/testify/mock) along with [Mockery](https://github.com/vektra/mockery) for autogeneration.
## Errors
- Ensure that errors are concise, clear and traceable.
- Depending on the context, use either `cosmossdk.io/errors` or `stdlib` error packages.
- For wrapping errors, use `fmt.Errorf()` with `%w`.
- Panic is appropriate when an internal invariant of a system is broken, while all other cases (in particular, incorrect or invalid usage) should return errors.
- Error messages should be formatted as follows:
```go
sdkerrors.Wrapf(
<most specific error type possible>,
"<optional text description ended by colon and space>expected %s, got %s",
<value 1>,
<value 2>
)
```


@ -0,0 +1,48 @@
# Project structure
If you're not familiar with the overall module structure from the SDK modules, please check this [document](https://docs.cosmos.network/main/build/building-modules/intro) as prerequisite reading.
Every Interchain Standard (ICS) has been developed in its own package. The development team separated the IBC TAO (Transport, Authentication, Ordering) ICS specifications from the IBC application level specification. The following sections describe the architecture of the most relevant directories that comprise this repository.
## `modules`
This folder contains implementations for the IBC TAO (`core`), IBC applications (`apps`) and light clients (`light-clients`).
### `core`
- `02-client`: This package is an implementation for Cosmos SDK-based chains of [ICS 02](https://github.com/cosmos/ibc/tree/main/spec/core/ics-002-client-semantics). This implementation defines the types and methods needed to operate light clients tracking other chain's consensus state.
- `03-connection`: This package is an implementation for Cosmos SDK-based chains of [ICS 03](https://github.com/cosmos/ibc/tree/main/spec/core/ics-003-connection-semantics). This implementation defines the types and methods necessary to perform connection handshake between two chains.
- `04-channel`: This package is an implementation for Cosmos SDK-based chains of [ICS 04](https://github.com/cosmos/ibc/tree/main/spec/core/ics-004-channel-and-packet-semantics). This implementation defines the types and methods necessary to perform channel handshake between two chains and ensure correct packet sending flow.
- `05-port`: This package is an implementation for Cosmos SDK-based chains of [ICS 05](https://github.com/cosmos/ibc/tree/main/spec/core/ics-005-port-allocation). This implements the port allocation system by which modules can bind to uniquely named ports.
- `23-commitment`: This package is an implementation for Cosmos SDK-based chains of [ICS 23](https://github.com/cosmos/ibc/tree/main/spec/core/ics-023-vector-commitments). This implementation defines the functions required to prove inclusion or non-inclusion of particular values at particular paths in state.
- `24-host`: This package is an implementation for Cosmos SDK-based chains of [ICS 24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements).
### `apps`
- `transfer`: This is the Cosmos SDK implementation of the [ICS 20](https://github.com/cosmos/ibc/tree/main/spec/app/ics-020-fungible-token-transfer) protocol, which enables cross-chain fungible token transfers. For more information, read the [module's docs](../docs/02-apps/01-transfer/01-overview.md).
- `27-interchain-accounts`: This is the Cosmos SDK implementation of the [ICS 27](https://github.com/cosmos/ibc/tree/main/spec/app/ics-027-interchain-accounts) protocol, which enables cross-chain account management built upon IBC. For more information, read the [module's documentation](../docs/02-apps/02-interchain-accounts/01-overview.md).
- `callbacks`: This is an implementation of [ADR 008](../architecture/adr-008-app-caller-cbs.md) that allows for secondary applications (e.g. smart contracts, modules) to call into IBC apps as part of their state machine logic and then do some actions on packet lifecycle events. For more information, read the [module's documentation](../docs/04-middleware/01-callbacks/01-overview.md).
### `light-clients`
- `06-solomachine`: This package implements the types for the Solo Machine light client specified in [ICS 06](https://github.com/cosmos/ibc/tree/main/spec/client/ics-006-solo-machine-client).
- `07-tendermint`: This package implements the types for the Tendermint consensus light client as specified in [ICS 07](https://github.com/cosmos/ibc/tree/main/spec/client/ics-007-tendermint-client).
- `08-wasm`: This package implements a proxy light client module that routes requests to the actual light clients uploaded as Wasm byte code, as specified in [ICS 08](https://github.com/cosmos/ibc/tree/main/spec/client/ics-008-wasm-client).
- `09-localhost`: This package implements a localhost loopback client with the ability to send and receive IBC packets to and from the same state machine, as specified in [ICS 09](https://github.com/cosmos/ibc/tree/main/spec/client/ics-009-loopback-cilent).
## `proto`
This folder contains all the Protobuf files used for
- common message type definitions,
- message type definitions related to genesis state,
- `Query` service and related message type definitions,
- `Msg` service and related message type definitions.
## `testing`
This package contains the implementation of the testing package used in unit and integration tests. Please read the [package's documentation](../../testing/README.md) for more information.
## `e2e`
This folder contains all the e2e tests of ibc-go. Please read the [module's documentation](../../e2e/README.md) for more information.

docs/dev/pull-requests.md Normal file

@ -0,0 +1,68 @@
# Pull request guidelines
> To accommodate the review process we suggest that PRs are categorically broken up. Ideally each PR addresses only a single issue and does not introduce unrelated changes. Additionally, as much as possible code refactoring and cleanup should be submitted as separate PRs from bug fixes and feature additions.
If the PR is the result of a related GitHub issue, please include `closes: #<issue number>` in the PR's description in order to auto-close the related issue once the PR is merged. This will also link the issue and the PR together so that if anyone looks at either in the future, they won't have any problem trying to find the corresponding issue/PR as it will be recorded in the sidebar.
If the PR is not the result of an existing issue and it fixes a bug, please provide a detailed description of the bug. For feature additions, we recommend opening an issue first and having it discussed and agreed upon before working on it and opening a PR.
If possible, [tick the "Allow edits from maintainers" box](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) when opening your PR from your fork of ibc-go. This allows us to directly make minor edits / refactors and speeds up the merging process.
If you open a PR on ibc-go, it is mandatory to update the relevant documentation in `/docs`.
## Pull request targeting
Ensure that you base and target your PR on either the `main` branch or the corresponding feature branch where a large body of work is being implemented. Please make sure that the PR is made from a branch different from either `main` or the corresponding feature branch.
All development should be then targeted against `main` or the feature branch. Bug fixes which are required for outstanding releases should be backported if the CODEOWNERS decide it is applicable.
## Commit Messages
Commit messages should follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/).
When opening a PR, include the proposed commit message in the PR description.
The commit message type should be one of:
- `feat` / `feature` for feature work.
- `bug` / `fix` for bug fixes.
- `imp` / `improvements` for improvements.
- `doc` / `docs` / `documentation` for any documentation changes.
- `test` / `e2e` for addition or improvements of unit, integration and e2e tests or their corresponding infrastructure.
- `deprecated` for deprecation changes.
- `deps` / `build` for changes to dependencies.
- `chore` / `misc` / `nit` for any miscellaneous changes that don't fit into another category.
**Note**: If any change is breaking, the following format must be used:
- `type` + `(api)!` for api breaking changes, e.g. `fix(api)!: api breaking fix`
- `type` + `(statemachine)!` for state machine breaking changes, e.g. `fix(statemachine)!: state machine breaking fix`
**`api` breaking changes take precedence over `statemachine` breaking changes.**
## Pull request review process
All PRs require an approval from at least one CODEOWNER before merge. PRs which cause significant changes require two approvals from CODEOWNERS. When reviewing PRs please use the following review guidelines:
- `Approval` through the GitHub UI with the following comments:
    - `Concept ACK` means that you agree with the overall proposed concept, but have neither reviewed the code nor tested it.
    - `LGTM` means the above, and in addition that you have superficially reviewed the code without considering how the logic affects other parts of the codebase.
    - `utACK` (aka `Untested ACK`) means the above, and in addition that you have thoroughly reviewed the code and considered the safety of the logic changes, but have not tested it.
    - `Tested ACK` means the above, and in addition that you have tested the code.
- If you are only making "surface level" reviews, submit any notes as `Comments` without submitting an approval.
A thorough review means that:
- You understand the code and make sure that documentation is updated in the right places.
- You must also think through anything which ought to be included but is not.
- You must think through whether any added code could be partially combined (DRYed) with existing code.
- You must think through any potential security issues or incentive-compatibility flaws introduced by the changes.
- Naming must be consistent with conventions and the rest of the codebase.
- Code must live in a reasonable location, considering dependency structures (e.g. not importing testing modules in production code, or including example code modules in production code).
## Pull request merge procedure
- Ensure pull request branch is rebased on target branch.
- Ensure all GitHub requirements pass.
- Set the changelog entry in the commit message for the pull request.
- Squash and merge pull request.


@ -0,0 +1,86 @@
# Tagging a release
Before tagging a new release, please run the [compatibility e2e test suite](https://github.com/cosmos/ibc-go/actions/workflows/e2e-compatibility.yaml) for the corresponding release line.
## New major release branch
Pre-requisites for creating a release branch for a new major version:
1. Bump [Go package version](https://github.com/cosmos/ibc-go/blob/main/go.mod#L3).
2. Change all imports. For example: if the next major version is `v3`, then change all imports starting with `github.com/cosmos/ibc-go/v2` to `github.com/cosmos/ibc-go/v3`.
Once the above pre-requisites are satisfied:
1. Start on `main`.
2. Create the release branch (`release/vX.Y.x`). For example: `release/v3.0.x`.
## New minor release branch
1. Start on the latest release branch in the same major release line. For example: the latest release branch in the `v3` release line is `v3.2.x`.
2. Create branch from the release branch. For example: create branch `release/v3.3.x` from `v3.2.x`.
Post-requisites for both new major and minor release branches:
1. Add branch protection rules to new release branch.
2. Add backport task to [`mergify.yml`](https://github.com/cosmos/ibc-go/blob/main/.github/mergify.yml).
3. Create a label for the backport (e.g. `backport-to-v3.0.x`).
## Point release procedure
In order to alleviate the burden on a single person to cherry-pick and handle merge conflicts for all desired backport PRs to a point release, we instead maintain a living backport branch, into which all desired features and bug fixes are merged as separate PRs.
### Example
Current release is `v1.0.2`. We then maintain a (living) branch `release/v1.0.x`, where `x` stands for the next patch release number (currently `v1.0.3`) for the `v1.0` release series. As bugs are fixed and PRs are merged into `main`, if a contributor wishes the PR to be released in the `v1.0.x` point release, the contributor must:
1. Add the `backport-to-v1.0.x` label to the PR.
2. Once the PR is merged, the Mergify GitHub application will automatically copy the changes into another branch and open a new PR against the desired `release/v1.0.x` branch.
3. If the following has not been discussed in the original PR, then update the backport PR's description and ensure it contains the following information:
- **[Impact]** explanation of how the bug affects users or developers.
- **[Test Case]** section with detailed instructions on how to reproduce the bug.
- **[Regression Potential]** section with a discussion how regressions are most likely to manifest, or might manifest even if it's unlikely, as a result of the change. **It is assumed that any backport PR is well-tested before it is merged in and has an overall low risk of regression**. This section should discuss the potential for state breaking changes to occur such as through out-of-gas errors.
It is the PR author's responsibility to fix merge conflicts, update changelog entries, and ensure CI passes. If a PR originates from an external contributor, it may be a core team member's responsibility to perform this process instead of the original author. Lastly, it is the core team's responsibility to ensure that the PR meets all the backport criteria.
Finally, when a point release is ready to be made:
1. Checkout the release branch (e.g. `release/v1.0.x`).
2. In `CHANGELOG.md`:
- Ensure changelog entries are verified.
- Remove any sections of the changelog that do not have any entries (e.g. if the release does not have any bug fixes, then remove the section).
- Remove the `[Unreleased]` title.
- Add release version and date of release.
3. Create release in GitHub:
- Select the correct target branch (e.g. `release/v1.0.x`).
- Choose a tag (e.g. `v1.0.3`).
- Write release notes.
- Check the `This is a pre-release` checkbox if needed (this applies for alpha, beta and release candidates).
### Post-release procedure
- Update [`CHANGELOG.md`](../../CHANGELOG.md) in `main` (remove from the `[Unreleased]` section any items that are part of the release).
- Put back the `[Unreleased]` section in the release branch (e.g. `release/v1.0.x`) with clean sections for each of the types of changelog entries, so that entries will be added for the PRs that are backported for the next release.
- Update [version matrix](../../RELEASES.md#version-matrix) in `RELEASES.md`: add the new release and remove any tags that might not be recommended anymore.
Additionally, for the first point release of a new major or minor release branch:
- Update the table of supported release lines (and End of Life dates) in [`RELEASES.md`](../../RELEASES.md): add the new release line and remove any release lines that might have become discontinued.
- Update the [list of supported release lines in README.md](../../RELEASES.md#releases), if necessary.
- Update the manual [e2e `simd`](https://github.com/cosmos/ibc-go/blob/main/.github/workflows/e2e-manual-simd.yaml) test workflow:
- Remove any tags that might not be recommended anymore.
- Update docs site:
    - If the release is occurring on the main branch, on the latest version, then run `npm run docusaurus docs:version vX.Y.Z` in the `docs/` directory (where `X.Y.Z` is the new version number).
- If the release is occurring on an older release branch, then make a PR to the main branch called `docs: new release vX.Y.Z` doing the following:
    - Update the content of the docs found in `docs/versioned_docs/version-vx.y.z` if needed (where `x.y.z` is the previous version number).
    - Update the version number of the older release branch in:
- In `docs/versions.json`.
- Rename `docs/versioned_sidebars/version-vx.y.z-sidebars.json`
- Rename `docs/versioned_docs/version-vx.y.z`
- After changes to docs site are deployed, check [ibc.cosmos.network](https://ibc.cosmos.network) is updated.
- Open issue in [SDK tutorials repo](https://github.com/cosmos/sdk-tutorials) to update tutorials to the released version of ibc-go.
See [this PR](https://github.com/cosmos/ibc-go/pull/2919) for an example of the involved changes.

docs/docs/00-intro.md Normal file

@ -0,0 +1,42 @@
---
slug: /
sidebar_position: 0
---
# IBC-Go Documentation
Welcome to the documentation for IBC-Go, the Golang implementation of the Inter-Blockchain Communication Protocol!
The Inter-Blockchain Communication Protocol (IBC) is a protocol that allows blockchains to talk to each other. Chains that speak IBC can share any type of data as long as it's encoded in bytes, enabling the industry's most feature-rich cross-chain interactions. IBC can be used to build a wide range of cross-chain applications that include token transfers, atomic swaps, multi-chain smart contracts (with or without mutually comprehensible VMs), and cross-chain account control. IBC is secure and permissionless.
The protocol realizes this interoperability by specifying a set of data structures, abstractions, and semantics that can be implemented by any distributed ledger that satisfies a small set of requirements.
:::note Notice
Since ibc-go v10, there are two versions of the protocol in the same release: IBC classic and IBC v2. The protocols are separate: a connection uses either IBC classic or IBC v2.
:::
## High-level overview of IBC v2
For a high-level overview of IBC v2, please refer to [this blog post](https://ibcprotocol.dev/blog/ibc-v2-announcement). For a more detailed understanding of the IBC v2 protocol, please refer to the [IBC v2 protocol specification](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2).
If you are interested in using the canonical deployment of IBC v2, connecting Cosmos chains and Ethereum, take a look at the [IBC Eureka](https://docs.skip.build/go/eureka/eureka-overview) documentation to get started.
## High-level overview of IBC Classic
The following diagram shows how IBC works at a high level:
![Light Mode IBC Overview](./images/ibcoverview-light.svg#gh-light-mode-only)![Dark Mode IBC Overview](./images/ibcoverview-dark.svg#gh-dark-mode-only)
The transport layer (TAO) provides the necessary infrastructure to establish secure connections and authenticate data packets between chains. The application layer builds on top of the transport layer and defines exactly how data packets should be packaged and interpreted by the sending and receiving chains.
IBC provides a reliable, permissionless, and generic base layer (allowing for the secure relaying of data packets), while allowing for composability and modularity with separation of concerns by moving application designs (interpreting and acting upon the packet data) to a higher-level layer. This separation is reflected in the categories:
- **IBC/TAO** comprises the Transport, Authentication, and Ordering of packets, i.e. the infrastructure layer.
- **IBC/APP** consists of the application handlers for the data packets being passed over the transport layer. These include but are not limited to fungible token transfers (ICS-20), NFT transfers (ICS-721), and interchain accounts (ICS-27).
- **Application module:** groups any application, middleware or smart contract that may wrap downstream application handlers to provide enhanced functionality.
Note three crucial elements in the diagram:
- The chains depend on relayers to communicate. [Relayers](https://github.com/cosmos/ibc/blob/main/spec/relayer/ics-018-relayer-algorithms/README.md) are the "physical" connection layer of IBC: off-chain processes responsible for relaying data between two chains running the IBC protocol by scanning the state of each chain, constructing appropriate datagrams, and executing them on the opposite chain as is allowed by the protocol.
- Many relayers can serve one or more channels to send messages between the chains.
- Each side of the connection uses the light client of the other chain to quickly verify incoming messages.


@ -0,0 +1,293 @@
---
title: Overview
sidebar_label: Overview
sidebar_position: 1
slug: /ibc/overview
---
# Overview
:::note Synopsis
Learn about IBC, its components, and its use cases.
:::
## What is the Inter-Blockchain Communication Protocol (IBC)?
This document serves as a guide for developers who want to write their own Inter-Blockchain
Communication Protocol (IBC) applications for custom use cases.
> IBC applications must be written as self-contained modules.
Due to the modular design of the IBC Protocol, IBC
application developers do not need to be concerned with the low-level details of clients,
connections, and proof verification.
This brief explanation of the lower levels of the
stack gives application developers a broad understanding of the IBC
Protocol. Abstraction layer details for channels and ports are most relevant for application developers and describe how to define custom packets and `IBCModule` callbacks.
The requirements to have your module interact over IBC are:
- Bind to a port or ports.
- Define your packet data.
- Use the default acknowledgment struct provided by core IBC or optionally define a custom acknowledgment struct.
- Standardize an encoding of the packet data.
- Implement the `IBCModule` interface.
- Implement the `UpgradableModule` interface (optional).
Read on for a detailed explanation of how to write a self-contained IBC application module.
## Components overview
### [Clients](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client)
IBC clients are on-chain light clients. Each light client is identified by a unique client ID.
IBC clients track the consensus states of other blockchains, along with the proof spec necessary to
properly verify proofs against the client's consensus state. A client can be associated with any number
of connections to the counterparty chain. The client identifier is auto-generated from the client type
and the global client counter in the format: `{client-type}-{N}`.
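As a minimal sketch (a hypothetical helper, not the actual `02-client` code), the identifier format can be produced like so:

```go
import "fmt"

// FormatClientIdentifier builds an identifier such as "07-tendermint-0"
// from a client type and the global client counter.
func FormatClientIdentifier(clientType string, counter uint64) string {
  return fmt.Sprintf("%s-%d", clientType, counter)
}
```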
A `ClientState` should contain chain specific and light client specific information necessary for verifying updates
and upgrades to the IBC client. The `ClientState` may contain information such as chain ID, latest height, proof specs,
unbonding periods, or the status of the light client. The `ClientState` should not contain information that
is specific to a given block at a certain height; that is the function of the `ConsensusState`. Each `ConsensusState`
should be associated with a unique block and should be referenced using a height. IBC clients are given a
client identifier prefixed store to store their associated client state and consensus states along with
any metadata associated with the consensus states. Consensus states are stored using their associated height.
The supported IBC clients are:
- [Solo Machine light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/06-solomachine): Devices such as phones, browsers, or laptops.
- [Tendermint light client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/07-tendermint): The default for Cosmos SDK-based chains.
- [Wasm client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/08-wasm): Proxy client useful for running light clients written in a Wasm-compilable language.
- [Localhost (loopback) client](https://github.com/cosmos/ibc-go/blob/main/modules/light-clients/09-localhost): Useful for testing, simulation, and relaying packets to modules on the same application.
### IBC client heights
IBC Client Heights are represented by the struct:
```go
type Height struct {
  RevisionNumber uint64
  RevisionHeight uint64
}
```
The `RevisionNumber` represents the revision of the chain that the height is representing.
A revision typically represents a continuous, monotonically increasing range of block-heights.
The `RevisionHeight` represents the height of the chain within the given revision.
On any reset of the `RevisionHeight` (for example, when hard-forking a Tendermint chain),
the `RevisionNumber` is incremented. This allows IBC clients to distinguish between a
block height `n` of a previous revision of the chain (at revision `p`) and block-height `n` of the current
revision of the chain (at revision `e`).
`Height`s that share the same revision number can be compared by simply comparing their respective `RevisionHeight`s.
`Height`s that do not share the same revision number will only be compared using their respective `RevisionNumber`s.
Thus a height `h` with revision number `e+1` will always be greater than a height `g` with revision number `e`,
**REGARDLESS** of the difference in revision heights.
For example:
```go
Height{RevisionNumber: 3, RevisionHeight: 0} > Height{RevisionNumber: 2, RevisionHeight: 100000000000}
```
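The comparison rule above can be sketched as follows (a simplified version of the `GT` method on the concrete `Height` type in `02-client/types`; the struct is repeated here for completeness):

```go
type Height struct {
  RevisionNumber uint64
  RevisionHeight uint64
}

// GT returns true if h is strictly greater than other: revision numbers
// are compared first, and revision heights only break ties.
func (h Height) GT(other Height) bool {
  if h.RevisionNumber != other.RevisionNumber {
    return h.RevisionNumber > other.RevisionNumber
  }
  return h.RevisionHeight > other.RevisionHeight
}
```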
When a Tendermint chain is running a particular revision, relayers can simply submit headers and proofs with the revision number
given by the chain's `chainID`, and the revision height given by the Tendermint block height. When a chain updates using a hard-fork
and resets its block-height, it is responsible for updating its `chainID` to increment the revision number.
IBC Tendermint clients then verify the revision number against their `chainID` and treat the `RevisionHeight` as the Tendermint block height.
Tendermint chains wishing to use revisions to maintain persistent IBC connections even across height-resetting upgrades must format their `chainID`s
in the following manner: `{chainID}-{revision_number}`. On any height-resetting upgrade, the `chainID` **MUST** be updated with a higher revision number
than the previous value.
For example:
- Before upgrade `chainID`: `gaiamainnet-3`
- After upgrade `chainID`: `gaiamainnet-4`
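A simplified sketch of extracting the revision number from such a `chainID` (the actual parsing lives in the `07-tendermint` client's `types` package and is stricter about malformed input):

```go
import (
  "strconv"
  "strings"
)

// parseRevisionNumber returns the trailing revision number of a chainID
// formatted as {chainID}-{revision_number}, or 0 if none is present.
func parseRevisionNumber(chainID string) uint64 {
  i := strings.LastIndex(chainID, "-")
  if i < 0 {
    return 0
  }
  n, err := strconv.ParseUint(chainID[i+1:], 10, 64)
  if err != nil {
    return 0
  }
  return n
}
```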
Clients that do not require revisions, such as the `06-solomachine` client, can simply hardcode `0` into the revision number whenever they
need to return an IBC height when implementing IBC interfaces and use the `RevisionHeight` exclusively.
Other client types can implement their own logic to verify the IBC heights that relayers provide in their `Update`, `Misbehavior`, and
`Verify` functions respectively.
The IBC interfaces expect an `ibcexported.Height` interface; however, all clients must use the concrete implementation provided in
`02-client/types` and reproduced above.
### [Connections](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection)
Connections encapsulate two [`ConnectionEnd`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L17)
objects on two separate blockchains. Each `ConnectionEnd` is associated with a client of the
other blockchain (for example, the counterparty blockchain). The connection handshake is responsible
for verifying that the light clients on each chain are correct for their respective counterparties.
Connections, once established, are responsible for facilitating all cross-chain verifications of IBC state.
A connection can be associated with any number of channels.
The connection handshake is a 4-step handshake. Briefly, if a given chain A wants to open a connection with
chain B using already established light clients on both chains:
1. chain A sends a `ConnectionOpenInit` message to signal a connection initialization attempt with chain B.
2. chain B sends a `ConnectionOpenTry` message to try opening the connection on chain A.
3. chain A sends a `ConnectionOpenAck` message to mark its connection end state as open.
4. chain B sends a `ConnectionOpenConfirm` message to mark its connection end state as open.
#### Time delayed connections
Connections can be opened with a time delay by setting the `delay_period` field (in nanoseconds) in the [`MsgConnectionOpenInit`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/tx.proto#L45).
The time delay is used to require that the underlying light clients have been updated to a certain height before commitment verification can be performed.
`delayPeriod` is used in conjunction with the [`max_expected_time_per_block`](https://github.com/cosmos/ibc-go/blob/v8.0.0/proto/ibc/core/connection/v1/connection.proto#L113) parameter of the connection submodule to determine the `blockDelay`, which is the number of blocks by which the connection must be delayed.
When commitment verification is performed, the connection submodule will pass `delayPeriod` and `blockDelay` to the light client. It is up to the light client to determine whether the light client has been updated to the required height. Only the following light clients in `ibc-go` support time delayed connections:
- `07-tendermint`
- `08-wasm` (passed to the contract)
### [Proofs](https://github.com/cosmos/ibc-go/blob/main/modules/core/23-commitment) and [paths](https://github.com/cosmos/ibc-go/blob/main/modules/core/24-host)
In IBC, blockchains do not directly pass messages to each other over the network. Instead, to
communicate, a blockchain commits some state to a specifically defined path that is reserved for a
specific message type and a specific counterparty. For example, for storing a specific connectionEnd as part
of a handshake or a packet intended to be relayed to a module on the counterparty chain. A relayer
process monitors for updates to these paths and relays messages by submitting the data stored
under the path and a proof to the counterparty chain.
Proofs are passed from core IBC to light clients as bytes. It is up to light client implementations to interpret these bytes appropriately.
- The paths that all IBC implementations must use for committing IBC messages are defined in
[ICS-24 Host State Machine Requirements](https://github.com/cosmos/ibc/tree/master/spec/core/ics-024-host-requirements).
- The proof format that all implementations must be able to produce and verify is defined in the [ICS-23](https://github.com/cosmos/ics23) implementation.
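For example, packet commitments are stored under a standardized ICS-24 path. A sketch of building that path (mirroring the helpers in `24-host`):

```go
import "fmt"

// PacketCommitmentPath returns the ICS-24 store path under which a
// packet commitment is written for the given port, channel, and sequence.
func PacketCommitmentPath(portID, channelID string, sequence uint64) string {
  return fmt.Sprintf("commitments/ports/%s/channels/%s/sequences/%d", portID, channelID, sequence)
}
```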
### [Ports](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port)
An IBC module can bind to any number of ports. Each port must be identified by a unique `portID`.
Since IBC is designed to be secure with mutually distrusted modules operating on the same ledger,
binding a port returns a dynamic object capability. In order to take action on a particular port
(for example, an open channel with its port ID), a module must provide the dynamic object capability to the IBC
handler. This requirement prevents a malicious module from opening channels with ports it does not own. Thus,
IBC modules are responsible for claiming the capability that is returned on `BindPort`.
### [Channels](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
An IBC channel can be established between two IBC ports. Currently, a port is exclusively owned by a
single module. IBC packets are sent over channels. Just as IP packets contain the destination IP
address and IP port, and the source IP address and source IP port, IBC packets contain
the destination port ID and channel ID, and the source port ID and channel ID. This packet structure enables IBC to
correctly route packets to the destination module while allowing modules receiving packets to
know the sender module.
A channel can be `ORDERED`, where packets from a sending module must be processed by the
receiving module in the order they were sent. Or a channel can be `UNORDERED`, where packets
from a sending module are processed in the order they arrive (might be in a different order than they were sent).
Modules can choose which channels they wish to communicate over; thus, IBC expects modules to
implement callbacks that are called during the channel handshake. These callbacks can do custom
channel initialization logic. If any callback returns an error, the channel handshake fails. Thus, by
returning errors on callbacks, modules can programmatically reject and accept channels.
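For instance, a callback might reject any channel whose proposed version it does not support. A hypothetical version check (not an actual ibc-go handler) could look like:

```go
import "fmt"

// validateProposedVersion mimics the accept/reject decision made in an
// OnChanOpenInit-style callback: a non-nil error aborts the handshake.
func validateProposedVersion(proposed, supported string) error {
  if proposed != "" && proposed != supported {
    return fmt.Errorf("invalid version: expected %s, got %s", supported, proposed)
  }
  return nil
}
```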
The channel handshake is a 4-step handshake. Briefly, if a given chain A wants to open a channel with
chain B using an already established connection:
1. chain A sends a `ChanOpenInit` message to signal a channel initialization attempt with chain B.
2. chain B sends a `ChanOpenTry` message to try opening the channel on chain A.
3. chain A sends a `ChanOpenAck` message to mark its channel end status as open.
4. chain B sends a `ChanOpenConfirm` message to mark its channel end status as open.
If all handshake steps are successful, the channel is opened on both sides. At each step in the handshake, the module
associated with the `ChannelEnd` executes its callback. So
on `ChanOpenInit`, the module on chain A executes its callback `OnChanOpenInit`.
The channel identifier is auto derived in the format: `channel-{N}` where `N` is the next sequence to be used.
#### Closing channels
Closing a channel occurs in 2 handshake steps as defined in [ICS 04](https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics).
Once a channel is closed, it cannot be reopened. The channel handshake steps are:
**`ChanCloseInit`** closes a channel on the executing chain if
- the channel exists and it is not already closed,
- the connection it exists upon is `OPEN`,
- the [IBC module callback `OnChanCloseInit`](./03-apps/02-ibcmodule.md#channel-closing-callbacks) returns `nil`.
`ChanCloseInit` can be initiated by any user by submitting a `MsgChannelCloseInit` transaction.
Note that channels are automatically closed when a packet times out on an `ORDERED` channel.
A timeout on an `ORDERED` channel skips the `ChanCloseInit` step and immediately closes the channel.
**`ChanCloseConfirm`** is a response to a counterparty channel executing `ChanCloseInit`. The channel
on the executing chain closes if
- the channel exists and is not already closed,
- the connection the channel exists upon is `OPEN`,
- the executing chain successfully verifies that the counterparty channel has been closed
- the [IBC module callback `OnChanCloseConfirm`](./03-apps/02-ibcmodule.md#channel-closing-callbacks) returns `nil`.
Currently, none of the IBC applications provided in ibc-go support `ChanCloseInit`.
### [Packets](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
Modules communicate with each other by sending packets over IBC channels. All
IBC packets contain the destination `portID` and `channelID` along with the source `portID` and
`channelID`. This packet structure allows modules to know the sender module of a given packet. IBC packets
contain a sequence to optionally enforce ordering.
IBC packets also contain a `TimeoutHeight` and a `TimeoutTimestamp` that determine the deadline before the receiving module must process a packet.
Modules send custom application data to each other inside the `Data` `[]byte` field of the IBC packet.
Thus, packet data is opaque to IBC handlers. It is incumbent on a sender module to encode
their application-specific packet information into the `Data` field of packets. The receiver
module must decode that `Data` back to the original application data.
### [Receipts and timeouts](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
Since IBC works over a distributed network and relies on potentially faulty relayers to relay messages between ledgers,
IBC must handle the case where a packet does not get sent to its destination in a timely manner or at all. Packets must
specify a non-zero value for timeout height (`TimeoutHeight`) or timeout timestamp (`TimeoutTimestamp`), after which a packet can no longer be successfully received on the destination chain.
- The `timeoutHeight` indicates a consensus height on the destination chain after which the packet is no longer to be processed, and instead counts as having timed-out.
- The `timeoutTimestamp` indicates a timestamp on the destination chain after which the packet is no longer to be processed, and instead counts as having timed-out.
If the timeout passes without the packet being successfully received, the packet can no longer be
received on the destination chain. The sending module can timeout the packet and take appropriate actions.
If the timeout is reached, then a proof of packet timeout can be submitted to the original chain. The original chain can then perform
application-specific logic to timeout the packet, perhaps by rolling back the packet send changes (refunding senders any locked funds, etc).
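The core timeout condition can be sketched as follows (simplified; the real check compares a `Height` struct and a nanosecond timestamp against the destination chain's latest state):

```go
// hasTimedOut reports whether a packet can no longer be received:
// a zero timeout field means that dimension is disabled.
func hasTimedOut(timeoutHeight, destHeight, timeoutTimestamp, destTimestamp uint64) bool {
  heightElapsed := timeoutHeight != 0 && destHeight >= timeoutHeight
  timestampElapsed := timeoutTimestamp != 0 && destTimestamp >= timeoutTimestamp
  return heightElapsed || timestampElapsed
}
```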
- In `ORDERED` channels, a timeout of a single packet in the channel causes the channel to close.
- If packet sequence `n` times out, then a packet at sequence `k > n` cannot be received without violating the contract of `ORDERED` channels that packets are processed in the order that they are sent.
- Since `ORDERED` channels enforce this invariant, a proof that sequence `n` has not been received on the destination chain by the specified timeout of packet `n` is sufficient to timeout packet `n` and close the channel.
- In `UNORDERED` channels, the application-specific timeout logic for that packet is applied and the channel is not closed.
- Packets can be received in any order.
- IBC writes a packet receipt for each sequence received in the `UNORDERED` channel. This receipt does not contain information; it is simply a marker intended to signify that the `UNORDERED` channel has received a packet at the specified sequence.
- To timeout a packet on an `UNORDERED` channel, a proof is required that a packet receipt **does not exist** for the packet's sequence by the specified timeout.
For this reason, most modules should use `UNORDERED` channels as they require fewer liveness guarantees to function effectively for users of that channel.
### [Acknowledgments](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel)
Modules can also choose to write application-specific acknowledgments upon processing a packet. Acknowledgments can be done:
- Synchronously on `OnRecvPacket` if the module processes packets as soon as they are received from the IBC module.
- Asynchronously if the module processes packets at some later point after receiving them.
This acknowledgment data is opaque to IBC much like the packet `Data` and is treated by IBC as a simple byte string `[]byte`. Receiver modules must encode their acknowledgment so that the sender module can decode it correctly. The encoding must be negotiated between the two parties during version negotiation in the channel handshake.
The acknowledgment can encode whether the packet processing succeeded or failed, along with additional information that allows the sender module to take appropriate action.
After the acknowledgment has been written by the receiving chain, a relayer relays the acknowledgment back to the original sender module.
The original sender module then executes application-specific acknowledgment logic using the contents of the acknowledgment.
- If an acknowledgment indicates failure, packet-send changes can be rolled back (for example, refunding senders in ICS 20).
- After an acknowledgment is received successfully on the original sender chain, the corresponding packet commitment is deleted since it is no longer needed.
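A common pattern (used by ICS-20, sketched here in simplified form rather than the exact `04-channel` type) is a result/error envelope:

```go
// Acknowledgement is a simplified result/error envelope: exactly one of
// Result or Error is set, signalling success or failure to the sender.
type Acknowledgement struct {
  Result []byte `json:"result,omitempty"`
  Error  string `json:"error,omitempty"`
}

// Success reports whether packet processing succeeded.
func (a Acknowledgement) Success() bool {
  return a.Error == ""
}
```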
## Further readings and specs
If you want to learn more about IBC, check the following specifications:
- [IBC specification overview](https://github.com/cosmos/ibc/blob/master/README.md)

---
title: Integration
sidebar_label: Integration
sidebar_position: 2
slug: /ibc/integration
---
# Integration
:::note Synopsis
Learn how to integrate IBC into your application
:::
This document outlines the required steps to integrate and configure the [IBC
module](https://github.com/cosmos/ibc-go/tree/main/modules/core) in your Cosmos SDK application and enable sending fungible token transfers to other chains. An [example app using ibc-go v10](https://github.com/gjermundgaraba/probe/tree/ibc/v10) is available for reference.
## Integrating the IBC module
Integrating the IBC module into your SDK-based application is straightforward. The general changes can be summarized in the following steps:
- [Define additional `Keeper` fields for the new modules on the `App` type](#add-application-fields-to-app).
- [Add the module's `StoreKey`s and initialize their `Keeper`s](#configure-the-keepers).
- [Create Application Stacks with Middleware](#create-application-stacks-with-middleware)
- [Set up IBC router and add route for the `transfer` module](#register-module-routes-in-the-ibc-router).
- [Grant permissions to `transfer`'s `ModuleAccount`](#module-account-permissions).
- [Add the modules to the module `Manager`](#module-manager-and-simulationmanager).
- [Update the module `SimulationManager` to enable simulations](#module-manager-and-simulationmanager).
- [Integrate light client modules (e.g. `07-tendermint`)](#integrating-light-clients).
- [Add modules to `Begin/EndBlockers` and `InitGenesis`](#application-abci-ordering).
### Add application fields to `App`
We need to register the core `ibc` and `transfer` `Keeper`s. To support the use of IBC v2, `transferv2` and `callbacksv2` must also be registered as follows:
```go title="app.go"
import (
  // other imports
  // ...
  ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper"
  ibctransferkeeper "github.com/cosmos/ibc-go/v10/modules/apps/transfer/keeper"

  // ibc v2 imports
  transferv2 "github.com/cosmos/ibc-go/v10/modules/apps/transfer/v2"
  ibccallbacksv2 "github.com/cosmos/ibc-go/v10/modules/apps/callbacks/v2"
)

type App struct {
  // baseapp, keys and subspaces definitions

  // other keepers
  // ...
  IBCKeeper      *ibckeeper.Keeper        // IBC Keeper must be a pointer in the app, so we can SetRouter on it correctly
  TransferKeeper ibctransferkeeper.Keeper // for cross-chain fungible token transfers
  // ...
  // module and simulation manager definitions
}
```
### Configure the `Keeper`s
Initialize the IBC `Keeper`s (for core `ibc` and `transfer` modules), and any additional modules you want to include.
:::note Notice
The capability module has been removed in ibc-go v10; therefore, the `ScopedKeeper` has also been removed.
:::
```go
import (
  // other imports
  // ...
  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"

  ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported"
  ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper"
  "github.com/cosmos/ibc-go/v10/modules/apps/transfer"
  ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
  ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint"
)

func NewApp(...args) *App {
  // define codecs and baseapp

  // ... other module keepers

  // Create IBC Keeper
  app.IBCKeeper = ibckeeper.NewKeeper(
    appCodec,
    runtime.NewKVStoreService(keys[ibcexported.StoreKey]),
    app.GetSubspace(ibcexported.ModuleName),
    app.UpgradeKeeper,
    authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  )

  // Create Transfer Keeper
  app.TransferKeeper = ibctransferkeeper.NewKeeper(
    appCodec,
    runtime.NewKVStoreService(keys[ibctransfertypes.StoreKey]),
    app.GetSubspace(ibctransfertypes.ModuleName),
    app.IBCKeeper.ChannelKeeper,
    app.IBCKeeper.ChannelKeeper,
    app.MsgServiceRouter(),
    app.AccountKeeper,
    app.BankKeeper,
    authtypes.NewModuleAddress(govtypes.ModuleName).String(),
  )

  // ... continues
}
```
### Create Application Stacks with Middleware
Middleware stacks in IBC allow you to wrap an `IBCModule` with additional logic for packets and acknowledgments. Middleware forms a chain of handlers that execute in order. The transfer stack below shows how to wire up transfer to use packet forward middleware and the callbacks middleware. Note that the order is important.
```go
// Create Transfer Stack for IBC Classic
maxCallbackGas := uint64(10_000_000)
wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper)

var transferStack porttypes.IBCModule
transferStack = transfer.NewIBCModule(app.TransferKeeper)

// callbacks wraps the transfer stack as its base app, and uses PacketForwardKeeper as the ICS4Wrapper
// i.e. packet-forward-middleware is higher on the stack and sits between callbacks and the ibc channel keeper
// Since this is the lowest level middleware of the transfer stack, it should be the first entrypoint for transfer keeper's
// WriteAcknowledgement.
cbStack := ibccallbacks.NewIBCMiddleware(transferStack, app.PacketForwardKeeper, wasmStackIBCHandler, maxCallbackGas)
transferStack = packetforward.NewIBCMiddleware(
  cbStack,
  app.PacketForwardKeeper,
  0, // retries on timeout
  packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp,
)
```
#### IBC v2 Application Stack
For IBC v2, an example transfer stack is shown below. In this case, the transfer stack uses the callbacks middleware.
```go
// Create IBC v2 transfer middleware stack
// the callbacks gas limit is recommended to be 10M for use with wasm contracts
maxCallbackGas := uint64(10_000_000)
wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper)
var ibcv2TransferStack ibcapi.IBCModule
ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper)
ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas)
```
### Register module routes in the IBC `Router`
IBC needs to know which module is bound to which port so that it can route packets to the
appropriate module and call the appropriate callbacks. The port to module name mapping is handled by
IBC's port `Keeper`. However, the mapping from module name to the relevant callbacks is accomplished
by the port
[`Router`](https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/router.go) on the
`ibc` module.
Adding the module routes allows the IBC handler to call the appropriate callback when processing a channel handshake or a packet.
Currently, a `Router` is static so it must be initialized and set correctly on app initialization.
Once the `Router` has been set, no new routes can be added.
```go title="app.go"
import (
  // other imports
  // ...
  porttypes "github.com/cosmos/ibc-go/v10/modules/core/05-port/types"
  ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
)

func NewApp(...args) *App {
  // .. continuation from above

  // Create static IBC router, add transfer module route, then set and seal it
  ibcRouter := porttypes.NewRouter()
  ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack)
  // Setting Router will finalize all routes by sealing router
  // No more routes can be added
  app.IBCKeeper.SetRouter(ibcRouter)

  // ... continues
```
#### IBC v2 Router
With IBC v2, there is a new [router](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/router.go) that needs to register the routes for a portID to a given IBCModule. It supports two kinds of routes: direct routes and prefix-based routes. The direct routes match one specific port ID to a module, while the prefix-based routes match any port ID with a specific prefix to a module.
For example, if a direct route named `someModule` exists, only messages addressed to exactly that port ID will be passed to the corresponding module.
However, if instead, `someModule` is a prefix-based route, port IDs like `someModuleRandomPort1`, `someModuleRandomPort2`, etc., will be passed to the module.
Note that the router will panic when you add a route that conflicts with an already existing route. This is also the case if you add a prefix-based route that conflicts with an existing direct route or vice versa.
```go
// IBC v2 router creation
ibcRouterV2 := ibcapi.NewRouter()
ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack)
// Setting Router will finalize all routes by sealing router
// No more routes can be added
app.IBCKeeper.SetRouterV2(ibcRouterV2)
```
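The matching behaviour described above can be sketched with a hypothetical resolver (the real router returns the registered `IBCModule` rather than a name):

```go
import "strings"

// resolveRoute first checks direct routes, then falls back to
// prefix-based routes, returning the matched route name.
func resolveRoute(direct map[string]bool, prefixes []string, portID string) (string, bool) {
  if direct[portID] {
    return portID, true
  }
  for _, p := range prefixes {
    if strings.HasPrefix(portID, p) {
      return p, true
    }
  }
  return "", false
}
```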
### Module `Manager` and `SimulationManager`
In order to use IBC, we need to add the new modules to the module `Manager` and, if your application supports [simulations](https://docs.cosmos.network/main/learn/advanced/simulation), to the `SimulationManager`.
```go title="app.go"
import (
  // other imports
  // ...
  "github.com/cosmos/cosmos-sdk/types/module"

  ibc "github.com/cosmos/ibc-go/v10/modules/core"
  "github.com/cosmos/ibc-go/v10/modules/apps/transfer"
)

func NewApp(...args) *App {
  // ... continuation from above

  app.ModuleManager = module.NewManager(
    // other modules
    // ...
    // highlight-start
+   ibc.NewAppModule(app.IBCKeeper),
+   transfer.NewAppModule(app.TransferKeeper),
    // highlight-end
  )

  // ...

  app.simulationManager = module.NewSimulationManagerFromAppModules(
    // other modules
    // ...
    app.ModuleManager.Modules,
    map[string]module.AppModuleSimulation{},
  )

  // ... continues
```
### Module account permissions
After that, we need to grant `Minter` and `Burner` permissions to
the `transfer` `ModuleAccount` to mint and burn relayed tokens.
```go title="app.go"
import (
  // other imports
  // ...
  "github.com/cosmos/cosmos-sdk/types/module"
  authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
  // highlight-next-line
+ ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
)

// app.go
var (
  // module account permissions
  maccPerms = map[string][]string{
    // other module accounts permissions
    // ...
    ibctransfertypes.ModuleName: {authtypes.Minter, authtypes.Burner},
  }
)
```
### Integrating light clients
> Note that from v10 onwards, all light clients are expected to implement the [`LightClientInterface` interface](../03-light-clients/01-developer-guide/02-light-client-module.md#implementing-the-lightclientmodule-interface) defined by core IBC, and have to be explicitly registered in a chain's app.go. This is in contrast to earlier versions of ibc-go, in which `07-tendermint` and `06-solomachine` were included out of the box. Follow the steps below to integrate the `07-tendermint` light client.
All light clients must be registered with the `module.Manager` in a chain's app.go file. The following code example shows how to instantiate the `07-tendermint` light client module and register its `ibctm.AppModule`.
```go title="app.go"
import (
  // other imports
  // ...
  "github.com/cosmos/cosmos-sdk/types/module"
  // highlight-next-line
+ ibctm "github.com/cosmos/ibc-go/v10/modules/light-clients/07-tendermint"
)

// app.go

// after sealing the IBC router
clientKeeper := app.IBCKeeper.ClientKeeper
storeProvider := app.IBCKeeper.ClientKeeper.GetStoreProvider()
tmLightClientModule := ibctm.NewLightClientModule(appCodec, storeProvider)
clientKeeper.AddRoute(ibctm.ModuleName, &tmLightClientModule)

// ...

app.ModuleManager = module.NewManager(
  // ...
  ibc.NewAppModule(app.IBCKeeper),
  transfer.NewAppModule(app.TransferKeeper), // i.e ibc-transfer module
  // register light clients on IBC
  // highlight-next-line
+ ibctm.NewAppModule(tmLightClientModule),
)
```
#### Allowed Clients Params
The allowed clients parameter defines an allow list of client types supported by the chain. The
default value is a single-element list containing the [`AllowedClients`](https://github.com/cosmos/ibc-go/blob/main/modules/core/02-client/types/client.pb.go#L248-L253) wildcard (`"*"`). Alternatively, the parameter
may be set with a list of client types (e.g. `"06-solomachine","07-tendermint","09-localhost"`).
A client type that is not registered on this list will fail upon creation or on genesis validation.
Note that, since the client type is an arbitrary string, chains must not register two light clients
which return the same value for the `ClientType()` function; otherwise, the allow-list check can be
bypassed.
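The allow-list check can be sketched as follows (simplified; the actual validation lives in `02-client`):

```go
// isAllowedClient reports whether clientType passes the AllowedClients
// parameter, where "*" acts as a wildcard permitting every client type.
func isAllowedClient(allowed []string, clientType string) bool {
  for _, a := range allowed {
    if a == "*" || a == clientType {
      return true
    }
  }
  return false
}
```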
### Application ABCI ordering
IBC relies on the concept of `HistoricalInfo`, which is stored by the Cosmos SDK `x/staking` module. The number of records retained by `x/staking` is controlled by the `HistoricalEntries` parameter, which stores `HistoricalInfo` on a per-height basis.
Each entry contains the historical `Header` and `ValidatorSet` of this chain, recorded
at each height during the `BeginBlock` call. `HistoricalInfo` is required to introspect a blockchain's prior state at a given height in order to verify the light client `ConsensusState` during the
connection handshake.
```go title="app.go"
import (
  // other imports
  // ...
  stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"

  ibcexported "github.com/cosmos/ibc-go/v10/modules/core/exported"
  ibckeeper "github.com/cosmos/ibc-go/v10/modules/core/keeper"
  ibctransfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
)

func NewApp(...args) *App {
  // ... continuation from above

  // add x/staking, ibc and transfer modules to BeginBlockers
  app.ModuleManager.SetOrderBeginBlockers(
    // other modules ...
    stakingtypes.ModuleName,
    ibcexported.ModuleName,
    ibctransfertypes.ModuleName,
  )
  app.ModuleManager.SetOrderEndBlockers(
    // other modules ...
    stakingtypes.ModuleName,
    ibcexported.ModuleName,
    ibctransfertypes.ModuleName,
  )

  // ...

  genesisModuleOrder := []string{
    // other modules
    // ...
    ibcexported.ModuleName,
    ibctransfertypes.ModuleName,
  }
  app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...)

  // ... continues
```
That's it! You have now wired up the IBC module and the `transfer` module, and are now able to send fungible tokens across
different chains. If you want a broader view of the changes, take a look at the SDK's
[`SimApp`](https://github.com/cosmos/ibc-go/blob/main/testing/simapp/app.go).

---
title: IBC v2 Applications
sidebar_label: IBC v2 Applications
sidebar_position: 0
slug: /ibc/apps/ibcv2apps
---
:::note Synopsis
Learn how to implement IBC v2 applications
:::
To build an IBC v2 application the following steps are required:
1. [Implement the `IBCModule` interface](#implement-the-ibcmodule-interface)
2. [Bind Ports](#bind-ports)
3. [Implement the IBCModule Keeper](#implement-the-ibcmodule-keeper)
4. [Implement application payload and success acknowledgement](#packets-and-payloads)
5. [Set and Seal the IBC Router](#routing)
Highlighted improvements for app developers with IBC v2:
- No need to support channel handshake callbacks
- Flexibility on upgrading application versioning, no need to use channel upgradability to renegotiate an application version, simply support the application version on both sides of the connection.
- Flexibility to choose your desired encoding type.
## Implement the `IBCModule` interface
The Cosmos SDK expects all IBC modules to implement the [`IBCModule`
interface](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L9-L53). This interface contains all of the callbacks IBC expects modules to implement. Note that for IBC v2, an application developer no longer needs to implement callbacks for the channel handshake. Note that this interface is distinct from the [porttypes.IBCModule interface][porttypes.IBCModule] used for IBC Classic.
```go
// IBCModule implements the application interface given the keeper.
// The implementation of the IBCModule interface could for example be in a file called ibc_module.go,
// but ultimately file structure is up to the developer
type IBCModule struct {
keeper keeper.Keeper
}
```
Additionally, in the `module.go` file, add the following line:
```go
var (
_ module.AppModule = AppModule{}
_ module.AppModuleBasic = AppModuleBasic{}
// Add this line
_ porttypes.IBCModule = IBCModule{}
)
```
### Packet callbacks
IBC expects modules to implement callbacks for handling the packet lifecycle, as defined in the `IBCModule` interface.
With IBC v2, modules are not directly connected. Instead, a pair of clients is connected, and each side registers the counterparty client ID. Packets are routed to the relevant application module by the portID registered in the Router, and relayers transport packets between the routers/packet handlers on each chain.
![IBC packet flow diagram](./images/packet_flow_v2.png)
Briefly, a successful packet flow works as follows:
1. A user sends a message to the IBC packet handler
2. The IBC packet handler validates the message, creates the packet and stores the commitment and returns the packet sequence number. The [`Payload`](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L26-L38), which contains application specific data, is routed to the relevant application.
3. If the counterparty writes an acknowledgement of the packet then the sending chain will process the acknowledgement.
4. If the packet is not successfully received before the timeout, then the sending chain processes the packet's timeout.
#### Sending packets
[`MsgSendPacket`](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/tx.pb.go#L69-L75) is sent by a user to the [channel v2 message server](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/keeper/msg_server.go), which calls `ChannelKeeperV2.SendPacket`. This validates the message, creates the packet, stores the commitment and returns the packet sequence number. The application must specify its own payload which is used by the application and sent with `MsgSendPacket`.
An application developer needs to implement the custom logic the application executes when a packet is sent.
```go
// OnSendPacket logic
func (im *IBCModule) OnSendPacket(
ctx sdk.Context,
sourceChannel string,
destinationChannel string,
sequence uint64,
payload channeltypesv2.Payload,
signer sdk.AccAddress) error {
// implement any validation
// implement payload decoding and validation
// call the relevant keeper method for state changes as a result of application logic
// emit events or telemetry data
return nil
}
```
#### Receiving packets
To handle receiving packets, the module must implement the `OnRecvPacket` callback. An application module should validate and confirm support for the version and encoding method used, as IBC v2 gives greater flexibility to support a range of versions and encoding methods.
The `OnRecvPacket` callback is invoked by the IBC module after the packet has been proven to be valid and correctly processed by the IBC
keepers.
Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state
changes given the packet data without worrying about whether the packet is valid or not.
Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface.
The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
acknowledgement back to the sender module.
The state changes that occur during this callback depend on the result:
- if packet processing succeeded, the result status is `PacketStatus_Success` and the application acknowledgement is written
- if packet processing failed, the result status is `PacketStatus_Failure` and the standardised error acknowledgement (`ackErr`) is written
Note that with IBC v2 the error acknowledgement is standardised and cannot be customised.
```go
func (im IBCModule) OnRecvPacket(
ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress) channeltypesv2.RecvPacketResult {
// do application state changes based on payload and return the result
// state changes should be written via the `RecvPacketResult`
return recvResult
}
```
#### Acknowledging packets
After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
then process the acknowledgement using the `OnAcknowledgementPacket` callback. The content of the
acknowledgement is entirely up to the application developer.
IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
is responsible for decoding the acknowledgement and processing it. The acknowledgement is serialised and deserialised using JSON.
```go
func (im IBCModule) OnAcknowledgementPacket(
ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, acknowledgement []byte, payload channeltypesv2.Payload, relayer sdk.AccAddress) error {
// check the type of the acknowledgement
// if not ackErr, unmarshal the JSON acknowledgement and unmarshal packet payload
// perform any application specific logic for processing acknowledgement
// emit events
return nil
}
```
#### Timeout packets
If the timeout for a packet is reached before the packet is successfully received or the receiving
chain can no longer process the packet the sending chain must process the timeout using
`OnTimeoutPacket`. Again the IBC module will verify that the timeout is
valid, so our module only needs to implement the state machine logic for what to do once a
timeout is reached and the packet can no longer be received.
```go
func (im IBCModule) OnTimeoutPacket(
ctx sdk.Context, sourceChannel string, destinationChannel string, sequence uint64, payload channeltypesv2.Payload, relayer sdk.AccAddress) error {
	// unmarshal packet data
	// do custom timeout logic, e.g. refund tokens for transfer
	return nil
}
```
#### PacketDataUnmarshaler
IBC v2 applications are required to implement the `PacketDataUnmarshaler` interface because the encoding type is specified in the `Payload` and multiple encoding types are supported.
```go
type PacketDataUnmarshaler interface {
// UnmarshalPacketData unmarshals the packet data into a concrete type
// the payload is provided and the packet data interface is returned
UnmarshalPacketData(payload channeltypesv2.Payload) (interface{}, error)
}
```
## Bind Ports
Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented:
> Note that `portID` does not refer to a numerical network port, like the `8080` in `localhost:8080`. Rather, it identifies the application module the port is bound to. For IBC modules built with the Cosmos SDK it defaults to the module's name, and for CosmWasm contracts it defaults to the contract address.
Add port ID to the `GenesisState` proto definition:
```protobuf
message GenesisState {
string port_id = 1;
// other fields
}
```
You can see an example for transfer [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/applications/transfer/v1/genesis.proto).
Add port ID as a key to the module store:
```go
// x/<moduleName>/types/keys.go
const (
// ModuleName defines the IBC Module name
ModuleName = "moduleName"
// PortID is the default port id that module binds to
PortID = "portID"
// ...
)
```
Note that with IBC v2, the version does not need to be added as a key (as required with IBC classic) because versioning of applications is now contained within the [packet Payload](https://github.com/cosmos/ibc-go/blob/main/modules/core/04-channel/v2/types/packet.go#L23-L32).
Add port ID to `x/<moduleName>/types/genesis.go`:
```go
// in x/<moduleName>/types/genesis.go
// DefaultGenesisState returns a GenesisState
// with the portID defined in keys.go
func DefaultGenesisState() *GenesisState {
return &GenesisState{
PortId: PortID,
// additional k-v fields
}
}
// Validate performs basic genesis state validation
// returning an error upon any failure.
func (gs GenesisState) Validate() error {
if err := host.PortIdentifierValidator(gs.PortId); err != nil {
return err
}
//additional validations
return gs.Params.Validate()
}
```
Set the port in the module keeper, for use in `InitGenesis`:
```go
// SetPort sets the portID for the transfer module. Used in InitGenesis
func (k Keeper) SetPort(ctx sdk.Context, portID string) {
store := k.storeService.OpenKVStore(ctx)
if err := store.Set(types.PortKey, []byte(portID)); err != nil {
panic(err)
}
}
// Initialize any other module state, like params with SetParams.
func (k Keeper) SetParams(ctx sdk.Context, params types.Params) {
store := k.storeService.OpenKVStore(ctx)
bz := k.cdc.MustMarshal(&params)
if err := store.Set([]byte(types.ParamsKey), bz); err != nil {
panic(err)
}
}
// ...
```
The module is now bound to the desired port. Setting and sealing the router happens during creation of the IBC router.
## Implement the IBCModule Keeper
More information on implementing the IBCModule keepers can be found in the [keepers section](04-keeper.md).
## Packets and Payloads
Application developers need to define the `Payload` contained within an [IBC packet](https://github.com/cosmos/ibc-go/blob/fe25b216359fab71b3228461b05dbcdb1a554158/proto/ibc/core/channel/v2/packet.proto#L11-L24). Note that in IBC v2 the `timeoutHeight` has been removed and only `timeoutTimestamp` is used. A packet can contain multiple payloads in a list. Each Payload includes:
```go
// Payload contains the source and destination ports and payload for the application (version, encoding, raw bytes)
message Payload {
// specifies the source port of the packet.
string source_port = 1;
// specifies the destination port of the packet.
string destination_port = 2;
// version of the specified application.
string version = 3;
// the encoding used for the provided value.
string encoding = 4;
// the raw bytes for the payload.
bytes value = 5;
}
```
Note that compared to IBC classic, where the application's version and encoding are negotiated during the channel handshake, IBC v2 provides enhanced flexibility: the application version and encoding used by the `Payload` are defined in the `Payload` itself. An example `Payload` is illustrated below:
```go
type MyAppPayloadData struct {
Field1 string
Field2 uint64
}
// Marshal your payload to bytes using your encoding
bz, err := json.Marshal(MyAppPayloadData{Field1: "example", Field2: 7})
// Wrap it in a channel v2 Payload
payload := channeltypesv2.NewPayload(
sourcePort,
destPort,
"my-app-v1", // App version
channeltypesv2.EncodingJSON, // Encoding type, e.g. JSON, protobuf or ABI
bz, // Encoded data
)
```
It is also possible to define a custom success acknowledgement, which is returned to the sender in the `RecvPacketResult` if the packet is successfully received. Note that if packet processing fails, it is not possible to define a custom error acknowledgement: a constant `ackErr` is returned.
## Routing
More information on implementing the IBC Router can be found in the [routing section](../03-apps/06-routing.md).
[porttypes.IBCModule]: https://github.com/cosmos/ibc-go/blob/main/modules/core/05-port/types/module.go

---
title: IBC Applications
sidebar_label: IBC Applications
sidebar_position: 1
slug: /ibc/apps/apps
---
# IBC Applications
:::warning
This page is relevant for IBC Classic. Navigate to the IBC v2 applications page for information on v2 apps.
:::
Learn how to configure your application to use IBC and send data packets to other chains.
This document serves as a guide for developers who want to write their own Inter-blockchain
Communication Protocol (IBC) applications for custom use cases.
Due to the modular design of the IBC protocol, IBC
application developers do not need to concern themselves with the low-level details of clients,
connections, and proof verification, although a brief explanation is given. The document then goes into detail on the abstraction layer most relevant for application
developers (channels and ports), and describes how to define your own custom packets and
`IBCModule` callbacks.
To have your module interact over IBC you must: bind to a port(s), define your own packet data and acknowledgement structs as well as how to encode/decode them, and implement the
`IBCModule` interface. Below is a more detailed explanation of how to write an IBC application
module correctly.
:::note
## Pre-requisites Readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
## Create a custom IBC application module
### Implement `IBCModule` Interface and callbacks
The Cosmos SDK expects all IBC modules to implement the [`IBCModule`
interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This
interface contains all of the callbacks IBC expects modules to implement. This section will describe
the callbacks that are called during channel handshake execution.
Here are the channel handshake callbacks that modules are expected to implement:
```go
// Called by IBC Handler on MsgOpenInit
func (k Keeper) OnChanOpenInit(ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID string,
channelID string,
counterparty channeltypes.Counterparty,
version string,
) error {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
// Examples: Abort if order == UNORDERED,
// Abort if version is unsupported
err := checkArguments(args)
return err
}
// Called by IBC Handler on MsgOpenTry
OnChanOpenTry(
ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID,
channelID string,
counterparty channeltypes.Counterparty,
counterpartyVersion string,
) (string, error) {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
if err := checkArguments(args); err != nil {
return "", err
}
// Construct application version
// IBC applications must return the appropriate application version
// This can be a simple string or it can be a complex version constructed
// from the counterpartyVersion and other arguments.
// The version returned will be the channel version used for both channel ends.
appVersion := negotiateAppVersion(counterpartyVersion, args)
return appVersion, nil
}
// Called by IBC Handler on MsgOpenAck
OnChanOpenAck(
ctx sdk.Context,
portID,
channelID string,
counterpartyVersion string,
) error {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
// Called by IBC Handler on MsgOpenConfirm
OnChanOpenConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
```
The channel closing handshake will also invoke module callbacks that can return errors to abort the
closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls
`ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`.
```go
// Called by IBC Handler on MsgCloseInit
OnChanCloseInit(
ctx sdk.Context,
portID,
channelID string,
) error {
// ... do custom finalization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
// Called by IBC Handler on MsgCloseConfirm
OnChanCloseConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
// ... do custom finalization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
```
#### Channel Handshake Version Negotiation
Application modules are expected to verify versioning used during the channel handshake procedure.
- `ChanOpenInit` callback should verify that the `MsgChanOpenInit.Version` is valid
- `ChanOpenTry` callback should construct the application version used for both channel ends. If no application version can be constructed, it must return an error.
- `ChanOpenAck` callback should verify that the `MsgChanOpenAck.CounterpartyVersion` is valid and supported.
IBC expects application modules to perform application version negotiation in `OnChanOpenTry`. The negotiated version
must be returned to core IBC. If the version cannot be negotiated, an error should be returned.
Versions must be strings but can implement any versioning structure. If your application plans to
have linear releases then semantic versioning is recommended. If your application plans to release
various features in between major releases then it is advised to use the same versioning scheme
as IBC. This versioning scheme specifies a version identifier and compatible feature set with
that identifier. Valid version selection includes selecting a compatible version identifier with
a subset of features supported by your application for that version. The struct used for this
scheme can be found in `03-connection/types`.
Since the version type is a string, applications have the ability to do simple version verification
via string matching or they can use the already implemented versioning system and pass the proto
encoded version into each handshake call as necessary.
ICS20 currently implements basic string matching with a single supported version.
### Custom Packets
Modules connected by a channel must agree on what application data they are sending over the
channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up
to each application module to determine how to implement this agreement. However, for most
applications this will happen as a version negotiation during the channel handshake. While more
complex version negotiation is possible to implement inside the channel opening handshake, a very
simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).
Thus, a module must define its custom packet data structure, along with a well-defined way to
encode and decode it to and from `[]byte`.
```go
// Custom packet data defined in application module
type CustomPacketData struct {
// Custom fields ...
}
EncodePacketData(packetData CustomPacketData) []byte {
// encode packetData to bytes
}
DecodePacketData(encoded []byte) (CustomPacketData) {
// decode from bytes to packet data
}
```
Then a module must encode its packet data before sending it through IBC.
```go
// Sending custom application packet data
data := EncodePacketData(customPacketData)
packet.Data = data
// Send packet to IBC, authenticating with channelCap
sequence, err := IBCChannelKeeper.SendPacket(
ctx,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data,
)
```
A module receiving a packet must decode the `PacketData` into a structure it expects so that it can
act on it.
```go
// Receiving custom application packet data (in OnRecvPacket)
packetData := DecodePacketData(packet.Data)
// handle received custom packet data
```
#### Packet Flow Handling
Just as IBC expects modules to implement callbacks for channel handshakes, IBC also expects modules
to implement callbacks for handling the packet flow through a channel.
Once a module A and module B are connected to each other, relayers can start relaying packets and
acknowledgements back and forth on the channel.
![IBC packet flow diagram](https://media.githubusercontent.com/media/cosmos/ibc/old/spec/ics-004-channel-and-packet-semantics/channel-state-machine.png)
Briefly, a successful packet flow works as follows:
1. module A sends a packet through the IBC module
2. the packet is received by module B
3. if module B writes an acknowledgement of the packet then module A will process the
acknowledgement
4. if the packet is not successfully received before the timeout, then module A processes the
packet's timeout.
##### Sending Packets
Modules do not send packets through callbacks, since the modules initiate the action of sending
packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC
module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a
packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`.
```go
// Sending custom application packet data
data := EncodePacketData(customPacketData)
// Send packet to IBC, authenticating with channelCap
sequence, err := IBCChannelKeeper.SendPacket(
ctx,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data,
)
```
##### Receiving Packets
To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets
invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC
keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state
changes given the packet data without worrying about whether the packet is valid or not.
Modules may return to the IBC handler an acknowledgement which implements the Acknowledgement interface.
The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
acknowledgement back to the sender module.
The state changes that occurred during this callback will only be written if:
- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement
- the acknowledgement returned is nil, indicating that an asynchronous process is occurring
NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes
when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written
for asynchronous acknowledgements.
```go
OnRecvPacket(
ctx sdk.Context,
packet channeltypes.Packet,
) ibcexported.Acknowledgement {
// Decode the packet data
packetData := DecodePacketData(packet.Data)
// do application state changes based on packet data and return the acknowledgement
// NOTE: The acknowledgement will indicate to the IBC handler if the application
// state changes should be written via the `Success()` function. Application state
// changes are only written if the acknowledgement is successful or the acknowledgement
// returned is nil indicating that an asynchronous acknowledgement will occur.
ack := processPacket(ctx, packet, packetData)
return ack
}
```
The Acknowledgement interface:
```go
// Acknowledgement defines the interface used to return
// acknowledgements in the OnRecvPacket callback.
type Acknowledgement interface {
Success() bool
Acknowledgement() []byte
}
```
### Acknowledgements
Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
will be written once the packet has been processed by the application which may be well after the packet receipt.
NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
for a packet as soon as it has been received from the IBC module.
This acknowledgement can then be relayed back to the original sender chain, which can take action
depending on the contents of the acknowledgement.
Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
receive acknowledgements to and from the IBC module as byte strings.
Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an
acknowledgement struct along with encoding and decoding it, is very similar to the packet data
example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):
```proto
// Acknowledgement is the recommended acknowledgement format to be used by
// app-specific protocols.
// NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental
// conflicts with other protobuf message formats used for acknowledgements.
// The first byte of any message with this format will be the non-ASCII values
// `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS:
// https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope
message Acknowledgement {
// response contains either a result or an error and must be non-empty
oneof response {
bytes result = 21;
string error = 22;
}
}
```
#### Acknowledging Packets
After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
then process the acknowledgement using the `OnAcknowledgementPacket` callback. The content of the
acknowledgement is entirely up to the modules on the channel (just like the packet data); however, it
may often contain information on whether the packet was successfully processed along
with some additional data that could be useful for remediation if the packet processing failed.
Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and
acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
is responsible for decoding the acknowledgement and processing it.
```go
OnAcknowledgementPacket(
ctx sdk.Context,
packet channeltypes.Packet,
acknowledgement []byte,
) (*sdk.Result, error) {
// Decode acknowledgement
ack := DecodeAcknowledgement(acknowledgement)
// process ack
res, err := processAck(ack)
return res, err
}
```
#### Timeout Packets
If the timeout for a packet is reached before the packet is successfully received or the
counterparty channel end is closed before the packet is successfully received, then the receiving
chain can no longer process it. Thus, the sending chain must process the timeout using
`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is
indeed valid, so our module only needs to implement the state machine logic for what to do once a
timeout is reached and the packet can no longer be received.
```go
OnTimeoutPacket(
ctx sdk.Context,
packet channeltypes.Packet,
) (*sdk.Result, error) {
// do custom timeout logic
}
```
### Routing
As mentioned above, modules must implement the IBC module interface (which contains both channel
handshake callbacks and packet handling callbacks). The concrete implementation of this interface
must be registered with the module name as a route on the IBC `Router`.
```go
// app.go
func NewApp(...args) *App {
// ...
// Create static IBC router, add module routes, then set and seal it
ibcRouter := port.NewRouter()
ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule)
// Note: moduleCallbacks must implement IBCModule interface
ibcRouter.AddRoute(moduleName, moduleCallbacks)
// Setting Router will finalize all routes by sealing router
// No more routes can be added
app.IBCKeeper.SetRouter(ibcRouter)
// ... continues
```
## Working Example
For a real working example of an IBC application, you can look through the `ibc-transfer` module
which implements everything discussed above.
Here are the useful parts of the module to look at:
[Binding to transfer
port](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/genesis.go)
[Sending transfer
packets](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/relay.go)
[Implementing IBC
callbacks](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/ibc_module.go)

---
title: Implement IBCModule interface and callbacks
sidebar_label: Implement IBCModule interface and callbacks
sidebar_position: 2
slug: /ibc/apps/ibcmodule
---
# Implement `IBCModule` interface and callbacks
:::note Synopsis
Learn how to implement the `IBCModule` interface and all of the callbacks it requires.
:::
The Cosmos SDK expects all IBC modules to implement the [`IBCModule`
interface](https://github.com/cosmos/ibc-go/tree/main/modules/core/05-port/types/module.go). This interface contains all of the callbacks IBC expects modules to implement. They include callbacks related to channel handshake, closing and packet callbacks (`OnRecvPacket`, `OnAcknowledgementPacket` and `OnTimeoutPacket`).
```go
// IBCModule implements the ICS26 interface for given the keeper.
// The implementation of the IBCModule interface could for example be in a file called ibc_module.go,
// but ultimately file structure is up to the developer
type IBCModule struct {
keeper keeper.Keeper
}
```
Additionally, in the `module.go` file, add the following line:
```go
var (
_ module.AppModule = AppModule{}
_ module.AppModuleBasic = AppModuleBasic{}
// Add this line
_ porttypes.IBCModule = IBCModule{}
)
```
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
## Channel handshake callbacks
This section will describe the callbacks that are called during channel handshake execution.
Here are the channel handshake callbacks that modules are expected to implement:
> Note that some of the code below is *pseudo code*, indicating what actions need to happen but leaving it up to the developer to implement a custom implementation. E.g. the `checkArguments` and `negotiateAppVersion` functions.
```go
// Called by IBC Handler on MsgOpenInit
func (im IBCModule) OnChanOpenInit(ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID string,
channelID string,
counterparty channeltypes.Counterparty,
version string,
) (string, error) {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
// Examples:
// - Abort if order == UNORDERED,
// - Abort if version is unsupported
if err := checkArguments(args); err != nil {
return "", err
}
return version, nil
}
// Called by IBC Handler on MsgOpenTry
func (im IBCModule) OnChanOpenTry(
ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID,
channelID string,
counterparty channeltypes.Counterparty,
counterpartyVersion string,
) (string, error) {
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
if err := checkArguments(args); err != nil {
return "", err
}
// Construct application version
// IBC applications must return the appropriate application version
// This can be a simple string or it can be a complex version constructed
// from the counterpartyVersion and other arguments.
// The version returned will be the channel version used for both channel ends.
appVersion := negotiateAppVersion(counterpartyVersion, args)
return appVersion, nil
}
// Called by IBC Handler on MsgOpenAck
func (im IBCModule) OnChanOpenAck(
ctx sdk.Context,
portID,
channelID string,
counterpartyVersion string,
) error {
if counterpartyVersion != types.Version {
return sdkerrors.Wrapf(types.ErrInvalidVersion, "invalid counterparty version: %s, expected %s", counterpartyVersion, types.Version)
}
// do custom logic
return nil
}
// Called by IBC Handler on MsgOpenConfirm
func (im IBCModule) OnChanOpenConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
// do custom logic
return nil
}
```
### Channel closing callbacks
The channel closing handshake will also invoke module callbacks that can return errors to abort the closing handshake. Closing a channel is a 2-step handshake: the initiating chain calls `ChanCloseInit` and the finalizing chain calls `ChanCloseConfirm`.
Currently, all IBC modules in this repository return an error for `OnChanCloseInit` to prevent the channels from closing. This is because any user can call `ChanCloseInit` by submitting a `MsgChannelCloseInit` transaction.
```go
// Called by IBC Handler on MsgCloseInit
func (im IBCModule) OnChanCloseInit(
ctx sdk.Context,
portID,
channelID string,
) error {
// ... do custom finalization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
// Called by IBC Handler on MsgCloseConfirm
func (im IBCModule) OnChanCloseConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
// ... do custom finalization logic
// Use above arguments to determine if we want to abort handshake
err := checkArguments(args)
return err
}
```
### Channel handshake version negotiation
Application modules are expected to verify versioning used during the channel handshake procedure.
- `OnChanOpenInit` will verify that the relayer-chosen parameters
are valid and perform any custom `INIT` logic.
It may return an error if the chosen parameters are invalid
in which case the handshake is aborted.
If the provided version string is non-empty, `OnChanOpenInit` should return
the version string if valid or an error if the provided version is invalid.
**If the version string is empty, `OnChanOpenInit` is expected to
return a default version string representing the version(s)
it supports.**
If there is no default version string for the application,
it should return an error if the provided version is an empty string.
- `OnChanOpenTry` will verify the relayer-chosen parameters along with the
counterparty-chosen version string and perform custom `TRY` logic.
If the relayer-chosen parameters
are invalid, the callback must return an error to abort the handshake.
If the counterparty-chosen version is not compatible with this module's
supported versions, the callback must return an error to abort the handshake.
If the versions are compatible, the try callback must select the final version
string and return it to core IBC.
`OnChanOpenTry` may also perform custom initialization logic.
- `OnChanOpenAck` will error if the counterparty selected version string
is invalid and abort the handshake. It may also perform custom ACK logic.
Versions must be strings but can implement any versioning structure. If your application plans to
have linear releases then semantic versioning is recommended. If your application plans to release
various features in between major releases then it is advised to use the same versioning scheme
as IBC. This versioning scheme specifies a version identifier and compatible feature set with
that identifier. Valid version selection includes selecting a compatible version identifier with
a subset of features supported by your application for that version. The struct used for this
scheme can be found in [03-connection/types](https://github.com/cosmos/ibc-go/blob/main/modules/core/03-connection/types/version.go#L16).
Since the version type is a string, applications can do simple version verification
via string matching, or they can use the already implemented versioning system and pass the proto-encoded
version into each handshake call as necessary.
ICS20 currently implements basic string matching with a single supported version.
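The basic string matching approach can be sketched as follows; the `supportedVersions` list and the `negotiateAppVersion` helper are hypothetical names, not part of ibc-go:

```go
package main

import (
	"errors"
	"fmt"
)

// supportedVersions is a hypothetical list of versions this module accepts.
var supportedVersions = []string{"myapp-1"}

// negotiateAppVersion does simple string matching: the proposed version is
// accepted only if it appears in the supported list.
func negotiateAppVersion(counterpartyVersion string) (string, error) {
	for _, v := range supportedVersions {
		if v == counterpartyVersion {
			return v, nil
		}
	}
	return "", errors.New("unsupported version: " + counterpartyVersion)
}

func main() {
	v, err := negotiateAppVersion("myapp-1")
	fmt.Println(v, err) // myapp-1 <nil>
}
```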
## Packet callbacks
Just as IBC expects modules to implement callbacks for channel handshakes, it also expects modules to implement callbacks for handling the packet flow through a channel, as defined in the `IBCModule` interface.
Once a module A and module B are connected to each other, relayers can start relaying packets and acknowledgements back and forth on the channel.
![IBC packet flow diagram](./images/packet_flow.png)
Briefly, a successful packet flow works as follows:
1. Module A sends a packet through the IBC module
2. The packet is received by module B
3. If module B writes an acknowledgement of the packet then module A will process the acknowledgement
4. If the packet is not successfully received before the timeout, then module A processes the packet's timeout.
### Sending packets
Modules **do not send packets through callbacks**, since the modules initiate the action of sending packets to the IBC module, as opposed to other parts of the packet flow where messages sent to the IBC
module must trigger execution on the port-bound module through the use of callbacks. Thus, to send a packet a module simply needs to call `SendPacket` on the `IBCChannelKeeper`.
> Note that some of the code below is *pseudo code*, indicating what actions need to happen but leaving the concrete implementation up to the developer, e.g. the `EncodePacketData(customPacketData)` function.
```go
// Sending custom application packet data
data := EncodePacketData(customPacketData)
// Send packet to IBC
sequence, err := IBCChannelKeeper.SendPacket(
ctx,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data,
)
```
### Receiving packets
To handle receiving packets, the module must implement the `OnRecvPacket` callback. This gets
invoked by the IBC module after the packet has been proved valid and correctly processed by the IBC
keepers. Thus, the `OnRecvPacket` callback only needs to worry about making the appropriate state
changes given the packet data without worrying about whether the packet is valid or not.
Modules may return to the IBC handler an acknowledgement which implements the `Acknowledgement` interface.
The IBC handler will then commit this acknowledgement of the packet so that a relayer may relay the
acknowledgement back to the sender module.
The state changes that occurred during this callback will only be written if:
- the acknowledgement was successful as indicated by the `Success()` function of the acknowledgement
- the acknowledgement returned is nil, indicating that an asynchronous process is occurring
NOTE: Applications which process asynchronous acknowledgements must handle reverting state changes
when appropriate. Any state changes that occurred during the `OnRecvPacket` callback will be written
for asynchronous acknowledgements.
> Note that some of the code below is *pseudo code*, indicating what actions need to happen but leaving the concrete implementation up to the developer, e.g. the `DecodePacketData(packet.Data)` function.
```go
func (im IBCModule) OnRecvPacket(
ctx sdk.Context,
packet channeltypes.Packet,
) ibcexported.Acknowledgement {
// Decode the packet data
packetData := DecodePacketData(packet.Data)
// do application state changes based on packet data and return the acknowledgement
// NOTE: The acknowledgement will indicate to the IBC handler if the application
// state changes should be written via the `Success()` function. Application state
// changes are only written if the acknowledgement is successful or the acknowledgement
// returned is nil indicating that an asynchronous acknowledgement will occur.
ack := processPacket(ctx, packet, packetData)
return ack
}
```
Reminder, the `Acknowledgement` interface:
```go
// Acknowledgement defines the interface used to return
// acknowledgements in the OnRecvPacket callback.
type Acknowledgement interface {
Success() bool
Acknowledgement() []byte
}
```
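A minimal concrete implementation of this interface might look like the following sketch; the `resultAck` type and its byte layout are hypothetical, not the format ibc-go uses:

```go
package main

import "fmt"

// Acknowledgement mirrors the interface shown above.
type Acknowledgement interface {
	Success() bool
	Acknowledgement() []byte
}

// resultAck is a hypothetical acknowledgement carrying either a result or an error.
type resultAck struct {
	result []byte
	err    string
}

// Success reports whether packet processing succeeded.
func (a resultAck) Success() bool { return a.err == "" }

// Acknowledgement returns the bytes committed on-chain for relaying.
func (a resultAck) Acknowledgement() []byte {
	if a.err != "" {
		return []byte("error:" + a.err)
	}
	return a.result
}

func main() {
	var ack Acknowledgement = resultAck{result: []byte("ok")}
	fmt.Println(ack.Success(), string(ack.Acknowledgement())) // true ok
}
```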
### Acknowledging packets
After a module writes an acknowledgement, a relayer can relay back the acknowledgement to the sender module. The sender module can
then process the acknowledgement using the `OnAcknowledgementPacket` callback. The contents of the
acknowledgement is entirely up to the modules on the channel (just like the packet data); however, it
may often contain information on whether the packet was successfully processed along
with some additional data that could be useful for remediation if the packet processing failed.
Since the modules are responsible for agreeing on an encoding/decoding standard for packet data and
acknowledgements, IBC will pass in the acknowledgements as `[]byte` to this callback. The callback
is responsible for decoding the acknowledgement and processing it.
> Note that some of the code below is *pseudo code*, indicating what actions need to happen but leaving the concrete implementation up to the developer, e.g. the `DecodeAcknowledgement(acknowledgement)` and `processAck(ack)` functions.
```go
func (im IBCModule) OnAcknowledgementPacket(
ctx sdk.Context,
packet channeltypes.Packet,
acknowledgement []byte,
) (*sdk.Result, error) {
// Decode acknowledgement
ack := DecodeAcknowledgement(acknowledgement)
// process ack
res, err := processAck(ack)
return res, err
}
```
### Timeout packets
If the timeout for a packet is reached before the packet is successfully received or the
counterparty channel end is closed before the packet is successfully received, then the receiving
chain can no longer process it. Thus, the sending chain must process the timeout using
`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is
indeed valid, so our module only needs to implement the state machine logic for what to do once a
timeout is reached and the packet can no longer be received.
```go
func (im IBCModule) OnTimeoutPacket(
ctx sdk.Context,
packet channeltypes.Packet,
) (*sdk.Result, error) {
// do custom timeout logic
}
```
### Optional interfaces
The following interface is optional and MAY be implemented by an `IBCModule`.
#### PacketDataUnmarshaler
The `PacketDataUnmarshaler` interface is defined as follows:
```go
// PacketDataUnmarshaler defines an optional interface which allows a middleware to
// request the packet data to be unmarshaled by the base application.
type PacketDataUnmarshaler interface {
// UnmarshalPacketData unmarshals the packet data into a concrete type
// ctx, portID, channelID are provided as arguments, so that (if needed)
// the packet data can be unmarshaled based on the channel version.
// The version of the underlying app is also returned.
UnmarshalPacketData(ctx sdk.Context, portID, channelID string, bz []byte) (interface{}, string, error)
}
```
The implementation of `UnmarshalPacketData` should unmarshal the bytes into the packet data type defined for an IBC stack.
The base application of an IBC stack should unmarshal the bytes into its packet data type, while a middleware may simply defer the call to the underlying application.
This interface allows middlewares to unmarshal a packet data in order to make use of interfaces the packet data type implements.
For example, the callbacks middleware makes use of this function to access packet data types which implement the `PacketData` and `PacketDataProvider` interfaces.
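The "middleware defers to the underlying application" pattern can be sketched as follows. The stand-in `Context` type and the JSON-based base application are hypothetical, used only so the sketch compiles without ibc-go:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Context is a stand-in for sdk.Context.
type Context struct{}

// PacketDataUnmarshaler mirrors the optional interface shown above.
type PacketDataUnmarshaler interface {
	UnmarshalPacketData(ctx Context, portID, channelID string, bz []byte) (interface{}, string, error)
}

// baseApp is a hypothetical base application whose packet data is JSON.
type baseApp struct{}

type basePacketData struct {
	Amount string `json:"amount"`
}

// UnmarshalPacketData unmarshals bytes into the base app's packet data type
// and returns the app's version.
func (baseApp) UnmarshalPacketData(_ Context, _, _ string, bz []byte) (interface{}, string, error) {
	var pd basePacketData
	if err := json.Unmarshal(bz, &pd); err != nil {
		return nil, "", err
	}
	return pd, "base-1", nil
}

// middleware simply defers the call to the underlying application.
type middleware struct {
	app PacketDataUnmarshaler
}

func (m middleware) UnmarshalPacketData(ctx Context, portID, channelID string, bz []byte) (interface{}, string, error) {
	return m.app.UnmarshalPacketData(ctx, portID, channelID, bz)
}

func main() {
	mw := middleware{app: baseApp{}}
	pd, version, _ := mw.UnmarshalPacketData(Context{}, "transfer", "channel-0", []byte(`{"amount":"100"}`))
	fmt.Println(pd, version)
}
```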
---
title: Bind ports
sidebar_label: Bind ports
sidebar_position: 3
slug: /ibc/apps/bindports
---
# Bind ports
:::note Synopsis
Learn what changes to make to bind modules to their ports on initialization.
:::
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
Currently, ports must be bound on app initialization. In order to bind modules to their respective ports on initialization, the following needs to be implemented:
> Note that `portID` does not refer to a certain numerical ID, like `localhost:8080` with a `portID` 8080. Rather, it refers to the application module the port binds to. For IBC modules built with the Cosmos SDK, it defaults to the module's name, and for CosmWasm contracts it defaults to the contract address.
1. Add port ID to the `GenesisState` proto definition:
```protobuf
message GenesisState {
string port_id = 1;
// other fields
}
```
2. Add port ID as a key to the module store:
```go
// x/<moduleName>/types/keys.go
const (
// ModuleName defines the IBC Module name
ModuleName = "moduleName"
// Version defines the current version the IBC
// module supports
Version = "moduleVersion-1"
// PortID is the default port id that module binds to
PortID = "portID"
// ...
)
```
3. Add port ID to `x/<moduleName>/types/genesis.go`:
```go
// in x/<moduleName>/types/genesis.go
// DefaultGenesisState returns a GenesisState with "portID" as the default PortID.
func DefaultGenesisState() *GenesisState {
return &GenesisState{
PortId: PortID,
// additional k-v fields
}
}
// Validate performs basic genesis state validation returning an error upon any
// failure.
func (gs GenesisState) Validate() error {
if err := host.PortIdentifierValidator(gs.PortId); err != nil {
return err
}
//additional validations
return gs.Params.Validate()
}
```
4. Set the port in the module keeper for `InitGenesis`:
:::note
The capability module has been removed, so port binding has also changed.
:::
```go
// SetPort sets the portID for the transfer module. Used in InitGenesis
func (k Keeper) SetPort(ctx sdk.Context, portID string) {
store := k.storeService.OpenKVStore(ctx)
if err := store.Set(types.PortKey, []byte(portID)); err != nil {
panic(err)
}
}
// Initialize any other module state, like params with SetParams.
func (k Keeper) SetParams(ctx sdk.Context, params types.Params) {
store := k.storeService.OpenKVStore(ctx)
bz := k.cdc.MustMarshal(&params)
if err := store.Set([]byte(types.ParamsKey), bz); err != nil {
panic(err)
}
}
// ...
```
The module is set to the desired port. The setting and sealing happen during creation of the IBC router.
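For intuition about what `host.PortIdentifierValidator` checks in step 3, here is a simplified sketch of ICS-024-style identifier validation; the exact rules live in ibc-go's `24-host` package, and the helper name here is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// validID approximates the ICS-024 identifier character set.
var validID = regexp.MustCompile(`^[a-zA-Z0-9._+\-#\[\]<>]+$`)

// validatePortID is a simplified sketch: port identifiers must be between
// 2 and 128 characters and use only the allowed character set.
func validatePortID(portID string) error {
	if len(portID) < 2 || len(portID) > 128 {
		return errors.New("port identifier has invalid length")
	}
	if !validID.MatchString(portID) {
		return errors.New("port identifier contains invalid characters")
	}
	return nil
}

func main() {
	fmt.Println(validatePortID("transfer")) // <nil>
	fmt.Println(validatePortID("x"))
}
```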
---
title: Keeper
sidebar_label: Keeper
sidebar_position: 4
slug: /ibc/apps/keeper
---
# Keeper
:::note Synopsis
Learn how to implement the IBC Module keeper. Relevant for IBC classic and v2
:::
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
In the previous sections, on channel handshake callbacks and port binding in `InitGenesis`, a reference was made to keeper methods that need to be implemented when creating a custom IBC module. Below is an overview of how to define an IBC module's keeper.
> Note that some code has been left out for clarity, to get a full code overview, please refer to [the transfer module's keeper in the ibc-go repo](https://github.com/cosmos/ibc-go/blob/main/modules/apps/transfer/keeper/keeper.go).
```go
// Keeper defines the IBC app module keeper
type Keeper struct {
storeKey sdk.StoreKey
cdc codec.BinaryCodec
paramSpace paramtypes.Subspace
channelKeeper types.ChannelKeeper
portKeeper types.PortKeeper
// ... additional according to custom logic
}
// NewKeeper creates a new IBC app module Keeper instance
func NewKeeper(
// args
) Keeper {
// ...
return Keeper{
cdc: cdc,
storeKey: key,
paramSpace: paramSpace,
channelKeeper: channelKeeper,
portKeeper: portKeeper,
// ... additional according to custom logic
}
}
// GetPort returns the portID for the IBC app module. Used in ExportGenesis
func (k Keeper) GetPort(ctx sdk.Context) string {
store := ctx.KVStore(k.storeKey)
return string(store.Get(types.PortKey))
}
// SetPort sets the portID for the IBC app module. Used in InitGenesis
func (k Keeper) SetPort(ctx sdk.Context, portID string) {
store := ctx.KVStore(k.storeKey)
store.Set(types.PortKey, []byte(portID))
}
// ... additional according to custom logic
```
---
title: Define packets and acks
sidebar_label: Define packets and acks
sidebar_position: 5
slug: /ibc/apps/packets_acks
---
# Define packets and acks
:::note Synopsis
Learn how to define custom packet and acknowledgement structs and how to encode and decode them.
:::
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
## Custom packets
Modules connected by a channel must agree on what application data they are sending over the
channel, as well as how they will encode/decode it. This process is not specified by IBC as it is up
to each application module to determine how to implement this agreement. However, for most
applications this will happen as a version negotiation during the channel handshake. While more
complex version negotiation is possible to implement inside the channel opening handshake, a very
simple version negotiation is implemented in the [ibc-transfer module](https://github.com/cosmos/ibc-go/tree/main/modules/apps/transfer/module.go).
Thus, a module must define its custom packet data structure, along with a well-defined way to
encode and decode it to and from `[]byte`.
```go
// Custom packet data defined in application module
type CustomPacketData struct {
// Custom fields ...
}
EncodePacketData(packetData CustomPacketData) []byte {
// encode packetData to bytes
}
DecodePacketData(encoded []byte) (CustomPacketData) {
// decode from bytes to packet data
}
```
> Note that the `CustomPacketData` struct is defined in the proto definition and then compiled by the protobuf compiler.
Then a module must encode its packet data before sending it through IBC.
```go
// Sending custom application packet data
data := EncodePacketData(customPacketData)
// Send packet to IBC
sequence, err := IBCChannelKeeper.SendPacket(
ctx,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data,
)
```
A module receiving a packet must decode the `PacketData` into a structure it expects so that it can
act on it.
```go
// Receiving custom application packet data (in OnRecvPacket)
packetData := DecodePacketData(packet.Data)
// handle received custom packet data
```
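For illustration, the `EncodePacketData`/`DecodePacketData` helpers above could be implemented with JSON as follows; the `CustomPacketData` fields here are hypothetical, and real modules typically use proto-generated types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CustomPacketData is a hypothetical packet data type; real modules generate
// this struct from a proto definition.
type CustomPacketData struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// EncodePacketData encodes the packet data to bytes (JSON here for brevity;
// proto marshalling is equally valid as long as both channel ends agree).
func EncodePacketData(pd CustomPacketData) ([]byte, error) {
	return json.Marshal(pd)
}

// DecodePacketData decodes bytes back into packet data.
func DecodePacketData(bz []byte) (CustomPacketData, error) {
	var pd CustomPacketData
	err := json.Unmarshal(bz, &pd)
	return pd, err
}

func main() {
	bz, _ := EncodePacketData(CustomPacketData{Sender: "alice", Amount: 100})
	pd, _ := DecodePacketData(bz)
	fmt.Println(pd.Sender, pd.Amount) // alice 100
}
```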
### Optional interfaces
The following interfaces are optional and MAY be implemented by a custom packet type.
They allow middlewares such as callbacks to access information stored within the packet data.
#### PacketData interface
The `PacketData` interface is defined as follows:
```go
// PacketData defines an optional interface which an application's packet data structure may implement.
type PacketData interface {
// GetPacketSender returns the sender address of the packet data.
// If the packet sender is unknown or undefined, an empty string should be returned.
GetPacketSender(sourcePortID string) string
}
```
The implementation of `GetPacketSender` should return the sender of the packet data.
If the packet sender is unknown or undefined, an empty string should be returned.
This interface is intended to give IBC middlewares access to the packet sender of a packet data type.
#### PacketDataProvider interface
The `PacketDataProvider` interface is defined as follows:
```go
// PacketDataProvider defines an optional interfaces for retrieving custom packet data stored on behalf of another application.
// An existing problem in the IBC middleware design is the inability for a middleware to define its own packet data type and insert packet sender provided information.
// A short term solution was introduced into several application's packet data to utilize a memo field to carry this information on behalf of another application.
// This interfaces standardizes that behaviour. Upon realization of the ability for middleware's to define their own packet data types, this interface will be deprecated and removed with time.
type PacketDataProvider interface {
// GetCustomPacketData returns the packet data held on behalf of another application.
// The name the information is stored under should be provided as the key.
// If no custom packet data exists for the key, nil should be returned.
GetCustomPacketData(key string) interface{}
}
```
The implementation of `GetCustomPacketData` should return packet data held on behalf of another application (if present and supported).
If this functionality is not supported, it should return nil. Otherwise it should return the packet data associated with the provided key.
This interface gives IBC applications access to the packet data information embedded into the base packet data type.
Within transfer and interchain accounts, the embedded packet data is stored within the Memo field.
Once all IBC applications within an IBC stack are capable of creating/maintaining their own packet data type's, this interface function will be deprecated and removed.
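A memo-based implementation of `GetCustomPacketData`, in the style transfer uses, might look like this sketch; the type name and memo contents are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// transferLikePacketData is a hypothetical packet data type with a Memo field,
// mimicking how transfer stores data on behalf of other applications.
type transferLikePacketData struct {
	Memo string `json:"memo"`
}

// GetCustomPacketData parses the memo as JSON and returns the entry stored
// under the given key, or nil if the memo is empty, not JSON, or the key is absent.
func (d transferLikePacketData) GetCustomPacketData(key string) interface{} {
	if d.Memo == "" {
		return nil
	}
	jsonObject := make(map[string]interface{})
	if err := json.Unmarshal([]byte(d.Memo), &jsonObject); err != nil {
		return nil
	}
	return jsonObject[key]
}

func main() {
	pd := transferLikePacketData{Memo: `{"src_callback":{"address":"addr"}}`}
	fmt.Println(pd.GetCustomPacketData("src_callback"))
	fmt.Println(pd.GetCustomPacketData("missing"))
}
```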
## Acknowledgements
Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
will be written once the packet has been processed by the application which may be well after the packet receipt.
NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
for a packet as soon as it has been received from the IBC module.
This acknowledgement can then be relayed back to the original sender chain, which can take action
depending on the contents of the acknowledgement.
Just as packet data was opaque to IBC, acknowledgements are similarly opaque. Modules must pass and
receive acknowledgements with the IBC modules as byte strings.
Thus, modules must agree on how to encode/decode acknowledgements. The process of creating an
acknowledgement struct along with encoding and decoding it, is very similar to the packet data
example above. [ICS 04](https://github.com/cosmos/ibc/blob/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope)
specifies a recommended format for acknowledgements. This acknowledgement type can be imported from
[channel types](https://github.com/cosmos/ibc-go/tree/main/modules/core/04-channel/types).
While modules may choose arbitrary acknowledgement structs, a default acknowledgement type is provided by IBC [here](https://github.com/cosmos/ibc-go/blob/main/proto/ibc/core/channel/v1/channel.proto):
```protobuf
// Acknowledgement is the recommended acknowledgement format to be used by
// app-specific protocols.
// NOTE: The field numbers 21 and 22 were explicitly chosen to avoid accidental
// conflicts with other protobuf message formats used for acknowledgements.
// The first byte of any message with this format will be the non-ASCII values
// `0xaa` (result) or `0xb2` (error). Implemented as defined by ICS:
// https://github.com/cosmos/ibc/tree/master/spec/core/ics-004-channel-and-packet-semantics#acknowledgement-envelope
message Acknowledgement {
// response contains either a result or an error and must be non-empty
oneof response {
bytes result = 21;
string error = 22;
}
}
```
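The stated first-byte values follow directly from protobuf wire encoding: a field key byte is `(field_number << 3) | wire_type`, and both `bytes` and `string` fields are length-delimited (wire type 2). A quick check:

```go
package main

import "fmt"

func main() {
	// Protobuf key byte: (field_number << 3) | wire_type.
	// Both bytes and string fields are length-delimited (wire type 2).
	resultKey := byte(21<<3 | 2)
	errorKey := byte(22<<3 | 2)
	fmt.Printf("result: %#x, error: %#x\n", resultKey, errorKey) // result: 0xaa, error: 0xb2
}
```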
---
title: Routing
sidebar_label: Routing
sidebar_position: 6
slug: /ibc/apps/routing
---
# Routing
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC default integration](../02-integration.md)
:::
:::note Synopsis
Learn how to hook a route to the IBC router for the custom IBC module.
:::
As mentioned above, modules must implement the `IBCModule` interface (which contains both channel
handshake callbacks for IBC classic only, and packet handling callbacks for IBC classic and v2). The concrete implementation of this interface
must be registered with the module name as a route on the IBC `Router`.
```go
// app.go
func NewApp(...args) *App {
// ...
// Create static IBC router, add module routes, then set and seal it
ibcRouter := port.NewRouter()
ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferModule)
// Note: moduleCallbacks must implement IBCModule interface
ibcRouter.AddRoute(moduleName, moduleCallbacks)
// Setting Router will finalize all routes by sealing router
// No more routes can be added
app.IBCKeeper.SetRouter(ibcRouter)
// ...
}
```
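The add-then-seal behaviour of the router can be sketched with a minimal stand-in; this is a simplified model, not ibc-go's actual `Router` implementation:

```go
package main

import "fmt"

// IBCModule is a stand-in for porttypes.IBCModule.
type IBCModule interface{}

// Router is a simplified sketch of the IBC port router: routes can be added
// until the router is sealed, after which additions panic.
type Router struct {
	routes map[string]IBCModule
	sealed bool
}

func NewRouter() *Router {
	return &Router{routes: make(map[string]IBCModule)}
}

// AddRoute registers a module under its name; duplicate or post-seal
// additions panic.
func (r *Router) AddRoute(module string, m IBCModule) *Router {
	if r.sealed {
		panic("router already sealed")
	}
	if _, ok := r.routes[module]; ok {
		panic("route already exists: " + module)
	}
	r.routes[module] = m
	return r
}

// Seal finalizes the routes; in the real keeper this happens in SetRouter.
func (r *Router) Seal() { r.sealed = true }

func (r *Router) GetRoute(module string) (IBCModule, bool) {
	m, ok := r.routes[module]
	return m, ok
}

func main() {
	router := NewRouter()
	router.AddRoute("transfer", struct{}{})
	router.Seal()
	_, found := router.GetRoute("transfer")
	fmt.Println(found) // true
}
```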
{
"label": "Applications",
"position": 3,
"link": null
}
---
title: IBC middleware
sidebar_label: IBC middleware
sidebar_position: 1
slug: /ibc/middleware/overview
---
# IBC middleware
:::note Synopsis
Learn how to write your own custom middleware to wrap an IBC application, and understand how to hook different middleware to IBC base applications to form different IBC application stacks
:::
This documentation serves as a guide for middleware developers who want to write their own middleware and for chain developers who want to use IBC middleware on their chains.
After going through the overview they can consult respectively:
- [documentation on developing custom middleware](02-develop.md)
- [documentation on integrating middleware into a stack on a chain](03-integration.md)
:::note
## Pre-requisite readings
- [IBC Overview](../01-overview.md)
- [IBC Integration](../02-integration.md)
- [IBC Application Developer Guide](../03-apps/01-apps.md)
:::
## Why middleware?
IBC applications are designed to be self-contained modules that implement their own application-specific logic through a set of interfaces with the core IBC handlers. These core IBC handlers, in turn, are designed to enforce the correctness properties of IBC (transport, authentication, ordering) while delegating all application-specific handling to the IBC application modules. **However, there are cases where some functionality may be desired by many applications, yet not appropriate to place in core IBC.**
Middleware allows developers to define these extensions as separate modules that can wrap over the base application. This middleware can thus perform its own custom logic, and pass data into the application so that it may run its logic without being aware of the middleware's existence. This allows both the application and the middleware to implement their own isolated logic while still being able to run as part of a single packet flow.
## Definitions
`Middleware`: A self-contained module that sits between core IBC and an underlying IBC application during packet execution. All messages between core IBC and underlying application must flow through middleware, which may perform its own custom logic.
`Underlying Application`: An underlying application is the application that is directly connected to the middleware in question. This underlying application may itself be middleware that is chained to a base application.
`Base Application`: A base application is an IBC application that does not contain any middleware. It may be nested by 0 or multiple middleware to form an application stack.
`Application Stack (or stack)`: A stack is the complete set of application logic (middleware(s) + base application) that gets connected to core IBC. A stack may be just a base application, or it may be a series of middlewares that nest a base application.
The diagram below gives an overview of a middleware stack consisting of two middleware (one stateless, the other stateful).
![middleware-stack.png](./images/middleware-stack.png)
Keep in mind that:
- **The order of the middleware matters** (more on how to correctly define your stack in the code will follow in the [integration section](03-integration.md)).
- Depending on the type of message, it will either be passed on from the base application up the middleware stack to core IBC or down the stack in the reverse situation (handshake and packet callbacks).
- IBC middleware will wrap over an underlying IBC application and sits between core IBC and the application. It has complete control in modifying any message coming from IBC to the application, and any message coming from the application to core IBC. **Middleware must be completely trusted by chain developers who wish to integrate them**, as this gives them complete flexibility in modifying the application(s) they wrap.
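How nesting determines call order can be sketched with minimal stand-in types; the names `mwA`, `mwB`, and the single-method `IBCModule` here are hypothetical simplifications:

```go
package main

import "fmt"

// IBCModule is a one-method stand-in for the real ICS-26 interface.
type IBCModule interface {
	OnRecvPacket(data []byte) string
}

// baseApp is a hypothetical base application at the bottom of the stack.
type baseApp struct{}

func (baseApp) OnRecvPacket(data []byte) string { return "app(" + string(data) + ")" }

// middleware wraps an underlying module and runs its logic before delegating,
// mirroring how messages from core IBC flow down the stack.
type middleware struct {
	name string
	app  IBCModule
}

func (m middleware) OnRecvPacket(data []byte) string {
	// custom middleware logic would run here, before the underlying callback
	return m.name + "->" + m.app.OnRecvPacket(data)
}

func main() {
	// stack = mwA(mwB(baseApp)); mwA sits closest to core IBC
	stack := middleware{name: "mwA", app: middleware{name: "mwB", app: baseApp{}}}
	fmt.Println(stack.OnRecvPacket([]byte("pkt"))) // mwA->mwB->app(pkt)
}
```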
---
title: Create a custom IBC middleware
sidebar_label: Create a custom IBC middleware
sidebar_position: 3
slug: /ibc/middleware/develop
---
# Create a custom IBC middleware
IBC middleware will wrap over an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application.
:::warning
Middleware developers must use the same serialization and deserialization method as ibc-go's codec: `transfertypes.ModuleCdc.[Must]MarshalJSON`.
:::
For middleware builders this means:
```go
import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"

// use transfertypes.ModuleCdc.[Must]MarshalJSON when marshalling acknowledgements
func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) {
	return transfertypes.ModuleCdc.MarshalJSON(&ack)
}
```
The interfaces a middleware must implement are found [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go).
```go
// Middleware implements the ICS26 Module interface
type Middleware interface {
IBCModule // middleware has access to an underlying application which may be wrapped by more middleware
ICS4Wrapper // middleware has access to ICS4Wrapper which may be core IBC Channel Handler or a higher-level middleware that wraps this middleware.
}
```
An `IBCMiddleware` struct implementing the `Middleware` interface can be defined with its constructor as follows:
```go
// @ x/module_name/ibc_middleware.go
// IBCMiddleware implements the ICS26 callbacks and ICS4Wrapper for the fee middleware given the
// fee keeper and the underlying application.
type IBCMiddleware struct {
app porttypes.IBCModule
keeper keeper.Keeper
}
// NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
func NewIBCMiddleware(app porttypes.IBCModule, k keeper.Keeper) IBCMiddleware {
return IBCMiddleware{
app: app,
keeper: k,
}
}
```
## Implement `IBCModule` interface
`IBCMiddleware` is a struct that implements the [ICS-26 `IBCModule` interface (`porttypes.IBCModule`)](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L14-L107). It is recommended to separate these callbacks into a separate file `ibc_middleware.go`.
> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications.
As will be mentioned in the [integration section](03-integration.md), this struct should be different than the struct that implements `AppModule` in case the middleware maintains its own internal state and processes separate SDK messages.
The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback.
> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases.
The `IBCModule` interface consists of the channel handshake callbacks and packet callbacks. Most of the custom logic will be performed in the packet callbacks; in the case of the channel handshake callbacks, introducing the middleware requires careful consideration of the version negotiation.
### Channel handshake callbacks
#### Version negotiation
In the case where the IBC middleware expects to speak to a compatible IBC middleware on the counterparty chain, they must use the channel handshake to negotiate the middleware version without interfering in the version negotiation of the underlying application.
Middleware accomplishes this by formatting the version in a JSON-encoded string containing the middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application. The format of the version is specified in ICS-30 as the following:
```json
{
"<middleware_version_key>": "<middleware_version_value>",
"app_version": "<application_version_value>"
}
```
The `<middleware_version_key>` key in the JSON struct should be replaced by the actual name of the key for the corresponding middleware (e.g. `fee_version`).
During the handshake callbacks, the middleware can unmarshal the version string and retrieve the middleware and application versions. It can do its negotiation logic on `<middleware_version_value>`, and pass the `<application_version_value>` to the underlying application.
> **NOTE**: Middleware that does not need to negotiate with a counterparty middleware on the remote stack will not implement the version unmarshalling and negotiation, and will simply perform its own custom logic on the callbacks without relying on the counterparty behaving similarly.
#### `OnChanOpenInit`
```go
func (im IBCMiddleware) OnChanOpenInit(
ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID string,
channelID string,
counterparty channeltypes.Counterparty,
version string,
) (string, error) {
if version != "" {
// try to unmarshal JSON-encoded version string and pass
// the app-specific version to app callback.
// otherwise, pass version directly to app callback.
metadata, err := Unmarshal(version)
if err != nil {
// Since it is valid for fee version to not be specified,
// the above middleware version may be for another middleware.
// Pass the entire version string onto the underlying application.
return im.app.OnChanOpenInit(
ctx,
order,
connectionHops,
portID,
channelID,
counterparty,
version,
)
}
} else {
metadata = {
// set middleware version to default value
MiddlewareVersion: defaultMiddlewareVersion,
// allow application to return its default version
AppVersion: "",
}
}
doCustomLogic()
// if the version string is empty, OnChanOpenInit is expected to return
// a default version string representing the version(s) it supports
appVersion, err := im.app.OnChanOpenInit(
ctx,
order,
connectionHops,
portID,
channelID,
counterparty,
metadata.AppVersion, // note we only pass app version here
)
if err != nil {
return "", err
}
version := constructVersion(metadata.MiddlewareVersion, appVersion)
return version, nil
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L36-L83) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnChanOpenTry`
```go
func (im IBCMiddleware) OnChanOpenTry(
ctx sdk.Context,
order channeltypes.Order,
connectionHops []string,
portID,
channelID string,
counterparty channeltypes.Counterparty,
counterpartyVersion string,
) (string, error) {
// try to unmarshal JSON-encoded version string and pass
// the app-specific version to app callback.
// otherwise, pass version directly to app callback.
cpMetadata, err := Unmarshal(counterpartyVersion)
if err != nil {
return app.OnChanOpenTry(
ctx,
order,
connectionHops,
portID,
channelID,
counterparty,
counterpartyVersion,
)
}
doCustomLogic()
// Call the underlying application's OnChanOpenTry callback.
// The try callback must select the final app-specific version string and return it.
appVersion, err := app.OnChanOpenTry(
ctx,
order,
connectionHops,
portID,
channelID,
counterparty,
cpMetadata.AppVersion, // note we only pass counterparty app version here
)
if err != nil {
return "", err
}
// negotiate final middleware version
middlewareVersion := negotiateMiddlewareVersion(cpMetadata.MiddlewareVersion)
version := constructVersion(middlewareVersion, appVersion)
return version, nil
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L88-L125) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnChanOpenAck`
```go
func (im IBCMiddleware) OnChanOpenAck(
ctx sdk.Context,
portID,
channelID string,
counterpartyChannelID string,
counterpartyVersion string,
) error {
// try to unmarshal JSON-encoded version string and pass
// the app-specific version to app callback.
// otherwise, pass version directly to app callback.
cpMetadata, err := Unmarshal(counterpartyVersion)
if err != nil {
return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, counterpartyVersion)
}
if !isCompatible(cpMetadata.MiddlewareVersion) {
return error
}
doCustomLogic()
// call the underlying application's OnChanOpenAck callback
return app.OnChanOpenAck(ctx, portID, channelID, counterpartyChannelID, cpMetadata.AppVersion)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L128-L153) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnChanOpenConfirm`
```go
func OnChanOpenConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
doCustomLogic()
return app.OnChanOpenConfirm(ctx, portID, channelID)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L156-L163) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnChanCloseInit`
```go
func OnChanCloseInit(
ctx sdk.Context,
portID,
channelID string,
) error {
doCustomLogic()
return app.OnChanCloseInit(ctx, portID, channelID)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L166-L188) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnChanCloseConfirm`
```go
func OnChanCloseConfirm(
ctx sdk.Context,
portID,
channelID string,
) error {
doCustomLogic()
return app.OnChanCloseConfirm(ctx, portID, channelID)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L191-L213) for an example implementation of this callback for the ICS-29 Fee Middleware module.
### Packet callbacks
The packet callbacks, just like the handshake callbacks, wrap the application's packet callbacks, and they are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide range of use cases, as a simple base application like token-transfer can be transformed for a variety of purposes by combining it with custom middleware.
#### `OnRecvPacket`
```go
func (im IBCMiddleware) OnRecvPacket(
ctx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
) ibcexported.Acknowledgement {
doCustomLogic(packet)
ack := app.OnRecvPacket(ctx, packet, relayer)
doCustomLogic(ack) // middleware may modify outgoing ack
return ack
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L217-L238) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnAcknowledgementPacket`
```go
func (im IBCMiddleware) OnAcknowledgementPacket(
ctx sdk.Context,
packet channeltypes.Packet,
acknowledgement []byte,
relayer sdk.AccAddress,
) error {
doCustomLogic(packet, ack)
return app.OnAcknowledgementPacket(ctx, packet, ack, relayer)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L242-L293) for an example implementation of this callback for the ICS-29 Fee Middleware module.
#### `OnTimeoutPacket`
```go
func (im IBCMiddleware) OnTimeoutPacket(
ctx sdk.Context,
packet channeltypes.Packet,
relayer sdk.AccAddress,
) error {
doCustomLogic(packet)
return app.OnTimeoutPacket(ctx, packet, relayer)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/ibc_middleware.go#L297-L335) for an example implementation of this callback for the ICS-29 Fee Middleware module.
## ICS-04 wrappers
Middleware must also wrap ICS-04 so that any communication from the application to the `channelKeeper` goes through the middleware first. Similar to the packet callbacks, the middleware may modify outgoing acknowledgements and packets in any way it wishes.
To ensure optimal generalisability, the `ICS4Wrapper` abstraction serves to abstract away whether a middleware is the topmost middleware (and thus directly calling into the ICS-04 `channelKeeper`) or itself being wrapped by another middleware.
Remember that middleware can be stateful or stateless. When defining the stateful middleware's keeper, the `ics4Wrapper` field is included. Then the appropriate keeper can be passed when instantiating the middleware's keeper in `app.go`.
```go
type Keeper struct {
storeKey storetypes.StoreKey
cdc codec.BinaryCodec
ics4Wrapper porttypes.ICS4Wrapper
channelKeeper types.ChannelKeeper
portKeeper types.PortKeeper
...
}
```
For stateless middleware, the `ics4Wrapper` can be passed on directly without having to instantiate a keeper struct for the middleware.
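For example, a stateless middleware can hold the wrapper directly on its struct (pseudocode in the style of the snippets above; no keeper involved):

```go
// @ x/module_name/ibc_middleware.go
// IBCMiddleware for a stateless middleware: there is no keeper, so the
// ICS4Wrapper is stored directly on the struct.
type IBCMiddleware struct {
  app         porttypes.IBCModule
  ics4Wrapper porttypes.ICS4Wrapper
}

// NewIBCMiddleware creates the middleware given the underlying application
// and whatever sits above it (core IBC or another middleware).
func NewIBCMiddleware(app porttypes.IBCModule, ics4Wrapper porttypes.ICS4Wrapper) IBCMiddleware {
  return IBCMiddleware{app: app, ics4Wrapper: ics4Wrapper}
}
```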
[The interface](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/core/05-port/types/module.go#L110-L133) looks as follows:
```go
// This is implemented by ICS4 and all middleware that are wrapping a base application.
// The base application will call `sendPacket` or `writeAcknowledgement` of the middleware directly above them
// which will call the next middleware until it reaches the core IBC handler.
type ICS4Wrapper interface {
SendPacket(
ctx sdk.Context,
sourcePort string,
sourceChannel string,
timeoutHeight clienttypes.Height,
timeoutTimestamp uint64,
data []byte,
) (sequence uint64, err error)
WriteAcknowledgement(
ctx sdk.Context,
packet exported.PacketI,
ack exported.Acknowledgement,
) error
GetAppVersion(
ctx sdk.Context,
portID,
channelID string,
) (string, bool)
}
```
:warning: In the following paragraphs, the methods are presented in pseudocode which has been kept general, without stating whether the middleware is stateful or stateless. Remember that when the middleware is stateful, `ics4Wrapper` can be accessed through the keeper.
Check out the references provided for actual implementations; there, the `ics4Wrapper` methods in `ibc_middleware.go` simply call the equivalent keeper methods where the actual logic resides.
### `SendPacket`
```go
func SendPacket(
ctx sdk.Context,
sourcePort string,
sourceChannel string,
timeoutHeight clienttypes.Height,
timeoutTimestamp uint64,
appData []byte,
) (uint64, error) {
// middleware may modify data
data = doCustomLogic(appData)
return ics4Wrapper.SendPacket(
ctx,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data,
)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L17-L27) for an example implementation of this function for the ICS-29 Fee Middleware module.
### `WriteAcknowledgement`
```go
// only called for async acks
func WriteAcknowledgement(
ctx sdk.Context,
packet exported.PacketI,
ack exported.Acknowledgement,
) error {
// middleware may modify acknowledgement
ackBytes = doCustomLogic(ack)
return ics4Wrapper.WriteAcknowledgement(ctx, packet, ackBytes)
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L31-L55) for an example implementation of this function for the ICS-29 Fee Middleware module.
### `GetAppVersion`
```go
// middleware must return the underlying application version
func GetAppVersion(
ctx sdk.Context,
portID,
channelID string,
) (string, bool) {
version, found := ics4Wrapper.GetAppVersion(ctx, portID, channelID)
if !found {
return "", false
}
if !MiddlewareEnabled {
return version, true
}
// unwrap channel version
metadata, err := Unmarshal(version)
if err != nil {
panic(fmt.Errorf("unable to unmarshal version: %w", err))
}
return metadata.AppVersion, true
}
```
See [here](https://github.com/cosmos/ibc-go/blob/v7.0.0/modules/apps/29-fee/keeper/relay.go#L58-L74) for an example implementation of this function for the ICS-29 Fee Middleware module.

---
title: Create and integrate IBC v2 middleware
sidebar_label: Create and integrate IBC v2 middleware
sidebar_position: 2
slug: /ibc/middleware/developIBCv2
---
# Quick Navigation
1. [Create a custom IBC v2 middleware](#create-a-custom-ibc-v2-middleware)
2. [Implement `IBCModule` interface](#implement-ibcmodule-interface)
3. [WriteAckWrapper](#writeackwrapper)
4. [Integrate IBC v2 Middleware](#integrate-ibc-v2-middleware)
5. [Security Model](#security-model)
6. [Design Principles](#design-principles)
## Create a custom IBC v2 middleware
IBC middleware wraps an underlying IBC application (a base application or downstream middleware) and sits between core IBC and the base application.
:::warning
Middleware developers must use the same serialization and deserialization method as in ibc-go's codec: `transfertypes.ModuleCdc.[Must]MarshalJSON`.
:::
For middleware builders this means:
```go
import transfertypes "github.com/cosmos/ibc-go/v10/modules/apps/transfer/types"
transfertypes.ModuleCdc.[Must]MarshalJSON
func MarshalAsIBCDoes(ack channeltypes.Acknowledgement) ([]byte, error) {
return transfertypes.ModuleCdc.MarshalJSON(&ack)
}
```
The interfaces a middleware must implement are found in [core/api](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11). Note that this interface has changed from IBC classic.
An `IBCMiddleware` struct implementing the `Middleware` interface can be defined with its constructor as follows:
```go
// @ x/module_name/ibc_middleware.go
// IBCMiddleware implements the IBCv2 middleware interface
type IBCMiddleware struct {
app api.IBCModule // underlying app or middleware
writeAckWrapper api.WriteAcknowledgementWrapper // writes acknowledgement for an async acknowledgement
PacketDataUnmarshaler api.PacketDataUnmarshaler // optional interface
keeper types.Keeper // required for stateful middleware
// Keeper may include middleware specific keeper and the ChannelKeeperV2
// additional middleware specific fields
}
// NewIBCMiddleware creates a new IBCMiddleware given the keeper and underlying application
func NewIBCMiddleware(
app api.IBCModule,
writeAckWrapper api.WriteAcknowledgementWrapper,
k types.Keeper,
) IBCMiddleware {
return IBCMiddleware{
app: app,
writeAckWrapper: writeAckWrapper,
keeper: k,
}
}
```
:::note
The `ICS4Wrapper` has been removed in IBC v2 and there are no channel handshake callbacks; a `writeAckWrapper` has been added to the interface instead.
:::
## Implement `IBCModule` interface
`IBCMiddleware` is a struct that implements the [`IBCModule` interface (`api.IBCModule`)](https://github.com/cosmos/ibc-go/blob/main/modules/core/api/module.go#L11-L53). It is recommended to keep these callbacks in a separate file, `ibc_middleware.go`.
> Note how this is analogous to implementing the same interfaces for IBC applications that act as base applications.
The middleware must have access to the underlying application, and be called before it during all ICS-26 callbacks. It may execute custom logic during these callbacks, and then call the underlying application's callback.
> Middleware **may** choose not to call the underlying application's callback at all. Though these should generally be limited to error cases.
The `IBCModule` interface consists of the packet callbacks, where custom logic is performed.
### Packet callbacks
The packet callbacks are where the middleware performs most of its custom logic. The middleware may read the packet flow data and perform some additional packet handling, or it may modify the incoming data before it reaches the underlying application. This enables a wide range of use cases, as a simple base application like token-transfer can be transformed by combining it with custom middleware, for example one acting as a filter for which tokens can be sent and received.
#### `OnRecvPacket`
```go
func (im IBCMiddleware) OnRecvPacket(
ctx sdk.Context,
sourceClient string,
destinationClient string,
sequence uint64,
payload channeltypesv2.Payload,
relayer sdk.AccAddress,
) channeltypesv2.RecvPacketResult {
// Middleware may choose to do custom preprocessing logic before calling the underlying app OnRecvPacket
// Middleware may choose to error early and return a RecvPacketResult Failure
// Middleware may choose to modify the payload before passing on to OnRecvPacket though this
// should only be done to support very advanced custom behavior
// Middleware MUST NOT modify client identifiers and sequence
doCustomPreProcessLogic()
// call underlying app OnRecvPacket
recvResult := im.app.OnRecvPacket(ctx, sourceClient, destinationClient, sequence, payload, relayer)
if recvResult.Status == channeltypesv2.PacketStatus_Failure {
return recvResult
}
doCustomPostProcessLogic(recvResult) // middleware may modify recvResult
return recvResult
}
```
See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L161-L230) for an example implementation of this callback for the Callbacks Middleware module.
#### `OnAcknowledgementPacket`
```go
func (im IBCMiddleware) OnAcknowledgementPacket(
ctx sdk.Context,
sourceClient string,
destinationClient string,
sequence uint64,
acknowledgement []byte,
payload channeltypesv2.Payload,
relayer sdk.AccAddress,
) error {
// preprocessing logic may modify the acknowledgement before passing to
// the underlying app though this should only be done in advanced cases
// Middleware may return error early
// it MUST NOT change the identifiers of the clients or the sequence
doCustomPreProcessLogic(payload, acknowledgement)
// call underlying app OnAcknowledgementPacket
err := im.app.OnAcknowledgementPacket(
ctx, sourceClient, destinationClient, sequence,
acknowledgement, payload, relayer,
)
if err != nil {
return err
}
// may perform some post acknowledgement logic and return error here
return doCustomPostProcessLogic()
}
```
See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L236-L302) for an example implementation of this callback for the Callbacks Middleware module.
#### `OnTimeoutPacket`
```go
func (im IBCMiddleware) OnTimeoutPacket(
ctx sdk.Context,
sourceClient string,
destinationClient string,
sequence uint64,
payload channeltypesv2.Payload,
relayer sdk.AccAddress,
) error {
// Middleware may choose to do custom preprocessing logic before calling the underlying app OnTimeoutPacket
// Middleware may return error early
doCustomPreProcessLogic(payload)
// call underlying app OnTimeoutPacket
err := im.app.OnTimeoutPacket(
ctx, sourceClient, destinationClient, sequence,
payload, relayer,
)
if err != nil {
return err
}
// may perform some post timeout logic and return error here
return doCustomPostProcessLogic()
}
```
See [here](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L309-L367) for an example implementation of this callback for the Callbacks Middleware module.
### WriteAckWrapper
Middleware must also wrap the `WriteAcknowledgement` interface so that any acknowledgement written by the application passes through the middleware first. This allows middleware to modify or delay writing an acknowledgement before it is committed to the IBC store.
```go
// WithWriteAckWrapper sets the WriteAcknowledgementWrapper for the middleware.
func (im *IBCMiddleware) WithWriteAckWrapper(writeAckWrapper api.WriteAcknowledgementWrapper) {
im.writeAckWrapper = writeAckWrapper
}
// GetWriteAckWrapper returns the WriteAckWrapper
func (im *IBCMiddleware) GetWriteAckWrapper() api.WriteAcknowledgementWrapper {
return im.writeAckWrapper
}
```
### `WriteAcknowledgement`
This is where the middleware acknowledgement handling is finalised. An example is shown in the [callbacks middleware](https://github.com/cosmos/ibc-go/blob/main/modules/apps/callbacks/v2/ibc_middleware.go#L369-L454)
```go
// WriteAcknowledgement facilitates acknowledgment being written asynchronously
// The call stack flows from the IBC application to the IBC core handler
// Thus this function is called by the IBC app or a lower-level middleware
func (im IBCMiddleware) WriteAcknowledgement(
ctx sdk.Context,
clientID string,
sequence uint64,
ack channeltypesv2.Acknowledgement,
) error {
doCustomPreProcessLogic() // may modify acknowledgement
return im.writeAckWrapper.WriteAcknowledgement(
ctx, clientID, sequence, ack,
)
}
```
## Integrate IBC v2 Middleware
Middleware should be registered within the module manager in `app.go`.
The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware down to the bottom middleware and then to the application, while function calls from the application to IBC go from the bottom middleware up to the top middleware and then to the core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
### Example Integration
The example integration is detailed for an IBC v2 stack using transfer and the callbacks middleware.
```go
// Middleware Stacks
// initialising callbacks middleware
maxCallbackGas := uint64(10_000_000)
wasmStackIBCHandler := wasm.NewIBCHandler(app.WasmKeeper, app.IBCKeeper.ChannelKeeper, app.IBCKeeper.ChannelKeeper)
// Create the transferv2 stack with transfer and callbacks middleware
var ibcv2TransferStack ibcapi.IBCModule
ibcv2TransferStack = transferv2.NewIBCModule(app.TransferKeeper)
ibcv2TransferStack = ibccallbacksv2.NewIBCMiddleware(transferv2.NewIBCModule(app.TransferKeeper), app.IBCKeeper.ChannelKeeperV2, wasmStackIBCHandler, app.IBCKeeper.ChannelKeeperV2, maxCallbackGas)
// Create static IBC v2 router, add app routes, then set and seal it
ibcRouterV2 := ibcapi.NewRouter()
ibcRouterV2.AddRoute(ibctransfertypes.PortID, ibcv2TransferStack)
app.IBCKeeper.SetRouterV2(ibcRouterV2)
```
## Security Model
IBC Middleware completely wraps all communication between IBC core and the application that it is wired with, giving it complete control to modify any packets and acknowledgements the underlying application receives or sends. Thus, if a chain chooses to wrap an application with a given middleware, that middleware is **completely trusted** and part of the application's security model. **Do not use middlewares that are untrusted.**
## Design Principles
The middleware follows a decorator pattern that wraps an underlying application's connection to the IBC core handlers. Thus, when implementing a middleware for a specific purpose, it is recommended to be as **unintrusive** as possible in the middleware design while still accomplishing the intended behavior.
The least intrusive middleware is stateless. They simply read the ICS26 callback arguments before calling the underlying app's callback and error if the arguments are not acceptable (e.g. whitelisting packets). Stateful middleware that are used solely for erroring are also very simple to build, an example of this would be a rate-limiting middleware that prevents transfer outflows from getting too high within a certain time frame.
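The outflow check at the core of such a rate limiter can be sketched as follows; the `windowLimiter` type and its fields are illustrative assumptions, not an ibc-go API:

```go
package main

import "fmt"

// windowLimiter accumulates outflow within a single window; window rollover
// (resetting outflow when the window elapses) is omitted to keep the sketch minimal.
type windowLimiter struct {
	limit   uint64 // max outflow allowed per window
	outflow uint64 // outflow accumulated so far in the window
}

// allow records amount if it keeps the window within its limit,
// and reports whether the transfer should proceed.
func (l *windowLimiter) allow(amount uint64) bool {
	if l.outflow+amount > l.limit {
		return false
	}
	l.outflow += amount
	return true
}

func main() {
	l := &windowLimiter{limit: 100}
	fmt.Println(l.allow(60)) // true
	fmt.Println(l.allow(50)) // false: would exceed the window limit
	fmt.Println(l.allow(40)) // true: exactly reaches the limit
}
```

A stateful middleware would persist the window in its keeper and return an error from its packet callbacks when `allow` reports false.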
Middleware that directly interferes with the payload or acknowledgement before passing control to the underlying app is far more intrusive to the underlying app's processing. This makes such middleware more error-prone to implement, as incorrect handling can cause the underlying app to break or, worse, execute unexpected behavior. Moreover, such middleware typically needs to be built for a specific underlying app rather than being generic. An example of this is the packet-forwarding middleware, which modifies the payload and is specifically built for transfer.
Middleware that modifies the payload or acknowledgement such that it is no longer readable by the underlying application is the most complicated middleware. Since it is not readable by the underlying apps, if these middleware write additional state into payloads and acknowledgements that get committed to IBC core provable state, there MUST be an equivalent counterparty middleware that is able to parse and interpret this additional state while also converting the payload and acknowledgment back to a readable form for the underlying application on its side. Thus, such middleware requires deployment on both sides of an IBC connection or the packet processing will break. This is the hardest type of middleware to implement, integrate and deploy. Thus, it is not recommended unless absolutely necessary to fulfill the given use case.

---
title: Integrating IBC middleware into a chain
sidebar_label: Integrating IBC middleware into a chain
sidebar_position: 4
slug: /ibc/middleware/integration
---
# Integrating IBC middleware into a chain
Learn how to integrate IBC middleware(s) with a base application to your chain. The following document only applies for Cosmos SDK chains.
If the middleware is maintaining its own state and/or processing SDK messages, then it should create and register its SDK module with the module manager in `app.go`.
All middleware must be connected to the IBC router and wrap over an underlying base IBC application. An IBC application may be wrapped by many layers of middleware; only the top layer middleware should be hooked to the IBC router, with all underlying middlewares and the application being wrapped by it.
The order of middleware **matters**: function calls from IBC to the application travel from the top-level middleware down to the bottom middleware and then to the application, while function calls from the application to IBC go from the bottom middleware up to the top middleware and then to the core IBC handlers. Thus the same set of middleware put in different orders may produce different effects.
## Example integration
```go
// app.go pseudocode
// middleware 1 and middleware 3 are stateful middleware,
// perhaps implementing separate sdk.Msg and Handlers
mw1Keeper := mw1.NewKeeper(storeKey1, ..., ics4Wrapper: channelKeeper, ...) // in stack 1 & 3
// middleware 2 is stateless
mw3Keeper1 := mw3.NewKeeper(storeKey3,..., ics4Wrapper: mw1Keeper, ...) // in stack 1
mw3Keeper2 := mw3.NewKeeper(storeKey3,..., ics4Wrapper: channelKeeper, ...) // in stack 2
// Only create App Module **once** and register in app module
// if the module maintains independent state and/or processes sdk.Msgs
app.moduleManager = module.NewManager(
...
mw1.NewAppModule(mw1Keeper),
mw3.NewAppModule(mw3Keeper1),
mw3.NewAppModule(mw3Keeper2),
transfer.NewAppModule(transferKeeper),
custom.NewAppModule(customKeeper)
)
// NOTE: IBC Modules may be initialized any number of times provided they use a separate
// Keeper and underlying port.
customKeeper1 := custom.NewKeeper(..., KeeperCustom1, ...)
customKeeper2 := custom.NewKeeper(..., KeeperCustom2, ...)
// initialize base IBC applications
// if you want to create two different stacks with the same base application,
// they must be given different Keepers and assigned different ports.
transferIBCModule := transfer.NewIBCModule(transferKeeper)
customIBCModule1 := custom.NewIBCModule(customKeeper1, "portCustom1")
customIBCModule2 := custom.NewIBCModule(customKeeper2, "portCustom2")
// create IBC stacks by combining middleware with base application
// NOTE: since middleware2 is stateless it does not require a Keeper
// stack 1 contains mw1 -> mw3 -> transfer
stack1 := mw1.NewIBCMiddleware(mw3.NewIBCMiddleware(transferIBCModule, mw3Keeper1), mw1Keeper)
// stack 2 contains mw3 -> mw2 -> custom1
stack2 := mw3.NewIBCMiddleware(mw2.NewIBCMiddleware(customIBCModule1), mw3Keeper2)
// stack 3 contains mw2 -> mw1 -> custom2
stack3 := mw2.NewIBCMiddleware(mw1.NewIBCMiddleware(customIBCModule2, mw1Keeper))
// associate each stack with the moduleName provided by the underlying Keeper
ibcRouter := porttypes.NewRouter()
ibcRouter.AddRoute("transfer", stack1)
ibcRouter.AddRoute("custom1", stack2)
ibcRouter.AddRoute("custom2", stack3)
app.IBCKeeper.SetRouter(ibcRouter)
```

{
"label": "Middleware",
"position": 4,
"link": null
}

---
title: Upgrading IBC Chains Overview
sidebar_label: Overview
sidebar_position: 0
slug: /ibc/upgrades/intro
---
# Upgrading IBC Chains Overview
This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections.
IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC, however some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client.
1. The [quick-guide](./01-quick-guide.md) describes how IBC-connected chains can perform client-breaking upgrades and how relayers can securely upgrade counterparty clients using the SDK.
2. The [developer-guide](./02-developer-guide.md) is a guide for developers intending to develop IBC client implementations with upgrade functionality.

---
title: How to Upgrade IBC Chains and their Clients
sidebar_label: How to Upgrade IBC Chains and their Clients
sidebar_position: 1
slug: /ibc/upgrades/quick-guide
---
# How to Upgrade IBC Chains and their Clients
:::note Synopsis
Learn how to upgrade your chain and counterparty clients.
:::
The information in this doc for upgrading chains is relevant to SDK chains. However, the guide for counterparty clients is relevant to any Tendermint client that enables upgrades.
## IBC Client Breaking Upgrades
IBC-connected chains must perform an IBC upgrade if their upgrade will break counterparty IBC clients. The current IBC protocol supports upgrading Tendermint chains for a specific subset of IBC-client-breaking upgrades. Here is the exhaustive list of IBC client-breaking upgrades and whether the IBC protocol currently supports them.
IBC currently does **NOT** support unplanned upgrades. All of the following upgrades must be planned and committed to in advance by the upgrading chain, in order for counterparty clients to maintain their connections securely.
Note: Since upgrades are only implemented for Tendermint clients, this doc only discusses upgrades on Tendermint chains that would break counterparty IBC Tendermint Clients.
1. Changing the Chain-ID: **Supported**
2. Changing the UnbondingPeriod: **Partially Supported**, chains may increase the unbonding period with no issues. However, decreasing the unbonding period may irreversibly break some counterparty clients. Thus, it is **not recommended** that chains reduce the unbonding period.
3. Changing the height (resetting to 0): **Supported**, so long as chains remember to increment the revision number in their chain-id.
4. Changing the ProofSpecs: **Supported**, this should be changed if the proof structure needed to verify IBC proofs is changed across the upgrade. Ex: Switching from an IAVL store, to a SimpleTree Store
5. Changing the UpgradePath: **Supported**, this might involve changing the key under which upgraded clients and consensus states are stored in the upgrade store, or even migrating the upgrade store itself.
6. Migrating the IBC store: **Unsupported**, as the IBC store location is negotiated by the connection.
7. Upgrading to a backwards compatible version of IBC: **Supported**
8. Upgrading to a non-backwards compatible version of IBC: **Unsupported**, as IBC version is negotiated on connection handshake.
9. Changing the Tendermint LightClient algorithm: **Partially Supported**. Changes to the light client algorithm that do not change the ClientState or ConsensusState struct may be supported, provided that the counterparty is also upgraded to support the new light client algorithm. Changes that require updating the ClientState and ConsensusState structs themselves are theoretically possible by providing a path to translate an older ClientState struct into the new ClientState struct; however this is not currently implemented.
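The revision convention referenced in item 3 encodes the revision number as a numeric suffix of the chain ID (`{name}-{revision}`). The sketch below is illustrative only — ibc-go provides its own parsing helpers in the client types package — but it shows the increment a chain is expected to perform when resetting its height:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bumpRevision increments the trailing revision number in a chain ID of the
// form "{chainName}-{revision}", e.g. "gaia-4" -> "gaia-5". A chain ID with no
// numeric suffix is treated as revision 0 and becomes "{chainID}-1".
func bumpRevision(chainID string) string {
	i := strings.LastIndex(chainID, "-")
	if i < 0 {
		return chainID + "-1"
	}
	rev, err := strconv.ParseUint(chainID[i+1:], 10, 64)
	if err != nil {
		return chainID + "-1"
	}
	return fmt.Sprintf("%s-%d", chainID[:i], rev+1)
}

func main() {
	fmt.Println(bumpRevision("gaia-4"))    // gaia-5
	fmt.Println(bumpRevision("testchain")) // testchain-1
}
```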
## Step-by-Step Upgrade Process for SDK chains
If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must first ensure that the upgrade is supported by IBC (see the list above) and then execute the upgrade process described below to prevent counterparty clients from breaking.
1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) message which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients (chain-specified parameters) and zero out any client-customizable fields (such as `TrustingPeriod`).
2. Vote on and pass the governance proposal.
Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`.
Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client.
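The storage keys quoted above can be constructed as shown in this sketch. The canonical key constants live in the SDK's `x/upgrade` module; the literals here simply mirror the paths given in the text:

```go
package main

import "fmt"

// upgradedClientKey returns the path under which the upgrade module commits
// the upgraded client for a planned upgrade height.
func upgradedClientKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedClient", upgradeHeight)
}

// upgradedConsStateKey returns the path under which the upgrade module commits
// the initial consensus state for the post-upgrade chain.
func upgradedConsStateKey(upgradeHeight int64) string {
	return fmt.Sprintf("upgrade/UpgradedIBCState/%d/upgradedConsState", upgradeHeight)
}

func main() {
	fmt.Println(upgradedClientKey(100))    // upgrade/UpgradedIBCState/100/upgradedClient
	fmt.Println(upgradedConsStateKey(100)) // upgrade/UpgradedIBCState/100/upgradedConsState
}
```

Because the upgrade height is baked into the key, a counterparty client can later verify that the committed client and consensus state correspond to exactly the planned upgrade.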
## Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients
Once the upgrading chain has committed to upgrading, relayers must wait until the chain halts at the upgrade height before upgrading counterparty clients. This is because chains may reschedule or cancel upgrade plans before they occur, so relayers cannot be sure an upgrade will take place until the chain actually reaches the upgrade height and halts.
The upgrade process for relayers upgrading counterparty clients is as follows:
1. Wait for the upgrading chain to reach the upgrade height and halt.
2. Query a full node for the proofs of `UpgradedClient` and `UpgradedConsensusState` at the last height of the old chain.
3. Update the counterparty client to the last height of the old chain using the `UpdateClient` msg.
4. Submit an `UpgradeClient` msg to the counterparty chain with the `UpgradedClient`, `UpgradedConsensusState` and their respective proofs.
5. Submit an `UpdateClient` msg to the counterparty chain with a header from the new upgraded chain.
The Tendermint client on the counterparty chain will verify that the upgrading chain did indeed commit to the upgraded client and upgraded consensus state at the upgrade height (since the upgrade height is included in the key). If the proofs verify against the upgrade height, then the client will upgrade to the new client while retaining all of its client-customized fields. Thus, it will retain its old `TrustingPeriod`, `TrustLevel`, `MaxClockDrift`, etc., while adopting the new chain-specified fields such as `UnbondingPeriod`, `ChainId`, `UpgradePath`, etc. Note that this can lead to an invalid client, since the old client-chosen fields may no longer be valid given the new chain-chosen fields. Upgrading chains should try to avoid such situations by not altering parameters that can break old clients. For an example, see the `UnbondingPeriod` entry in the supported upgrades list above.
The upgraded consensus state will serve purely as a basis of trust for future `UpdateClientMsgs` and will not contain a consensus root to perform proof verification against. Thus, relayers must submit an `UpdateClientMsg` with a header from the new chain so that the connection can be used for proof verification again.

---
title: IBC Client Developer Guide to Upgrades
sidebar_label: IBC Client Developer Guide to Upgrades
sidebar_position: 2
slug: /ibc/upgrades/developer-guide
---
# IBC Client Developer Guide to Upgrades
:::note Synopsis
Learn how to implement upgrade functionality for your custom IBC client.
:::
Please see the section [Handling upgrades](../../03-light-clients/01-developer-guide/06-upgrades.md) from the light client developer guide for more information.

---
title: Genesis Restart Upgrades
sidebar_label: Genesis Restart Upgrades
sidebar_position: 3
slug: /ibc/upgrades/genesis-restart
---
# Genesis Restart Upgrades
:::note Synopsis
Learn how to upgrade your chain and counterparty clients using genesis restarts.
:::
**NOTE**: Regular genesis restarts are currently unsupported by relayers!
## IBC Client Breaking Upgrades
IBC client breaking upgrades are possible using genesis restarts.
It is highly recommended to use the in-place migrations instead of a genesis restart.
Genesis restarts should be used sparingly and as backup plans.
Genesis restarts still require the usage of an IBC upgrade proposal in order to correctly upgrade counterparty clients.
### Step-by-Step Upgrade Process for SDK Chains
If the IBC-connected chain is conducting an upgrade that will break counterparty clients, it must first ensure that the upgrade is supported by IBC using the [IBC Client Breaking Upgrade List](./01-quick-guide.md#ibc-client-breaking-upgrades) and then execute the upgrade process described below to prevent counterparty clients from breaking.
1. Create a governance proposal with the [`MsgIBCSoftwareUpgrade`](https://buf.build/cosmos/ibc/docs/main:ibc.core.client.v1#ibc.core.client.v1.MsgIBCSoftwareUpgrade) which contains an `UpgradePlan` and a new IBC `ClientState` in the `UpgradedClientState` field. Note that the `UpgradePlan` must specify an upgrade height **only** (no upgrade time), and the `ClientState` should only include the fields common to all valid clients and zero out any client-customizable fields (such as `TrustingPeriod`).
2. Vote on and pass the governance proposal.
3. Halt the node after successful upgrade.
4. Export the genesis file.
5. Swap to the new binary.
6. Run migrations on the genesis file.
7. Remove the upgrade plan set by the governance proposal from the genesis file. This may be done by migrations.
8. Change desired chain-specific fields (chain id, unbonding period, etc). This may be done by migrations.
9. Reset the node's data.
10. Start the chain.
Upon passing the governance proposal, the upgrade module will commit the `UpgradedClient` under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedClient`. On the block right before the upgrade height, the upgrade module will also commit an initial consensus state for the next chain under the key: `upgrade/UpgradedIBCState/{upgradeHeight}/upgradedConsState`.
Once the chain reaches the upgrade height and halts, a relayer can upgrade the counterparty clients to the last block of the old chain. They can then submit the proofs of the `UpgradedClient` and `UpgradedConsensusState` against this last block and upgrade the counterparty client.
### Step-by-Step Upgrade Process for Relayers Upgrading Counterparty Clients
These steps are identical to the regular [IBC client breaking upgrade process](./01-quick-guide.md#step-by-step-upgrade-process-for-relayers-upgrading-counterparty-clients).
## Non-IBC Client Breaking Upgrades
While ibc-go supports genesis restarts which do not break IBC clients, relayers do not support this upgrade path.
Here is a tracking issue on [Hermes](https://github.com/informalsystems/ibc-rs/issues/1152).
Please do not attempt a regular genesis restart unless you have a tool to update counterparty clients correctly.

@ -0,0 +1,5 @@
{
"label": "Upgrades",
"position": 5,
"link": { "type": "doc", "id": "intro" }
}
