Compare commits


16 Commits

Author SHA1 Message Date
Steve Myers
0ad65c7776 Merge bitcoindevkit/bdk#930: Add default std feature, prep for 0.28.0 release
cbcbdd120d Update CHANGELOG for 0.28.0 release (Steve Myers)
f507185729 Change workflows to run for release branches (Steve Myers)
573bf52578 Add default feature std (Steve Myers)

Pull request description:

  ### Description

  Add a new `std` feature that enables `bitcoin/std` and `miniscript/std`, and enable the `std` feature in the `default` feature set.

  ### Notes to the reviewers

  When using `bdk` in a wasm environment, `default` features should be disabled, and `bitcoin/no-std` and `miniscript/no-std` must be enabled. This is a breaking change for anyone using `bdk` with `--no-default-features`.
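
  For illustration, a downstream `Cargo.toml` targeting wasm might then look like the sketch below. This is an assumption about a hypothetical consumer crate, not part of this PR; the crate versions mirror this release's manifest.

  ```toml
  # Hypothetical downstream Cargo.toml for a wasm32-unknown-unknown build:
  # bdk's default features (which now include `std`) are disabled, and the
  # no-std features of the underlying crates are enabled via feature unification.
  [dependencies]
  bdk = { version = "0.28.0", default-features = false }
  bitcoin = { version = "0.29.2", default-features = false, features = ["no-std"] }
  miniscript = { version = "9.0", default-features = false, features = ["no-std"] }
  # Per the README's wasm note, getrandom needs its "js" feature in wasm environments.
  getrandom = { version = "0.2", features = ["js"] }
  ```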

  I also updated the workflows to run for `release/*` branches in addition to `master`.

  ### Changelog notice

  Changed

  - Added a new `std` feature as part of the `default` feature set; `std` must be enabled unless building for wasm. See the feature wiring sketched below.
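
  For reference, the new feature wiring in `bdk`'s `Cargo.toml`, reproduced from the manifest diff further down:

  ```toml
  [features]
  default = ["std", "key-value-db", "electrum"]
  # std is always required unless building for the wasm32-unknown-unknown target;
  # wasm builds must instead enable bitcoin/no-std and miniscript/no-std.
  std = ["bitcoin/std", "miniscript/std"]
  ```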

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

  #### New Features:

  * [ ] I've added tests for the new feature
  * [x] I've added docs for the new feature

ACKs for top commit:
  rajarshimaitra:
    ACK cbcbdd120d

Tree-SHA512: fc0e1495d2f4e65c015d771366ac8d9589736b4464cf496de5093cdef145abfed037c0701c3c210f70a7bde5128b681492202cecfca7cd94771d498e8fb8e565
2023-04-05 16:16:56 -05:00
Steve Myers
cbcbdd120d Update CHANGELOG for 0.28.0 release 2023-03-28 15:34:41 -05:00
Steve Myers
f507185729 Change workflows to run for release branches 2023-03-28 14:44:55 -05:00
Steve Myers
573bf52578 Add default feature std 2023-03-28 14:17:32 -05:00
Steve Myers
10608afb76 Merge bitcoindevkit/bdk#875: Bump bip39 crate to v2.0.0
a4647cfa98 Bump bip39 crate to v2.0.0 (Elias Rohrer)

Pull request description:


  ### Description

  The updated version of `rust-bip39` was just released. It also [bumped the pinned version](https://github.com/rust-bitcoin/rust-bip39/pull/41) of `unicode-normalization`, which previously had led to some incompatibilities.
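
  For reference, the bumped optional dependency as it appears in the manifest diff further down:

  ```toml
  bip39 = { version = "2.0.0", optional = true }
  ```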

  ### Notes to the reviewers


  ### Changelog notice


  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

ACKs for top commit:
  rajarshimaitra:
    tACK a4647cfa98
  notmandatory:
    tACK a4647cfa98

Tree-SHA512: c13f2c5081cd1cf0477ed41717b09b4854651ee43434760b12a460c19657ec6f437d6401b0b6d32c8315491301d7c9848f0575d427f027adc8390d649fe898a9
2023-03-25 21:05:59 -05:00
Steve Myers
de46a51208 Bump version to 0.27.2 2023-03-15 11:35:10 -05:00
Steve Myers
e8acafce8e Merge bitcoindevkit/bdk#884: Update esplora client dependency to version 0.4
bb2b2d6dd8 Update esplora-client dependency to version 0.4 (Steve Myers)

Pull request description:

  ### Description

  Update the `esplora-client` dependency to the new `0.4` release.

  ### Notes to the reviewers

  The `esplora-client` crate was updated in bitcoindevkit/rust-esplora-client#39 so that it no longer enables the default `rust-bitcoin` features.
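
  For reference, the resulting dependency declaration in `bdk`'s `Cargo.toml`, as it appears in the manifest diff further down:

  ```toml
  # Optional esplora-client dependency, now with rust-bitcoin's
  # default features disabled.
  esplora-client = { version = "0.4", default-features = false, optional = true }
  ```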

  ### Changelog notice

  - Update the `esplora-client` optional dependency to version `0.4`.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

ACKs for top commit:
  vladimirfomene:
    ACK bb2b2d6dd8
  rajarshimaitra:
    tACK bb2b2d6dd8

Tree-SHA512: 0f277106166ab4a8144a641ac9507f1a32d735f99c0b73a026ff6e0b82589110a71fe81f880caa01dab247572f3146b771cc28bdab5b22bcd87ff628e15b2e1a
2023-03-15 11:11:21 -05:00
Steve Myers
bb2b2d6dd8 Update esplora-client dependency to version 0.4 2023-03-14 23:14:28 -05:00
benthecarman
87c558c9cf Set default-features = false for rust-bitcoin 2023-03-07 20:45:53 -06:00
Elias Rohrer
a4647cfa98 Bump bip39 crate to v2.0.0 2023-03-07 19:56:34 +01:00
Steve Myers
b111f97c58 Set dev-dependency base64ct version to <1.6.0 2023-03-06 20:43:31 -06:00
Steve Myers
7a8e6609b1 Bump version to 0.27.1 2023-02-16 11:52:05 -06:00
Steve Myers
4ec6f3272e Update rusqlite from 0.27.0 to 0.28.0 2023-02-15 14:47:51 -06:00
Steve Myers
553df318ff Bump version to 0.27.0 2023-02-10 22:37:53 -06:00
Steve Myers
9e2e6411f2 Fix ci Dockerfile.ledger 2023-02-10 22:37:52 -06:00
Steve Myers
5d48e37926 Bump version to 0.27.0-rc.1 2023-02-03 16:06:57 -06:00
139 changed files with 19696 additions and 18967 deletions

View File

@@ -2,6 +2,9 @@ name: Audit
on:
push:
branches:
- 'master'
- 'release/*'
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'

View File

@@ -1,4 +1,12 @@
on: [push, pull_request]
on:
push:
branches:
- 'master'
- 'release/*'
pull_request:
branches:
- 'master'
- 'release/*'
name: Code Coverage
@@ -9,42 +17,47 @@ jobs:
env:
RUSTFLAGS: "-Cinstrument-coverage"
RUSTDOCFLAGS: "-Cinstrument-coverage"
LLVM_PROFILE_FILE: "./target/coverage/%p-%m.profraw"
LLVM_PROFILE_FILE: "report-%p-%m.profraw"
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install lcov tools
run: sudo apt-get install lcov -y
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
- name: Install rustup
run: curl https://sh.rustup.rs -sSf | sh -s -- -y
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Add llvm tools
run: rustup component add llvm-tools-preview
- name: Update toolchain
run: rustup update
- name: Cache cargo
uses: actions/cache@v3
with:
toolchain: "1.65.0"
override: true
profile: minimal
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install grcov
run: if [[ ! -e ~/.cargo/bin/grcov ]]; then cargo install grcov; fi
- name: Build simulator image
run: docker build -t hwi/ledger_emulator ./ci -f ci/Dockerfile.ledger
- name: Run simulator image
run: docker run --name simulator --network=host hwi/ledger_emulator &
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Install python dependencies
run: pip install hwi==2.1.1 protobuf==3.20.1
- name: Test
run: cargo test --all-features
- name: Make coverage directory
run: mkdir coverage
# WARNING: this is not testing the following features: test-esplora, test-hardware-signer, async-interface
# This is because some of our features are mutually exclusive, and generating various reports and
# merging them doesn't seem to be working very well.
# For more info, see:
# - https://github.com/bitcoindevkit/bdk/issues/696
# - https://github.com/bitcoindevkit/bdk/pull/748#issuecomment-1242721040
run: cargo test --features all-keys,compact_filters,compiler,key-value-db,sqlite,sqlite-bundled,test-electrum,test-rpc,verify
- name: Run grcov
run: grcov . --binary-path ./target/debug/ -s . -t lcov --branch --ignore-not-existing --keep-only '**/crates/**' --ignore '**/tests/**' --ignore '**/examples/**' -o ./coverage/lcov.info
run: mkdir coverage; grcov . --binary-path ./target/debug/ -s . -t lcov --branch --ignore-not-existing --ignore '/*' -o ./coverage/lcov.info
- name: Generate HTML coverage report
run: genhtml -o coverage-report.html --ignore-errors source ./coverage/lcov.info
run: genhtml -o coverage-report.html ./coverage/lcov.info
- name: Coveralls upload
uses: coverallsapp/github-action@master
with:

View File

@@ -1,4 +1,12 @@
on: [push, pull_request]
on:
push:
branches:
- 'master'
- 'release/*'
pull_request:
branches:
- 'master'
- 'release/*'
name: CI
@@ -10,55 +18,118 @@ jobs:
strategy:
matrix:
rust:
- version: stable
- version: 1.65.0 # STABLE
clippy: true
- version: 1.57.0 # MSRV
features:
- --no-default-features
- --all-features
- default
- minimal
- all-keys
- minimal,use-esplora-blocking
- key-value-db
- electrum
- compact_filters
- use-esplora-blocking,key-value-db,electrum
- compiler
- rpc
- verify
- async-interface
- use-esplora-async
- sqlite
- sqlite-bundled
steps:
- name: checkout
uses: actions/checkout@v2
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
- name: Generate cache key
run: echo "${{ matrix.rust.version }} ${{ matrix.features }}" | tee .cache_key
- name: cache
uses: actions/cache@v2
with:
toolchain: ${{ matrix.rust.version }}
override: true
profile: minimal
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('.cache_key') }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default ${{ matrix.rust.version }}
- name: Set profile
run: rustup set profile minimal
- name: Add clippy
if: ${{ matrix.rust.clippy }}
run: rustup component add clippy
- name: Update toolchain
run: rustup update
- name: Build
run: cargo build ${{ matrix.features }}
run: cargo build --features ${{ matrix.features }} --no-default-features
- name: Clippy
if: ${{ matrix.rust.clippy }}
run: cargo clippy --all-targets --features ${{ matrix.features }} --no-default-features -- -D warnings
- name: Test
run: cargo test ${{ matrix.features }}
run: cargo test --features ${{ matrix.features }} --no-default-features
check-no-std:
name: Check no_std
test-readme-examples:
name: Test README.md examples
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v2
- name: cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-test-md-docs-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Test
run: cargo test --features test-md-docs --no-default-features -- doctest::ReadmeDoctests
test-blockchains:
name: Blockchain ${{ matrix.blockchain.features }}
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
blockchain:
- name: electrum
testprefix: blockchain::electrum::test
features: test-electrum,verify
- name: rpc
testprefix: blockchain::rpc::test
features: test-rpc
- name: rpc-legacy
testprefix: blockchain::rpc::test
features: test-rpc-legacy
- name: esplora
testprefix: esplora
features: test-esplora,use-esplora-async,verify
- name: esplora
testprefix: esplora
features: test-esplora,use-esplora-blocking,verify
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install Rust toolchain
- name: Cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Setup rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
profile: minimal
# target: "thumbv6m-none-eabi"
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
- name: Check bdk_chain
working-directory: ./crates/chain
# TODO "--target thumbv6m-none-eabi" should work but currently does not
run: cargo check --no-default-features --features bitcoin/no-std,miniscript/no-std,hashbrown
- name: Check bdk
working-directory: ./crates/bdk
# TODO "--target thumbv6m-none-eabi" should work but currently does not
run: cargo check --no-default-features --features bitcoin/no-std,miniscript/no-std,bdk_chain/hashbrown
- name: Check esplora
working-directory: ./crates/esplora
# TODO "--target thumbv6m-none-eabi" should work but currently does not
run: cargo check --no-default-features --features bitcoin/no-std,miniscript/no-std,bdk_chain/hashbrown
- name: Test
run: cargo test --no-default-features --features ${{ matrix.blockchain.features }} ${{ matrix.blockchain.testprefix }}::bdk_blockchain_tests
check-wasm:
name: Check WASM
@@ -69,26 +140,29 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
# Install a recent version of clang that supports wasm32
- run: wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add - || exit 1
- run: sudo apt-add-repository "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-10 main" || exit 1
- run: sudo apt-get update || exit 1
- run: sudo apt-get install -y libclang-common-10-dev clang-10 libc6-dev-i386 || exit 1
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
profile: minimal
target: "wasm32-unknown-unknown"
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
- name: Check bdk
working-directory: ./crates/bdk
run: cargo check --target wasm32-unknown-unknown --no-default-features --features bitcoin/no-std,miniscript/no-std,bdk_chain/hashbrown,dev-getrandom-wasm
- name: Check esplora
working-directory: ./crates/esplora
run: cargo check --target wasm32-unknown-unknown --no-default-features --features bitcoin/no-std,miniscript/no-std,bdk_chain/hashbrown,async
- name: Set default toolchain
run: rustup default 1.65.0 # STABLE
- name: Set profile
run: rustup set profile minimal
- name: Add target wasm32
run: rustup target add wasm32-unknown-unknown
- name: Update toolchain
run: rustup update
- name: Check
run: cargo check --target wasm32-unknown-unknown --features async-interface,use-esplora-async,dev-getrandom-wasm --no-default-features
fmt:
name: Rust fmt
@@ -96,30 +170,42 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
profile: minimal
components: rustfmt
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Add rustfmt
run: rustup component add rustfmt
- name: Update toolchain
run: rustup update
- name: Check fmt
run: cargo fmt --all -- --config format_code_in_doc_comments=true --check
clippy_check:
runs-on: ubuntu-latest
test_hardware_wallet:
runs-on: ubuntu-20.04
strategy:
matrix:
rust:
- version: 1.65.0 # STABLE
- version: 1.57.0 # MSRV
steps:
- uses: actions/checkout@v1
- uses: actions-rs/toolchain@v1
with:
# we pin clippy instead of using "stable" so that our CI doesn't break
# at each new cargo release
toolchain: "1.67.0"
components: clippy
override: true
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
- uses: actions-rs/clippy-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
args: --all-features --all-targets -- -D warnings
- name: Checkout
uses: actions/checkout@v3
- name: Build simulator image
run: docker build -t hwi/ledger_emulator ./ci -f ci/Dockerfile.ledger
- name: Run simulator image
run: docker run --name simulator --network=host hwi/ledger_emulator &
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Install python dependencies
run: pip install hwi==2.1.1 protobuf==3.20.1
- name: Set default toolchain
run: rustup default ${{ matrix.rust.version }}
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Test
run: cargo test --features test-hardware-signer

View File

@@ -1,6 +1,14 @@
name: Publish Nightly Docs
on: [push, pull_request]
on:
push:
branches:
- 'master'
- 'release/*'
pull_request:
branches:
- 'master'
- 'release/*'
jobs:
build_docs:
@@ -9,18 +17,22 @@ jobs:
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Setup cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: nightly-docs-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default nightly-2022-12-14
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Rust Cache
uses: Swatinem/rust-cache@v2.2.1
- name: Build docs
run: cargo doc --no-deps
env:
RUSTDOCFLAGS: '--cfg docsrs -Dwarnings'
run: cargo rustdoc --verbose --features=compiler,electrum,esplora,use-esplora-blocking,compact_filters,rpc,key-value-db,sqlite,all-keys,verify,hardware-signer -- --cfg docsrs -Dwarnings
- name: Upload artifact
uses: actions/upload-artifact@v2
with:

View File

@@ -9,6 +9,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [v0.28.0]
### Summary
Disable default-features for rust-bitcoin and rust-miniscript dependencies, and for rust-esplora-client optional dependency. New default `std` feature must be enabled unless building for wasm.
### Changed
- Bump bip39 crate to v2.0.0 #875
- Set default-features = false for rust-bitcoin and rust-miniscript #882
- Update esplora client dependency to version 0.4 #884
- Added new `std` feature as part of default features #930
## [v0.27.1]
### Summary
@@ -642,4 +655,5 @@ final transaction is created by calling `finish` on the builder.
[v0.26.0]: https://github.com/bitcoindevkit/bdk/compare/v0.25.0...v0.26.0
[v0.27.0]: https://github.com/bitcoindevkit/bdk/compare/v0.26.0...v0.27.0
[v0.27.1]: https://github.com/bitcoindevkit/bdk/compare/v0.27.0...v0.27.1
[Unreleased]: https://github.com/bitcoindevkit/bdk/compare/v0.27.1...HEAD
[v0.28.0]: https://github.com/bitcoindevkit/bdk/compare/v0.27.1...v0.28.0
[Unreleased]: https://github.com/bitcoindevkit/bdk/compare/v0.28.0...HEAD

View File

@@ -28,7 +28,7 @@ The codebase is maintained using the "contributor workflow" where everyone
without exception contributes patch proposals using "pull requests". This
facilitates social contribution, easy testing and peer review.
To contribute a patch, the workflow is as follows:
To contribute a patch, the worflow is a as follows:
1. Fork Repository
2. Create topic branch

View File

@@ -1,18 +1,176 @@
[workspace]
members = [
"crates/bdk",
"crates/chain",
"crates/file_store",
"crates/electrum",
"crates/esplora",
"example-crates/example_cli",
"example-crates/example_electrum",
"example-crates/wallet_electrum",
"example-crates/wallet_esplora",
"example-crates/wallet_esplora_async",
"nursery/tmp_plan",
"nursery/coin_select"
]
[package]
name = "bdk"
version = "0.27.2"
edition = "2018"
authors = ["Alekos Filini <alekos.filini@gmail.com>", "Riccardo Casatta <riccardo@casatta.it>"]
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk"
description = "A modern, lightweight, descriptor-based wallet library"
keywords = ["bitcoin", "wallet", "descriptor", "psbt"]
readme = "README.md"
license = "MIT OR Apache-2.0"
[workspace.package]
authors = ["Bitcoin Dev Kit Developers"]
[dependencies]
bdk-macros = "^0.6"
log = "^0.4"
miniscript = { version = "9.0", default-features = false, features = ["serde"] }
bitcoin = { version = "0.29.2", default-features = false, features = ["serde", "base64", "rand"] }
serde = { version = "^1.0", features = ["derive"] }
serde_json = { version = "^1.0" }
rand = "^0.8"
# Optional dependencies
sled = { version = "0.34", optional = true }
electrum-client = { version = "0.12", optional = true }
esplora-client = { version = "0.4", default-features = false, optional = true }
rusqlite = { version = "0.28.0", optional = true }
ahash = { version = "0.7.6", optional = true }
futures = { version = "0.3", optional = true }
async-trait = { version = "0.1", optional = true }
rocksdb = { version = "0.14", default-features = false, features = ["snappy"], optional = true }
cc = { version = ">=1.0.64", optional = true }
socks = { version = "0.3", optional = true }
hwi = { version = "0.5", optional = true, features = ["use-miniscript"] }
bip39 = { version = "2.0.0", optional = true }
bitcoinconsensus = { version = "0.19.0-3", optional = true }
# Needed by bdk_blockchain_tests macro and the `rpc` feature
bitcoincore-rpc = { version = "0.16", optional = true }
# Platform-specific dependencies
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
tokio = { version = "1", features = ["rt", "macros"] }
[target.'cfg(target_arch = "wasm32")'.dependencies]
getrandom = "0.2"
async-trait = "0.1"
js-sys = "0.3"
[features]
minimal = []
compiler = ["miniscript/compiler"]
verify = ["bitcoinconsensus"]
default = ["std", "key-value-db", "electrum"]
# std feature is always required unless building for wasm32-unknown-unknown target
# if building for wasm user must add dependencies bitcoin/no-std,miniscript/no-std
std = ["bitcoin/std", "miniscript/std"]
sqlite = ["rusqlite", "ahash"]
sqlite-bundled = ["sqlite", "rusqlite/bundled"]
compact_filters = ["rocksdb", "socks", "cc"]
key-value-db = ["sled"]
all-keys = ["keys-bip39"]
keys-bip39 = ["bip39"]
rpc = ["bitcoincore-rpc"]
hardware-signer = ["hwi"]
# We currently provide multiple implementations of `Blockchain`, all are
# blocking except for the `EsploraBlockchain` which can be either async or
# blocking, depending on the HTTP client in use.
#
# - Users wanting asynchronous HTTP calls should enable `async-interface` to get
# access to the asynchronous method implementations. Then, if Esplora is wanted,
# enable the `use-esplora-async` feature.
# - Users wanting blocking HTTP calls can use any of the other blockchain
# implementations (`compact_filters`, `electrum`, or `esplora`). Users wanting to
# use Esplora should enable the `use-esplora-blocking` feature.
#
# WARNING: Please take care with the features below, various combinations will
# fail to build. We cannot currently build `bdk` with `--all-features`.
async-interface = ["async-trait"]
electrum = ["electrum-client"]
# MUST ALSO USE `--no-default-features`.
use-esplora-async = ["esplora", "esplora-client/async", "futures"]
use-esplora-blocking = ["esplora", "esplora-client/blocking"]
# Deprecated aliases
use-esplora-reqwest = ["use-esplora-async"]
use-esplora-ureq = ["use-esplora-blocking"]
# Typical configurations will not need to use `esplora` feature directly.
esplora = []
# Use below feature with `use-esplora-async` to enable reqwest default TLS support
reqwest-default-tls = ["esplora-client/async-https"]
# Debug/Test features
test-blockchains = ["bitcoincore-rpc", "electrum-client"]
test-electrum = ["electrum", "electrsd/electrs_0_8_10", "electrsd/bitcoind_22_0", "test-blockchains"]
test-rpc = ["rpc", "electrsd/electrs_0_8_10", "electrsd/bitcoind_22_0", "test-blockchains"]
test-rpc-legacy = ["rpc", "electrsd/electrs_0_8_10", "electrsd/bitcoind_0_20_0", "test-blockchains"]
test-esplora = ["electrsd/legacy", "electrsd/esplora_a33e97e1", "electrsd/bitcoind_22_0", "test-blockchains"]
test-md-docs = ["electrum"]
test-hardware-signer = ["hardware-signer"]
# This feature is used to run `cargo check` in our CI targeting wasm. It's not recommended
# for libraries to explicitly include the "getrandom/js" feature, so we only do it when
# necessary for running our CI. See: https://docs.rs/getrandom/0.2.8/getrandom/#webassembly-support
dev-getrandom-wasm = ["getrandom/js"]
[dev-dependencies]
miniscript = { version = "9.0", features = ["std"] }
bitcoin = { version = "0.29.2", features = ["std"] }
lazy_static = "1.4"
env_logger = "0.7"
electrsd = "0.22"
# Move back to importing from rust-bitcoin once https://github.com/rust-bitcoin/rust-bitcoin/pull/1342 is released
base64 = "^0.13"
assert_matches = "1.5.0"
# zip versions after 0.6.3 don't work with our MSRV 1.57.0
zip = "=0.6.3"
# base64ct versions at 1.6.0 and higher have MSRV 1.60.0
base64ct = "<1.6.0"
[[example]]
name = "compact_filters_balance"
required-features = ["compact_filters"]
[[example]]
name = "miniscriptc"
path = "examples/compiler.rs"
required-features = ["compiler"]
[[example]]
name = "policy"
path = "examples/policy.rs"
[[example]]
name = "rpcwallet"
path = "examples/rpcwallet.rs"
required-features = ["keys-bip39", "key-value-db", "rpc", "electrsd/bitcoind_22_0"]
[[example]]
name = "psbt_signer"
path = "examples/psbt_signer.rs"
required-features = ["electrum"]
[[example]]
name = "hardware_signer"
path = "examples/hardware_signer.rs"
required-features = ["electrum", "hardware-signer"]
[[example]]
name = "electrum_backend"
path = "examples/electrum_backend.rs"
required-features = ["electrum"]
[[example]]
name = "esplora_backend_synchronous"
path = "examples/esplora_backend_synchronous.rs"
required-features = ["use-esplora-ureq"]
[[example]]
name = "esplora_backend_asynchronous"
path = "examples/esplora_backend_asynchronous.rs"
required-features = ["use-esplora-reqwest", "reqwest-default-tls", "async-interface"]
[[example]]
name = "mnemonic_to_descriptors"
path = "examples/mnemonic_to_descriptors.rs"
required-features = ["all-keys"]
[workspace]
members = ["macros"]
[package.metadata.docs.rs]
features = ["compiler", "electrum", "esplora", "use-esplora-blocking", "compact_filters", "rpc", "key-value-db", "sqlite", "all-keys", "verify", "hardware-signer"]
# defines the configuration attribute `docsrs`
rustdoc-args = ["--cfg", "docsrs"]

README.md
View File

@@ -1,5 +1,3 @@
# The Bitcoin Dev Kit
<div align="center">
<h1>BDK</h1>
@@ -28,27 +26,178 @@
## About
The `bdk` libraries aims to provide well engineered and reviewed components for Bitcoin based applications.
It is built upon the excellent [`rust-bitcoin`] and [`rust-miniscript`] crates.
The `bdk` library aims to be the core building block for Bitcoin wallets of any kind.
> ⚠ The Bitcoin Dev Kit developers are in the process of releasing a `v1.0` which is a fundamental re-write of how the library works.
> See for some background on this project: https://bitcoindevkit.org/blog/road-to-bdk-1/ (ignore the timeline 😁)
> For a release timeline see the [`bdk_core_staging`] repo where a lot of the component work is being done. The plan is that everything in the `bdk_core_staging` repo will be moved into the `crates` directory here.
* It uses [Miniscript](https://github.com/rust-bitcoin/rust-miniscript) to support descriptors with generalized conditions. This exact same library can be used to build
single-sig wallets, multisigs, timelocked contracts and more.
* It supports multiple blockchain backends and databases, allowing developers to choose exactly what's right for their projects.
* It's built to be cross-platform: the core logic works on desktop, mobile, and even WebAssembly.
* It's very easy to extend: developers can implement customized logic for blockchain backends, databases, signers, coin selection, and more, without having to fork and modify this library.
## Architecture
## Examples
The project is split up into several crates in the `/crates` directory:
### Sync the balance of a descriptor
- [`bdk`](./crates/bdk): Contains the central high level `Wallet` type that is built from the low-level mechanisms provided by the other components
- [`chain`](./crates/chain): Tools for storing and indexing chain data
- [`file_store`](./crates/file_store): A (experimental) persistence backend for storing chain data in a single file.
- [`esplora`](./crates/esplora): Extends the [`esplora-client`] crate with methods to fetch chain data from an esplora HTTP server in the form that [`bdk_chain`] and `Wallet` can consume.
- [`electrum`](./crates/electrum): Extends the [`electrum-client`] crate with methods to fetch chain data from an electrum server in the form that [`bdk_chain`] and `Wallet` can consume.
```rust,no_run
use bdk::Wallet;
use bdk::database::MemoryDatabase;
use bdk::blockchain::ElectrumBlockchain;
use bdk::SyncOptions;
use bdk::electrum_client::Client;
use bdk::bitcoin::Network;
Fully working examples of how to use these components are in `/example-crates`
fn main() -> Result<(), bdk::Error> {
let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
Network::Testnet,
MemoryDatabase::default(),
)?;
[`bdk_core_staging`]: https://github.com/LLFourn/bdk_core_staging
[`rust-miniscript`]: https://github.com/rust-bitcoin/rust-miniscript
[`rust-bitcoin`]: https://github.com/rust-bitcoin/rust-bitcoin
[`esplora-client`]: https://docs.rs/esplora-client/0.3.0/esplora_client/
[`electrum-client`]: https://docs.rs/electrum-client/0.13.0/electrum_client/
wallet.sync(&blockchain, SyncOptions::default())?;
println!("Descriptor balance: {} SAT", wallet.get_balance()?);
Ok(())
}
```
### Generate a few addresses
```rust
use bdk::{Wallet, database::MemoryDatabase};
use bdk::wallet::AddressIndex::New;
use bdk::bitcoin::Network;
fn main() -> Result<(), bdk::Error> {
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
Network::Testnet,
MemoryDatabase::default(),
)?;
println!("Address #0: {}", wallet.get_address(New)?);
println!("Address #1: {}", wallet.get_address(New)?);
println!("Address #2: {}", wallet.get_address(New)?);
Ok(())
}
```
### Create a transaction
```rust,no_run
use bdk::{FeeRate, Wallet, SyncOptions};
use bdk::database::MemoryDatabase;
use bdk::blockchain::ElectrumBlockchain;
use bdk::electrum_client::Client;
use bdk::wallet::AddressIndex::New;
use base64;
use bdk::bitcoin::consensus::serialize;
use bdk::bitcoin::Network;
fn main() -> Result<(), bdk::Error> {
let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
Network::Testnet,
MemoryDatabase::default(),
)?;
wallet.sync(&blockchain, SyncOptions::default())?;
let send_to = wallet.get_address(New)?;
let (psbt, details) = {
let mut builder = wallet.build_tx();
builder
.add_recipient(send_to.script_pubkey(), 50_000)
.enable_rbf()
.do_not_spend_change()
.fee_rate(FeeRate::from_sat_per_vb(5.0));
builder.finish()?
};
println!("Transaction details: {:#?}", details);
println!("Unsigned PSBT: {}", base64::encode(&serialize(&psbt)));
Ok(())
}
```
### Sign a transaction
```rust,no_run
use bdk::{Wallet, SignOptions, database::MemoryDatabase};
use base64;
use bdk::bitcoin::consensus::deserialize;
use bdk::bitcoin::Network;
fn main() -> Result<(), bdk::Error> {
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"),
Network::Testnet,
MemoryDatabase::default(),
)?;
let psbt = "...";
let mut psbt = deserialize(&base64::decode(psbt).unwrap())?;
let _finalized = wallet.sign(&mut psbt, SignOptions::default())?;
Ok(())
}
```
## Testing
### Unit testing
```bash
cargo test
```
### Integration testing
Integration testing requires testing features, for example:
```bash
cargo test --features test-electrum
```
The other options are `test-esplora`, `test-rpc` or `test-rpc-legacy` which runs against an older version of Bitcoin Core.
Note that `electrs` and `bitcoind` binaries are automatically downloaded (on mac and linux); to use binaries you have already installed, you must use `--no-default-features` and provide `BITCOIND_EXE` and `ELECTRS_EXE` as environment variables.
## Running under WASM
If you want to run this library under WASM you will probably have to add the following lines to your `Cargo.toml`:
```toml
[dependencies]
getrandom = { version = "0.2", features = ["js"] }
```
This enables the `rand` crate to work in environments where JavaScript is available. See [this link](https://docs.rs/getrandom/0.2.8/getrandom/#webassembly-support) to learn more.
## License
Licensed under either of
* Apache License, Version 2.0
([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license
([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.

View File

@@ -1 +0,0 @@
msrv="1.57.0"

View File

@@ -1,65 +0,0 @@
[package]
name = "bdk"
homepage = "https://bitcoindevkit.org"
version = "1.0.0-alpha.1"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk"
description = "A modern, lightweight, descriptor-based wallet library"
keywords = ["bitcoin", "wallet", "descriptor", "psbt"]
readme = "README.md"
license = "MIT OR Apache-2.0"
authors = ["Bitcoin Dev Kit Developers"]
edition = "2021"
rust-version = "1.57"
[dependencies]
log = "=0.4.18"
rand = "^0.8"
miniscript = { version = "9", features = ["serde"], default-features = false }
bitcoin = { version = "0.29", features = ["serde", "base64", "rand"], default-features = false }
serde = { version = "^1.0", features = ["derive"] }
serde_json = { version = "^1.0" }
bdk_chain = { path = "../chain", version = "0.5.0", features = ["miniscript", "serde"], default-features = false }
# Optional dependencies
hwi = { version = "0.5", optional = true, features = [ "use-miniscript"] }
bip39 = { version = "1.0.1", optional = true }
[target.'cfg(target_arch = "wasm32")'.dependencies]
getrandom = "0.2"
js-sys = "0.3"
[features]
default = ["std"]
std = ["bitcoin/std", "miniscript/std", "bdk_chain/std"]
compiler = ["miniscript/compiler"]
all-keys = ["keys-bip39"]
keys-bip39 = ["bip39"]
hardware-signer = ["hwi"]
test-hardware-signer = ["hardware-signer"]
# This feature is used to run `cargo check` in our CI targeting wasm. It's not recommended
# for libraries to explicitly include the "getrandom/js" feature, so we only do it when
# necessary for running our CI. See: https://docs.rs/getrandom/0.2.8/getrandom/#webassembly-support
dev-getrandom-wasm = ["getrandom/js"]
[dev-dependencies]
lazy_static = "1.4"
env_logger = "0.7"
# Move back to importing from rust-bitcoin once https://github.com/rust-bitcoin/rust-bitcoin/pull/1342 is released
base64 = "^0.13"
assert_matches = "1.5.0"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[[example]]
name = "mnemonic_to_descriptors"
path = "examples/mnemonic_to_descriptors.rs"
required-features = ["all-keys"]
[[example]]
name = "miniscriptc"
path = "examples/compiler.rs"
required-features = ["compiler"]

View File

@@ -1,227 +0,0 @@
<div align="center">
<h1>BDK</h1>
<img src="https://raw.githubusercontent.com/bitcoindevkit/bdk/master/static/bdk.png" width="220" />
<p>
<strong>A modern, lightweight, descriptor-based wallet library written in Rust!</strong>
</p>
<p>
<a href="https://crates.io/crates/bdk"><img alt="Crate Info" src="https://img.shields.io/crates/v/bdk.svg"/></a>
<a href="https://github.com/bitcoindevkit/bdk/blob/master/LICENSE"><img alt="MIT or Apache-2.0 Licensed" src="https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg"/></a>
<a href="https://github.com/bitcoindevkit/bdk/actions?query=workflow%3ACI"><img alt="CI Status" src="https://github.com/bitcoindevkit/bdk/workflows/CI/badge.svg"></a>
<a href="https://coveralls.io/github/bitcoindevkit/bdk?branch=master"><img src="https://coveralls.io/repos/github/bitcoindevkit/bdk/badge.svg?branch=master"/></a>
<a href="https://docs.rs/bdk"><img alt="API Docs" src="https://img.shields.io/badge/docs.rs-bdk-green"/></a>
<a href="https://blog.rust-lang.org/2021/12/02/Rust-1.57.0.html"><img alt="Rustc Version 1.57.0+" src="https://img.shields.io/badge/rustc-1.57.0%2B-lightgrey.svg"/></a>
<a href="https://discord.gg/d7NkDKm"><img alt="Chat on Discord" src="https://img.shields.io/discord/753336465005608961?logo=discord"></a>
</p>
<h4>
<a href="https://bitcoindevkit.org">Project Homepage</a>
<span> | </span>
<a href="https://docs.rs/bdk">Documentation</a>
</h4>
</div>
## `bdk`
The `bdk` crate provides the [`Wallet`](`crate::Wallet`) type which is a simple, high-level
interface built from the low-level components of [`bdk_chain`]. `Wallet` is a good starting point
for many simple applications as well as a good demonstration of how to use the other mechanisms to
construct a wallet. It has two keychains (external and internal) which are defined by
[miniscript descriptors][`rust-miniscript`] and uses them to generate addresses. When you give it
chain data it also uses the descriptors to find transaction outputs owned by them. From there, you
can create and sign transactions.
For more information, see the [`Wallet`'s documentation](https://docs.rs/bdk/latest/bdk/wallet/struct.Wallet.html).
### Blockchain data
In order to get blockchain data for `Wallet` to consume, you have to put it into a particular form.
Right now this is [`KeychainScan`] which is defined in [`bdk_chain`].
This can be created manually or from blockchain-scanning crates.
**Blockchain Data Sources**
* [`bdk_esplora`]: Grabs blockchain data from Esplora for updating BDK structures.
* [`bdk_electrum`]: Grabs blockchain data from Electrum for updating BDK structures.
**Examples**
* [`example-crates/wallet_esplora`](https://github.com/bitcoindevkit/bdk/tree/master/example-crates/wallet_esplora)
* [`example-crates/wallet_electrum`](https://github.com/bitcoindevkit/bdk/tree/master/example-crates/wallet_electrum)
### Persistence
To persist the `Wallet` on disk, `Wallet` needs to be constructed with a
[`Persist`](https://docs.rs/bdk_chain/latest/bdk_chain/keychain/struct.KeychainPersist.html) implementation.
**Implementations**
* [`bdk_file_store`]: a simple flat-file implementation of `Persist`.
**Example**
```rust
use bdk::{bitcoin::Network, wallet::{AddressIndex, Wallet}};
fn main() {
// a type that implements `Persist`
let db = ();
let descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/0/*)";
let mut wallet = Wallet::new(descriptor, None, db, Network::Testnet).expect("should create");
// get a new address (this increments revealed derivation index)
println!("revealed address: {}", wallet.get_address(AddressIndex::New));
println!("staged changes: {:?}", wallet.staged());
// persist changes
wallet.commit().expect("must save");
}
```
<!-- ### Sync the balance of a descriptor -->
<!-- ```rust,no_run -->
<!-- use bdk::Wallet; -->
<!-- use bdk::blockchain::ElectrumBlockchain; -->
<!-- use bdk::SyncOptions; -->
<!-- use bdk::electrum_client::Client; -->
<!-- use bdk::bitcoin::Network; -->
<!-- fn main() -> Result<(), bdk::Error> { -->
<!-- let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?); -->
<!-- let wallet = Wallet::new( -->
<!-- "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)", -->
<!-- Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"), -->
<!-- Network::Testnet, -->
<!-- )?; -->
<!-- wallet.sync(&blockchain, SyncOptions::default())?; -->
<!-- println!("Descriptor balance: {} SAT", wallet.get_balance()?); -->
<!-- Ok(()) -->
<!-- } -->
<!-- ``` -->
<!-- ### Generate a few addresses -->
<!-- ```rust -->
<!-- use bdk::Wallet; -->
<!-- use bdk::wallet::AddressIndex::New; -->
<!-- use bdk::bitcoin::Network; -->
<!-- fn main() -> Result<(), bdk::Error> { -->
<!-- let wallet = Wallet::new_no_persist( -->
<!-- "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)", -->
<!-- Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"), -->
<!-- Network::Testnet, -->
<!-- )?; -->
<!-- println!("Address #0: {}", wallet.get_address(New)); -->
<!-- println!("Address #1: {}", wallet.get_address(New)); -->
<!-- println!("Address #2: {}", wallet.get_address(New)); -->
<!-- Ok(()) -->
<!-- } -->
<!-- ``` -->
<!-- ### Create a transaction -->
<!-- ```rust,no_run -->
<!-- use bdk::{FeeRate, Wallet, SyncOptions}; -->
<!-- use bdk::blockchain::ElectrumBlockchain; -->
<!-- use bdk::electrum_client::Client; -->
<!-- use bdk::wallet::AddressIndex::New; -->
<!-- use base64; -->
<!-- use bdk::bitcoin::consensus::serialize; -->
<!-- use bdk::bitcoin::Network; -->
<!-- fn main() -> Result<(), bdk::Error> { -->
<!-- let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?); -->
<!-- let wallet = Wallet::new_no_persist( -->
<!-- "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)", -->
<!-- Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"), -->
<!-- Network::Testnet, -->
<!-- )?; -->
<!-- wallet.sync(&blockchain, SyncOptions::default())?; -->
<!-- let send_to = wallet.get_address(New); -->
<!-- let (psbt, details) = { -->
<!-- let mut builder = wallet.build_tx(); -->
<!-- builder -->
<!-- .add_recipient(send_to.script_pubkey(), 50_000) -->
<!-- .enable_rbf() -->
<!-- .do_not_spend_change() -->
<!-- .fee_rate(FeeRate::from_sat_per_vb(5.0)); -->
<!-- builder.finish()? -->
<!-- }; -->
<!-- println!("Transaction details: {:#?}", details); -->
<!-- println!("Unsigned PSBT: {}", base64::encode(&serialize(&psbt))); -->
<!-- Ok(()) -->
<!-- } -->
<!-- ``` -->
<!-- ### Sign a transaction -->
<!-- ```rust,no_run -->
<!-- use bdk::{Wallet, SignOptions}; -->
<!-- use base64; -->
<!-- use bdk::bitcoin::consensus::deserialize; -->
<!-- use bdk::bitcoin::Network; -->
<!-- fn main() -> Result<(), bdk::Error> { -->
<!-- let wallet = Wallet::new_no_persist( -->
<!-- "wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)", -->
<!-- Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"), -->
<!-- Network::Testnet, -->
<!-- )?; -->
<!-- let psbt = "..."; -->
<!-- let mut psbt = deserialize(&base64::decode(psbt).unwrap())?; -->
<!-- let _finalized = wallet.sign(&mut psbt, SignOptions::default())?; -->
<!-- Ok(()) -->
<!-- } -->
<!-- ``` -->
## Testing
### Unit testing
```bash
cargo test
```
## License
Licensed under either of
* Apache License, Version 2.0
([LICENSE-APACHE](LICENSE-APACHE) or <https://www.apache.org/licenses/LICENSE-2.0>)
* MIT license
([LICENSE-MIT](LICENSE-MIT) or <https://opensource.org/licenses/MIT>)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.
[`bdk_chain`]: https://docs.rs/bdk_chain/latest
[`bdk_file_store`]: https://docs.rs/bdk_file_store/latest
[`bdk_electrum`]: https://docs.rs/bdk_electrum/latest
[`bdk_esplora`]: https://docs.rs/bdk_esplora/latest
[`KeychainScan`]: https://docs.rs/bdk_chain/latest/bdk_chain/keychain/struct.KeychainScan.html
[`rust-miniscript`]: https://docs.rs/miniscript/latest/miniscript/index.html

View File

@@ -1,54 +0,0 @@
#![doc = include_str!("../README.md")]
// only enables the `doc_cfg` feature when the `docsrs` configuration attribute is defined
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
docsrs,
doc(html_logo_url = "https://github.com/bitcoindevkit/bdk/raw/master/static/bdk.png")
)]
#![no_std]
#![warn(missing_docs)]
#[cfg(feature = "std")]
#[macro_use]
extern crate std;
#[doc(hidden)]
#[macro_use]
pub extern crate alloc;
pub extern crate bitcoin;
#[cfg(feature = "hardware-signer")]
pub extern crate hwi;
extern crate log;
pub extern crate miniscript;
extern crate serde;
extern crate serde_json;
#[cfg(feature = "keys-bip39")]
extern crate bip39;
#[allow(unused_imports)]
#[macro_use]
pub(crate) mod error;
pub mod descriptor;
pub mod keys;
pub mod psbt;
pub(crate) mod types;
pub mod wallet;
pub use descriptor::template;
pub use descriptor::HdKeyPaths;
pub use error::Error;
pub use types::*;
pub use wallet::signer;
pub use wallet::signer::SignOptions;
pub use wallet::tx_builder::TxBuilder;
pub use wallet::Wallet;
/// Get the version of BDK at runtime
pub fn version() -> &'static str {
env!("CARGO_PKG_VERSION", "unknown")
}
pub use bdk_chain as chain;
pub(crate) use bdk_chain::collections;

View File

@@ -1,79 +0,0 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Additional functions on the `rust-bitcoin` `PartiallySignedTransaction` structure.
use crate::FeeRate;
use alloc::vec::Vec;
use bitcoin::util::psbt::PartiallySignedTransaction as Psbt;
use bitcoin::TxOut;
// TODO upstream the functions here to `rust-bitcoin`?
/// Trait to add functions to extract utxos and calculate fees.
pub trait PsbtUtils {
/// Get the `TxOut` for the specified input index, if it doesn't exist in the PSBT `None` is returned.
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut>;
/// The total transaction fee amount, sum of input amounts minus sum of output amounts, in sats.
/// If the PSBT is missing a TxOut for an input returns None.
fn fee_amount(&self) -> Option<u64>;
/// The transaction's fee rate. This value will only be accurate if calculated AFTER the
/// `PartiallySignedTransaction` is finalized and all witness/signature data is added to the
/// transaction.
/// If the PSBT is missing a TxOut for an input returns None.
fn fee_rate(&self) -> Option<FeeRate>;
}
impl PsbtUtils for Psbt {
#[allow(clippy::all)] // We want to allow `manual_map` but it is too new.
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut> {
let tx = &self.unsigned_tx;
if input_index >= tx.input.len() {
return None;
}
if let Some(input) = self.inputs.get(input_index) {
if let Some(wit_utxo) = &input.witness_utxo {
Some(wit_utxo.clone())
} else if let Some(in_tx) = &input.non_witness_utxo {
Some(in_tx.output[tx.input[input_index].previous_output.vout as usize].clone())
} else {
None
}
} else {
None
}
}
fn fee_amount(&self) -> Option<u64> {
let tx = &self.unsigned_tx;
let utxos: Option<Vec<TxOut>> = (0..tx.input.len()).map(|i| self.get_utxo_for(i)).collect();
utxos.map(|inputs| {
let input_amount: u64 = inputs.iter().map(|i| i.value).sum();
let output_amount: u64 = self.unsigned_tx.output.iter().map(|o| o.value).sum();
input_amount
.checked_sub(output_amount)
.expect("input amount must be greater than output amount")
})
}
fn fee_rate(&self) -> Option<FeeRate> {
let fee_amount = self.fee_amount();
fee_amount.map(|fee| {
let weight = self.clone().extract_tx().weight();
FeeRate::from_wu(fee, weight)
})
}
}

File diff suppressed because it is too large.

View File

@@ -1,93 +0,0 @@
#![allow(unused)]
use bdk::{wallet::AddressIndex, Wallet};
use bdk_chain::{BlockId, ConfirmationTime};
use bitcoin::hashes::Hash;
use bitcoin::{BlockHash, Network, Transaction, TxOut};
/// Return a fake wallet that appears to be funded for testing.
pub fn get_funded_wallet_with_change(
descriptor: &str,
change: Option<&str>,
) -> (Wallet, bitcoin::Txid) {
let mut wallet = Wallet::new_no_persist(descriptor, change, Network::Regtest).unwrap();
let address = wallet.get_address(AddressIndex::New).address;
let tx = Transaction {
version: 1,
lock_time: bitcoin::PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 50_000,
script_pubkey: address.script_pubkey(),
}],
};
wallet
.insert_checkpoint(BlockId {
height: 1_000,
hash: BlockHash::all_zeros(),
})
.unwrap();
wallet
.insert_tx(
tx.clone(),
ConfirmationTime::Confirmed {
height: 1_000,
time: 100,
},
)
.unwrap();
(wallet, tx.txid())
}
pub fn get_funded_wallet(descriptor: &str) -> (Wallet, bitcoin::Txid) {
get_funded_wallet_with_change(descriptor, None)
}
pub fn get_test_wpkh() -> &'static str {
"wpkh(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW)"
}
pub fn get_test_single_sig_csv() -> &'static str {
// and(pk(Alice),older(6))
"wsh(and_v(v:pk(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW),older(6)))"
}
pub fn get_test_a_or_b_plus_csv() -> &'static str {
// or(pk(Alice),and(pk(Bob),older(144)))
"wsh(or_d(pk(cRjo6jqfVNP33HhSS76UhXETZsGTZYx8FMFvR9kpbtCSV1PmdZdu),and_v(v:pk(cMnkdebixpXMPfkcNEjjGin7s94hiehAH4mLbYkZoh9KSiNNmqC8),older(144))))"
}
pub fn get_test_single_sig_cltv() -> &'static str {
// and(pk(Alice),after(100000))
"wsh(and_v(v:pk(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW),after(100000)))"
}
pub fn get_test_tr_single_sig() -> &'static str {
"tr(cNJmN3fH9DDbDt131fQNkVakkpzawJBSeybCUNmP1BovpmGQ45xG)"
}
pub fn get_test_tr_with_taptree() -> &'static str {
"tr(b511bd5771e47ee27558b1765e87b541668304ec567721c7b880edc0a010da55,{pk(cPZzKuNmpuUjD1e8jUU4PVzy2b5LngbSip8mBsxf4e7rSFZVb4Uh),pk(8aee2b8120a5f157f1223f72b5e62b825831a27a9fdf427db7cc697494d4a642)})"
}
pub fn get_test_tr_with_taptree_both_priv() -> &'static str {
"tr(b511bd5771e47ee27558b1765e87b541668304ec567721c7b880edc0a010da55,{pk(cPZzKuNmpuUjD1e8jUU4PVzy2b5LngbSip8mBsxf4e7rSFZVb4Uh),pk(cNaQCDwmmh4dS9LzCgVtyy1e1xjCJ21GUDHe9K98nzb689JvinGV)})"
}
pub fn get_test_tr_repeated_key() -> &'static str {
"tr(b511bd5771e47ee27558b1765e87b541668304ec567721c7b880edc0a010da55,{and_v(v:pk(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW),after(100)),and_v(v:pk(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW),after(200))})"
}
pub fn get_test_tr_single_sig_xprv() -> &'static str {
"tr(tprv8ZgxMBicQKsPdDArR4xSAECuVxeX1jwwSXR4ApKbkYgZiziDc4LdBy2WvJeGDfUSE4UT4hHhbgEwbdq8ajjUHiKDegkwrNU6V55CxcxonVN/*)"
}
pub fn get_test_tr_with_taptree_xprv() -> &'static str {
"tr(cNJmN3fH9DDbDt131fQNkVakkpzawJBSeybCUNmP1BovpmGQ45xG,{pk(tprv8ZgxMBicQKsPdDArR4xSAECuVxeX1jwwSXR4ApKbkYgZiziDc4LdBy2WvJeGDfUSE4UT4hHhbgEwbdq8ajjUHiKDegkwrNU6V55CxcxonVN/*),pk(8aee2b8120a5f157f1223f72b5e62b825831a27a9fdf427db7cc697494d4a642)})"
}
pub fn get_test_tr_dup_keys() -> &'static str {
"tr(cNJmN3fH9DDbDt131fQNkVakkpzawJBSeybCUNmP1BovpmGQ45xG,{pk(8aee2b8120a5f157f1223f72b5e62b825831a27a9fdf427db7cc697494d4a642),pk(8aee2b8120a5f157f1223f72b5e62b825831a27a9fdf427db7cc697494d4a642)})"
}

View File

@@ -1,158 +0,0 @@
use bdk::bitcoin::TxIn;
use bdk::wallet::AddressIndex;
use bdk::wallet::AddressIndex::New;
use bdk::{psbt, FeeRate, SignOptions};
use bitcoin::util::psbt::PartiallySignedTransaction as Psbt;
use core::str::FromStr;
mod common;
use common::*;
// from bip 174
const PSBT_STR: &str = "cHNidP8BAKACAAAAAqsJSaCMWvfEm4IS9Bfi8Vqz9cM9zxU4IagTn4d6W3vkAAAAAAD+////qwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QBAAAAAP7///8CYDvqCwAAAAAZdqkUdopAu9dAy+gdmI5x3ipNXHE5ax2IrI4kAAAAAAAAGXapFG9GILVT+glechue4O/p+gOcykWXiKwAAAAAAAEHakcwRAIgR1lmF5fAGwNrJZKJSGhiGDR9iYZLcZ4ff89X0eURZYcCIFMJ6r9Wqk2Ikf/REf3xM286KdqGbX+EhtdVRs7tr5MZASEDXNxh/HupccC1AaZGoqg7ECy0OIEhfKaC3Ibi1z+ogpIAAQEgAOH1BQAAAAAXqRQ1RebjO4MsRwUPJNPuuTycA5SLx4cBBBYAFIXRNTfy4mVAWjTbr6nj3aAfuCMIAAAA";
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_psbt_input_legacy() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (mut wallet, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New);
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.inputs.push(psbt_bip.inputs[0].clone());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_psbt_input_segwit() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (mut wallet, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New);
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.inputs.push(psbt_bip.inputs[1].clone());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_tx_input() {
let (mut wallet, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New);
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.unsigned_tx.input.push(TxIn::default());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
#[test]
fn test_psbt_sign_with_finalized() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (mut wallet, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New);
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
// add a finalized input
psbt.inputs.push(psbt_bip.inputs[0].clone());
psbt.unsigned_tx
.input
.push(psbt_bip.unsigned_tx.input[0].clone());
let _ = wallet.sign(&mut psbt, SignOptions::default()).unwrap();
}
#[test]
fn test_psbt_fee_rate_with_witness_utxo() {
use psbt::PsbtUtils;
let expected_fee_rate = 1.2345;
let (mut wallet, _) = get_funded_wallet("wpkh(tprv8ZgxMBicQKsPd3EupYiPRhaMooHKUHJxNsTfYuScep13go8QFfHdtkG9nRkFGb7busX4isf6X9dURGCoKgitaApQ6MupRhZMcELAxTBRJgS/*)");
let addr = wallet.get_address(New);
let mut builder = wallet.build_tx();
builder.drain_to(addr.script_pubkey()).drain_wallet();
builder.fee_rate(FeeRate::from_sat_per_vb(expected_fee_rate));
let (mut psbt, _) = builder.finish().unwrap();
let fee_amount = psbt.fee_amount();
assert!(fee_amount.is_some());
let unfinalized_fee_rate = psbt.fee_rate().unwrap();
let finalized = wallet.sign(&mut psbt, Default::default()).unwrap();
assert!(finalized);
let finalized_fee_rate = psbt.fee_rate().unwrap();
assert!(finalized_fee_rate.as_sat_per_vb() >= expected_fee_rate);
assert!(finalized_fee_rate.as_sat_per_vb() < unfinalized_fee_rate.as_sat_per_vb());
}
#[test]
fn test_psbt_fee_rate_with_nonwitness_utxo() {
use psbt::PsbtUtils;
let expected_fee_rate = 1.2345;
let (mut wallet, _) = get_funded_wallet("pkh(tprv8ZgxMBicQKsPd3EupYiPRhaMooHKUHJxNsTfYuScep13go8QFfHdtkG9nRkFGb7busX4isf6X9dURGCoKgitaApQ6MupRhZMcELAxTBRJgS/*)");
let addr = wallet.get_address(New);
let mut builder = wallet.build_tx();
builder.drain_to(addr.script_pubkey()).drain_wallet();
builder.fee_rate(FeeRate::from_sat_per_vb(expected_fee_rate));
let (mut psbt, _) = builder.finish().unwrap();
let fee_amount = psbt.fee_amount();
assert!(fee_amount.is_some());
let unfinalized_fee_rate = psbt.fee_rate().unwrap();
let finalized = wallet.sign(&mut psbt, Default::default()).unwrap();
assert!(finalized);
let finalized_fee_rate = psbt.fee_rate().unwrap();
assert!(finalized_fee_rate.as_sat_per_vb() >= expected_fee_rate);
assert!(finalized_fee_rate.as_sat_per_vb() < unfinalized_fee_rate.as_sat_per_vb());
}
#[test]
fn test_psbt_fee_rate_with_missing_txout() {
use psbt::PsbtUtils;
let expected_fee_rate = 1.2345;
let (mut wpkh_wallet, _) = get_funded_wallet("wpkh(tprv8ZgxMBicQKsPd3EupYiPRhaMooHKUHJxNsTfYuScep13go8QFfHdtkG9nRkFGb7busX4isf6X9dURGCoKgitaApQ6MupRhZMcELAxTBRJgS/*)");
let addr = wpkh_wallet.get_address(New);
let mut builder = wpkh_wallet.build_tx();
builder.drain_to(addr.script_pubkey()).drain_wallet();
builder.fee_rate(FeeRate::from_sat_per_vb(expected_fee_rate));
let (mut wpkh_psbt, _) = builder.finish().unwrap();
wpkh_psbt.inputs[0].witness_utxo = None;
wpkh_psbt.inputs[0].non_witness_utxo = None;
assert!(wpkh_psbt.fee_amount().is_none());
assert!(wpkh_psbt.fee_rate().is_none());
let (mut pkh_wallet, _) = get_funded_wallet("pkh(tprv8ZgxMBicQKsPd3EupYiPRhaMooHKUHJxNsTfYuScep13go8QFfHdtkG9nRkFGb7busX4isf6X9dURGCoKgitaApQ6MupRhZMcELAxTBRJgS/*)");
let addr = pkh_wallet.get_address(New);
let mut builder = pkh_wallet.build_tx();
builder.drain_to(addr.script_pubkey()).drain_wallet();
builder.fee_rate(FeeRate::from_sat_per_vb(expected_fee_rate));
let (mut pkh_psbt, _) = builder.finish().unwrap();
pkh_psbt.inputs[0].non_witness_utxo = None;
assert!(pkh_psbt.fee_amount().is_none());
assert!(pkh_psbt.fee_rate().is_none());
}

File diff suppressed because it is too large


@@ -1,31 +0,0 @@
[package]
name = "bdk_chain"
version = "0.5.0"
edition = "2021"
rust-version = "1.57"
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk_chain"
description = "Collection of core structures for Bitcoin Dev Kit."
license = "MIT OR Apache-2.0"
readme = "README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# For no-std, remember to enable the bitcoin/no-std feature
bitcoin = { version = "0.29", default-features = false }
serde_crate = { package = "serde", version = "1", optional = true, features = ["derive"] }
# Use the `hashbrown` feature flag to get `HashSet` and `HashMap` from it.
# note: version 0.13 breaks our MSRV.
hashbrown = { version = "0.11", optional = true, features = ["serde"] }
miniscript = { version = "9.0.0", optional = true, default-features = false }
[dev-dependencies]
rand = "0.8"
[features]
default = ["std"]
std = ["bitcoin/std", "miniscript/std"]
serde = ["serde_crate", "bitcoin/serde"]


@@ -1,3 +0,0 @@
# BDK Chain
BDK keychain tracker, tools for storing and indexing chain data.


@@ -1,258 +0,0 @@
use bitcoin::{hashes::Hash, BlockHash, OutPoint, TxOut, Txid};
use crate::{Anchor, COINBASE_MATURITY};
/// Represents the observed position of some chain data.
///
/// The generic `A` should be an [`Anchor`] implementation.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, core::hash::Hash)]
pub enum ChainPosition<A> {
/// The chain data is seen as confirmed and is anchored by `A`.
Confirmed(A),
/// The chain data is seen in the mempool at the given timestamp.
Unconfirmed(u64),
}
impl<A> ChainPosition<A> {
/// Returns whether [`ChainPosition`] is confirmed or not.
pub fn is_confirmed(&self) -> bool {
matches!(self, Self::Confirmed(_))
}
}
impl<A: Clone> ChainPosition<&A> {
/// Maps a [`ChainPosition<&A>`] into a [`ChainPosition<A>`] by cloning the contents.
pub fn cloned(self) -> ChainPosition<A> {
match self {
ChainPosition::Confirmed(a) => ChainPosition::Confirmed(a.clone()),
ChainPosition::Unconfirmed(last_seen) => ChainPosition::Unconfirmed(last_seen),
}
}
}
impl<A: Anchor> ChainPosition<A> {
/// Determines the upper bound of the confirmation height.
pub fn confirmation_height_upper_bound(&self) -> Option<u32> {
match self {
ChainPosition::Confirmed(a) => Some(a.confirmation_height_upper_bound()),
ChainPosition::Unconfirmed(_) => None,
}
}
}
/// Block height and timestamp at which a transaction is confirmed.
#[derive(Debug, Clone, PartialEq, Eq, Copy, PartialOrd, Ord, core::hash::Hash)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(crate = "serde_crate")
)]
pub enum ConfirmationTime {
/// The confirmed variant.
Confirmed {
/// Confirmation height.
height: u32,
/// Confirmation time in unix seconds.
time: u64,
},
/// The unconfirmed variant.
Unconfirmed {
/// The last-seen timestamp in unix seconds.
last_seen: u64,
},
}
impl ConfirmationTime {
/// Construct an unconfirmed variant using the given `last_seen` time in unix seconds.
pub fn unconfirmed(last_seen: u64) -> Self {
Self::Unconfirmed { last_seen }
}
/// Returns whether [`ConfirmationTime`] is the confirmed variant.
pub fn is_confirmed(&self) -> bool {
matches!(self, Self::Confirmed { .. })
}
}
impl From<ChainPosition<ConfirmationTimeAnchor>> for ConfirmationTime {
fn from(observed_as: ChainPosition<ConfirmationTimeAnchor>) -> Self {
match observed_as {
ChainPosition::Confirmed(a) => Self::Confirmed {
height: a.confirmation_height,
time: a.confirmation_time,
},
ChainPosition::Unconfirmed(last_seen) => Self::Unconfirmed { last_seen },
}
}
}
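// Illustrative sketch (not part of this diff): the `From` impl above maps a
// confirmed anchor's height and time into `ConfirmationTime::Confirmed`, and
// carries an unconfirmed position's last-seen timestamp over. All values here
// are made up for the example.
use bdk_chain::bitcoin::{hashes::Hash, BlockHash};
use bdk_chain::{BlockId, ChainPosition, ConfirmationTime, ConfirmationTimeAnchor};

fn confirmation_time_from_position() {
    let anchor = ConfirmationTimeAnchor {
        anchor_block: BlockId {
            height: 100,
            hash: BlockHash::hash(b"block-100"),
        },
        confirmation_height: 100,
        confirmation_time: 1_600_000_000,
    };
    let confirmed: ConfirmationTime = ChainPosition::Confirmed(anchor).into();
    assert_eq!(
        confirmed,
        ConfirmationTime::Confirmed {
            height: 100,
            time: 1_600_000_000,
        }
    );
    let unconfirmed: ConfirmationTime =
        ChainPosition::<ConfirmationTimeAnchor>::Unconfirmed(1_600_000_123).into();
    assert_eq!(
        unconfirmed,
        ConfirmationTime::Unconfirmed {
            last_seen: 1_600_000_123,
        }
    );
}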
/// A reference to a block in the canonical chain.
#[derive(Debug, Clone, PartialEq, Eq, Copy, PartialOrd, Ord, core::hash::Hash)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(crate = "serde_crate")
)]
pub struct BlockId {
/// The height of the block.
pub height: u32,
/// The hash of the block.
pub hash: BlockHash,
}
impl Default for BlockId {
fn default() -> Self {
Self {
height: Default::default(),
hash: BlockHash::from_inner([0u8; 32]),
}
}
}
impl From<(u32, BlockHash)> for BlockId {
fn from((height, hash): (u32, BlockHash)) -> Self {
Self { height, hash }
}
}
impl From<BlockId> for (u32, BlockHash) {
fn from(block_id: BlockId) -> Self {
(block_id.height, block_id.hash)
}
}
impl From<(&u32, &BlockHash)> for BlockId {
fn from((height, hash): (&u32, &BlockHash)) -> Self {
Self {
height: *height,
hash: *hash,
}
}
}
/// An [`Anchor`] implementation that also records the exact confirmation height of the transaction.
#[derive(Debug, Default, Clone, PartialEq, Eq, Copy, PartialOrd, Ord, core::hash::Hash)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(crate = "serde_crate")
)]
pub struct ConfirmationHeightAnchor {
/// The anchor block.
pub anchor_block: BlockId,
/// The exact confirmation height of the transaction.
///
/// It is assumed that this value is never larger than the height of the anchor block.
pub confirmation_height: u32,
}
impl Anchor for ConfirmationHeightAnchor {
fn anchor_block(&self) -> BlockId {
self.anchor_block
}
fn confirmation_height_upper_bound(&self) -> u32 {
self.confirmation_height
}
}
/// An [`Anchor`] implementation that also records the exact confirmation time and height of the
/// transaction.
#[derive(Debug, Default, Clone, PartialEq, Eq, Copy, PartialOrd, Ord, core::hash::Hash)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(crate = "serde_crate")
)]
pub struct ConfirmationTimeAnchor {
/// The anchor block.
pub anchor_block: BlockId,
/// The confirmation height of the chain data being anchored.
pub confirmation_height: u32,
/// The confirmation time of the chain data being anchored.
pub confirmation_time: u64,
}
impl Anchor for ConfirmationTimeAnchor {
fn anchor_block(&self) -> BlockId {
self.anchor_block
}
fn confirmation_height_upper_bound(&self) -> u32 {
self.confirmation_height
}
}
/// A `TxOut` with as much data as we can retrieve about it
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct FullTxOut<A> {
/// The location of the `TxOut`.
pub outpoint: OutPoint,
/// The `TxOut`.
pub txout: TxOut,
/// The position of the transaction referenced by `outpoint` in the overall chain.
pub chain_position: ChainPosition<A>,
/// The txid and chain position of the transaction (if any) that has spent this output.
pub spent_by: Option<(ChainPosition<A>, Txid)>,
/// Whether this output is on a coinbase transaction.
pub is_on_coinbase: bool,
}
impl<A: Anchor> FullTxOut<A> {
/// Whether the `txout` is considered mature.
///
/// Depending on the implementation of [`confirmation_height_upper_bound`] in [`Anchor`], this
/// method may return false negatives. In other words, the interpreted confirmation count may be
/// less than the actual value.
///
/// [`confirmation_height_upper_bound`]: Anchor::confirmation_height_upper_bound
pub fn is_mature(&self, tip: u32) -> bool {
if self.is_on_coinbase {
let tx_height = match &self.chain_position {
ChainPosition::Confirmed(anchor) => anchor.confirmation_height_upper_bound(),
ChainPosition::Unconfirmed(_) => {
debug_assert!(false, "coinbase tx can never be unconfirmed");
return false;
}
};
let age = tip.saturating_sub(tx_height);
if age + 1 < COINBASE_MATURITY {
return false;
}
}
true
}
/// Whether the utxo is/was/will be spendable with chain `tip`.
///
/// This method does not take into account the locktime.
///
/// Depending on the implementation of [`confirmation_height_upper_bound`] in [`Anchor`], this
/// method may return false negatives. In other words, the interpreted confirmation count may be
/// less than the actual value.
///
/// [`confirmation_height_upper_bound`]: Anchor::confirmation_height_upper_bound
pub fn is_confirmed_and_spendable(&self, tip: u32) -> bool {
if !self.is_mature(tip) {
return false;
}
let confirmation_height = match &self.chain_position {
ChainPosition::Confirmed(anchor) => anchor.confirmation_height_upper_bound(),
ChainPosition::Unconfirmed(_) => return false,
};
if confirmation_height > tip {
return false;
}
// if the spending tx is confirmed within tip height, the txout is no longer spendable
if let Some((ChainPosition::Confirmed(spending_anchor), _)) = &self.spent_by {
if spending_anchor.anchor_block().height <= tip {
return false;
}
}
true
}
}
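// Illustrative sketch (not part of this diff): coinbase maturity as computed
// by `is_mature` above. A coinbase output confirmed at height 1 becomes
// mature once `age + 1 >= COINBASE_MATURITY`, i.e. at tip height 100. All
// values are made up for the example.
use bdk_chain::bitcoin::{hashes::Hash, BlockHash, OutPoint, Script, TxOut};
use bdk_chain::{BlockId, ChainPosition, ConfirmationHeightAnchor, FullTxOut, COINBASE_MATURITY};

fn coinbase_maturity() {
    let anchor = ConfirmationHeightAnchor {
        anchor_block: BlockId {
            height: 1,
            hash: BlockHash::hash(b"block-1"),
        },
        confirmation_height: 1,
    };
    let txo = FullTxOut {
        outpoint: OutPoint::null(),
        txout: TxOut {
            value: 50 * 100_000_000,
            script_pubkey: Script::new(),
        },
        chain_position: ChainPosition::Confirmed(anchor),
        spent_by: None,
        is_on_coinbase: true,
    };
    assert_eq!(COINBASE_MATURITY, 100);
    assert!(!txo.is_mature(50)); // only 50 confirmations so far
    assert!(txo.is_mature(100)); // exactly 100 confirmations
}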


@@ -1,25 +0,0 @@
use crate::BlockId;
/// Represents a service that tracks the blockchain.
///
/// The main method is [`is_block_in_chain`], which determines whether a given [`BlockId`]
/// is an ancestor of another "static block".
///
/// [`is_block_in_chain`]: Self::is_block_in_chain
pub trait ChainOracle {
/// Error type.
type Error: core::fmt::Debug;
/// Determines whether the given `block` exists as an ancestor of `chain_tip`.
///
/// If `None` is returned, it means the implementation cannot determine whether `block` exists
/// under `chain_tip`.
fn is_block_in_chain(
&self,
block: BlockId,
chain_tip: BlockId,
) -> Result<Option<bool>, Self::Error>;
/// Get the best chain's chain tip.
fn get_chain_tip(&self) -> Result<Option<BlockId>, Self::Error>;
}
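// Illustrative sketch (not part of this diff): a deliberately useless
// `ChainOracle` that can never decide ancestry. It only demonstrates the
// trait's shape: `Ok(None)` means "cannot determine", which is distinct from
// an error.
use core::convert::Infallible;
use bdk_chain::{BlockId, ChainOracle};

struct UnknownOracle;

impl ChainOracle for UnknownOracle {
    type Error = Infallible;

    fn is_block_in_chain(
        &self,
        _block: BlockId,
        _chain_tip: BlockId,
    ) -> Result<Option<bool>, Self::Error> {
        Ok(None) // this oracle has no data, so it can never decide
    }

    fn get_chain_tip(&self) -> Result<Option<BlockId>, Self::Error> {
        Ok(None) // and it has no tip either
    }
}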


@@ -1,16 +0,0 @@
use crate::miniscript::{Descriptor, DescriptorPublicKey};
/// A trait to extend the functionality of a miniscript descriptor.
pub trait DescriptorExt {
/// Returns the minimum value (in satoshis) at which an output is broadcastable.
fn dust_value(&self) -> u64;
}
impl DescriptorExt for Descriptor<DescriptorPublicKey> {
fn dust_value(&self) -> u64 {
self.at_derivation_index(0)
.script_pubkey()
.dust_value()
.to_sat()
}
}
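// Illustrative sketch (not part of this diff): `dust_value` on a simple
// single-key wpkh descriptor. The pubkey below is a throwaway value, and the
// expected 294 sats is Bitcoin Core's P2WPKH dust threshold at the default
// 3 sat/vB dust relay rate (an assumption of this sketch, not something the
// trait itself guarantees).
use bdk_chain::miniscript::{Descriptor, DescriptorPublicKey};
use bdk_chain::DescriptorExt;
use core::str::FromStr;

fn wpkh_dust_value() {
    let desc = Descriptor::<DescriptorPublicKey>::from_str(
        "wpkh(025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357)",
    )
    .unwrap();
    assert_eq!(desc.dust_value(), 294);
}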


@@ -1,30 +0,0 @@
#![allow(unused)]
use alloc::vec::Vec;
use bitcoin::{
consensus,
hashes::{hex::FromHex, Hash},
Transaction,
};
use crate::BlockId;
pub const RAW_TX_1: &str = "0200000000010116d6174da7183d70d0a7d4dc314d517a7d135db79ad63515028b293a76f4f9d10000000000feffffff023a21fc8350060000160014531c405e1881ef192294b8813631e258bf98ea7a1027000000000000225120a60869f0dbcf1dc659c9cecbaf8050135ea9e8cdc487053f1dc6880949dc684c024730440220591b1a172a122da49ba79a3e79f98aaa03fd7a372f9760da18890b6a327e6010022013e82319231da6c99abf8123d7c07e13cf9bd8d76e113e18dc452e5024db156d012102318a2d558b2936c52e320decd6d92a88d7f530be91b6fe0af5caf41661e77da3ef2e0100";
pub const RAW_TX_2: &str = "02000000000101a688607020cfae91a61e7c516b5ef1264d5d77f17200c3866826c6c808ebf1620000000000feffffff021027000000000000225120a60869f0dbcf1dc659c9cecbaf8050135ea9e8cdc487053f1dc6880949dc684c20fd48ff530600001600146886c525e41d4522042bd0b159dfbade2504a6bb024730440220740ff7e665cd20565d4296b549df8d26b941be3f1e3af89a0b60e50c0dbeb69a02206213ab7030cf6edc6c90d4ccf33010644261e029950a688dc0b1a9ebe6ddcc5a012102f2ac6b396a97853cb6cd62242c8ae4842024742074475023532a51e9c53194253e760100";
pub const RAW_TX_3: &str = "0200000000010135d67ee47b557e68b8c6223958f597381965ed719f1207ee2b9e20432a24a5dc0100000000feffffff021027000000000000225120a82f29944d65b86ae6b5e5cc75e294ead6c59391a1edc5e016e3498c67fc7bbb62215a5055060000160014070df7671dea67a50c4799a744b5c9be8f4bac690247304402207ebf8d29f71fd03e7e6977b3ea78ca5fcc5c49a42ae822348fc401862fdd766c02201d7e4ff0684ecb008b6142f36ead1b0b4d615524c4f58c261113d361f4427e25012103e6a75e2fab85e5ecad641afc4ffba7222f998649d9f18cac92f0fcc8618883b3ee760100";
pub const RAW_TX_4: &str = "02000000000101d00e8f76ed313e19b339ee293c0f52b0325c95e24c8f3966fa353fb2bedbcf580100000000feffffff021027000000000000225120882d74e5d0572d5a816cef0041a96b6c1de832f6f9676d9605c44d5e9a97d3dc9cda55fe53060000160014852b5864b8edd42fab4060c87f818e50780865ff0247304402201dccbb9bed7fba924b6d249c5837cc9b37470c0e3d8fbea77cb59baba3efe6fa0220700cc170916913b9bfc2bc0fefb6af776e8b542c561702f136cddc1c7aa43141012103acec3fc79dbbca745815c2a807dc4e81010c80e308e84913f59cb42a275dad97f3760100";
pub fn tx_from_hex(s: &str) -> Transaction {
let raw = Vec::from_hex(s).expect("data must be in hex");
consensus::deserialize(raw.as_slice()).expect("must deserialize")
}
pub fn new_hash<H: Hash>(s: &str) -> H {
<H as bitcoin::hashes::Hash>::hash(s.as_bytes())
}
pub fn new_block_id(height: u32, hash: &str) -> BlockId {
BlockId {
height,
hash: new_hash(hash),
}
}
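// Quick sanity check (not part of this diff) exercising the helpers above:
fn helpers_sanity_check() {
    let tx = tx_from_hex(RAW_TX_1);
    assert_eq!(tx.version, 2); // all four raw txs above are version-2
    assert_eq!(tx.input.len(), 1);
    let block = new_block_id(100, "block-100");
    assert_eq!(block.height, 100);
}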


@@ -1,245 +0,0 @@
//! Contains the [`IndexedTxGraph`] structure and associated types.
//!
//! This is essentially a [`TxGraph`] combined with an indexer.
use alloc::vec::Vec;
use bitcoin::{OutPoint, Transaction, TxOut};
use crate::{
keychain::DerivationAdditions,
tx_graph::{Additions, TxGraph},
Anchor, Append,
};
/// A struct that combines [`TxGraph`] and an [`Indexer`] implementation.
///
/// This structure ensures that [`TxGraph`] and [`Indexer`] are updated atomically.
#[derive(Debug)]
pub struct IndexedTxGraph<A, I> {
/// Transaction index.
pub index: I,
graph: TxGraph<A>,
}
impl<A, I: Default> Default for IndexedTxGraph<A, I> {
fn default() -> Self {
Self {
graph: Default::default(),
index: Default::default(),
}
}
}
impl<A, I> IndexedTxGraph<A, I> {
/// Construct a new [`IndexedTxGraph`] with a given `index`.
pub fn new(index: I) -> Self {
Self {
index,
graph: TxGraph::default(),
}
}
/// Get a reference of the internal transaction graph.
pub fn graph(&self) -> &TxGraph<A> {
&self.graph
}
}
impl<A: Anchor, I: Indexer> IndexedTxGraph<A, I> {
/// Applies the [`IndexedAdditions`] to the [`IndexedTxGraph`].
pub fn apply_additions(&mut self, additions: IndexedAdditions<A, I::Additions>) {
let IndexedAdditions {
graph_additions,
index_additions,
} = additions;
self.index.apply_additions(index_additions);
for tx in &graph_additions.txs {
self.index.index_tx(tx);
}
for (&outpoint, txout) in &graph_additions.txouts {
self.index.index_txout(outpoint, txout);
}
self.graph.apply_additions(graph_additions);
}
}
impl<A: Anchor, I: Indexer> IndexedTxGraph<A, I>
where
I::Additions: Default + Append,
{
/// Apply an `update` directly.
///
/// `update` is a [`TxGraph<A>`] and the resultant changes are returned as [`IndexedAdditions`].
pub fn apply_update(&mut self, update: TxGraph<A>) -> IndexedAdditions<A, I::Additions> {
let graph_additions = self.graph.apply_update(update);
let mut index_additions = I::Additions::default();
for added_tx in &graph_additions.txs {
index_additions.append(self.index.index_tx(added_tx));
}
for (&added_outpoint, added_txout) in &graph_additions.txouts {
index_additions.append(self.index.index_txout(added_outpoint, added_txout));
}
IndexedAdditions {
graph_additions,
index_additions,
}
}
/// Insert a floating `txout` of given `outpoint`.
pub fn insert_txout(
&mut self,
outpoint: OutPoint,
txout: &TxOut,
) -> IndexedAdditions<A, I::Additions> {
let mut update = TxGraph::<A>::default();
let _ = update.insert_txout(outpoint, txout.clone());
self.apply_update(update)
}
/// Insert and index a transaction into the graph.
///
/// `anchors` can be provided to anchor the transaction to various blocks. `seen_at` is a
/// unix timestamp of when the transaction was last seen.
pub fn insert_tx(
&mut self,
tx: &Transaction,
anchors: impl IntoIterator<Item = A>,
seen_at: Option<u64>,
) -> IndexedAdditions<A, I::Additions> {
let txid = tx.txid();
let mut update = TxGraph::<A>::default();
if self.graph.get_tx(txid).is_none() {
let _ = update.insert_tx(tx.clone());
}
for anchor in anchors.into_iter() {
let _ = update.insert_anchor(txid, anchor);
}
if let Some(seen_at) = seen_at {
let _ = update.insert_seen_at(txid, seen_at);
}
self.apply_update(update)
}
/// Insert relevant transactions from the given `txs` iterator.
///
/// Relevancy is determined by the [`Indexer::is_tx_relevant`] implementation of `I`. Irrelevant
/// transactions in `txs` will be ignored. `txs` does not need to be in topological order.
///
/// `anchors` can be provided to anchor the transactions to blocks. `seen_at` is a unix
/// timestamp of when the transactions were last seen.
pub fn insert_relevant_txs<'t>(
&mut self,
txs: impl IntoIterator<Item = (&'t Transaction, impl IntoIterator<Item = A>)>,
seen_at: Option<u64>,
) -> IndexedAdditions<A, I::Additions> {
// The algorithm below allows for non-topologically ordered transactions by using two loops.
// This is achieved by:
// 1. insert all txs into the index; if a tx is irrelevant, the index simply
// stores nothing about it.
// 2. in a second loop, decide whether to insert each tx into the graph,
// depending on whether `is_tx_relevant` returns true.
let mut additions = IndexedAdditions::<A, I::Additions>::default();
let mut transactions = Vec::new();
for (tx, anchors) in txs.into_iter() {
additions.index_additions.append(self.index.index_tx(tx));
transactions.push((tx, anchors));
}
additions.append(
transactions
.into_iter()
.filter_map(|(tx, anchors)| match self.index.is_tx_relevant(tx) {
true => Some(self.insert_tx(tx, anchors, seen_at)),
false => None,
})
.fold(Default::default(), |mut acc, other| {
acc.append(other);
acc
}),
);
additions
}
}
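// Illustrative sketch (not part of this diff): wiring an `IndexedTxGraph` to
// a `KeychainTxOutIndex` and inserting an unconfirmed transaction via
// `insert_tx` above. The descriptor key is a throwaway value, and `tx` is
// assumed to come from some chain source.
use bdk_chain::bitcoin::Transaction;
use bdk_chain::keychain::KeychainTxOutIndex;
use bdk_chain::miniscript::{Descriptor, DescriptorPublicKey};
use bdk_chain::{ConfirmationHeightAnchor, IndexedTxGraph};
use core::str::FromStr;

fn index_unconfirmed_tx(tx: &Transaction) {
    let mut index = KeychainTxOutIndex::<&'static str>::default();
    let descriptor = Descriptor::<DescriptorPublicKey>::from_str(
        "wpkh(025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357)",
    )
    .unwrap();
    index.add_keychain("external", descriptor);

    let mut graph = IndexedTxGraph::<ConfirmationHeightAnchor, _>::new(index);
    // No anchors (unconfirmed); record when the tx was last seen instead.
    let additions = graph.insert_tx(tx, core::iter::empty(), Some(1_680_000_000));
    // `IndexedAdditions` is `#[must_use]`: in a real application it would be
    // handed to persistence rather than dropped as in this throwaway sketch.
    let _ = additions;
}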
/// A structure that represents changes to an [`IndexedTxGraph`].
#[derive(Clone, Debug, PartialEq)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(
crate = "serde_crate",
bound(
deserialize = "A: Ord + serde::Deserialize<'de>, IA: serde::Deserialize<'de>",
serialize = "A: Ord + serde::Serialize, IA: serde::Serialize"
)
)
)]
#[must_use]
pub struct IndexedAdditions<A, IA> {
/// [`TxGraph`] additions.
pub graph_additions: Additions<A>,
/// [`Indexer`] additions.
pub index_additions: IA,
}
impl<A, IA: Default> Default for IndexedAdditions<A, IA> {
fn default() -> Self {
Self {
graph_additions: Default::default(),
index_additions: Default::default(),
}
}
}
impl<A: Anchor, IA: Append> Append for IndexedAdditions<A, IA> {
fn append(&mut self, other: Self) {
self.graph_additions.append(other.graph_additions);
self.index_additions.append(other.index_additions);
}
fn is_empty(&self) -> bool {
self.graph_additions.is_empty() && self.index_additions.is_empty()
}
}
impl<A, IA: Default> From<Additions<A>> for IndexedAdditions<A, IA> {
fn from(graph_additions: Additions<A>) -> Self {
Self {
graph_additions,
..Default::default()
}
}
}
impl<A, K> From<DerivationAdditions<K>> for IndexedAdditions<A, DerivationAdditions<K>> {
fn from(index_additions: DerivationAdditions<K>) -> Self {
Self {
graph_additions: Default::default(),
index_additions,
}
}
}
/// Represents a structure that can index transaction data.
pub trait Indexer {
/// The resultant "additions" when new transaction data is indexed.
type Additions;
/// Scan and index the given `outpoint` and `txout`.
fn index_txout(&mut self, outpoint: OutPoint, txout: &TxOut) -> Self::Additions;
/// Scan and index the given transaction.
fn index_tx(&mut self, tx: &Transaction) -> Self::Additions;
/// Apply additions to itself.
fn apply_additions(&mut self, additions: Self::Additions);
/// Determines whether the transaction should be included in the index.
fn is_tx_relevant(&self, tx: &Transaction) -> bool;
}


@@ -1,266 +0,0 @@
//! Module for keychain related structures.
//!
//! A keychain here is a set of application-defined indexes for a miniscript descriptor where we can
//! derive script pubkeys at a particular derivation index. The application's index is simply
//! anything that implements `Ord`.
//!
//! [`KeychainTxOutIndex`] indexes script pubkeys of keychains and scans in relevant outpoints
//! (those that have a `txout` containing an indexed script pubkey). Internally, this uses [`SpkTxOutIndex`], but
//! also maintains "revealed" and "lookahead" index counts per keychain.
//!
//! [`SpkTxOutIndex`]: crate::SpkTxOutIndex
use crate::{
collections::BTreeMap,
indexed_tx_graph::IndexedAdditions,
local_chain::{self, LocalChain},
tx_graph::TxGraph,
Anchor, Append,
};
#[cfg(feature = "miniscript")]
mod txout_index;
#[cfg(feature = "miniscript")]
pub use txout_index::*;
/// Represents updates to the derivation index of a [`KeychainTxOutIndex`].
///
/// It can be applied to [`KeychainTxOutIndex`] with [`apply_additions`]. [`DerivationAdditions`] are
/// monotone in that they will never decrease the revealed derivation index.
///
/// [`KeychainTxOutIndex`]: crate::keychain::KeychainTxOutIndex
/// [`apply_additions`]: crate::keychain::KeychainTxOutIndex::apply_additions
#[derive(Clone, Debug, PartialEq)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(
crate = "serde_crate",
bound(
deserialize = "K: Ord + serde::Deserialize<'de>",
serialize = "K: Ord + serde::Serialize"
)
)
)]
#[must_use]
pub struct DerivationAdditions<K>(pub BTreeMap<K, u32>);
impl<K> DerivationAdditions<K> {
/// Returns whether the additions are empty.
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
/// Get the inner map of the keychain to its new derivation index.
pub fn as_inner(&self) -> &BTreeMap<K, u32> {
&self.0
}
}
impl<K: Ord> Append for DerivationAdditions<K> {
/// Append another [`DerivationAdditions`] into self.
///
/// If the keychain already exists, increase the index when the other's index > self's index.
/// If the keychain did not exist, append the new keychain.
fn append(&mut self, mut other: Self) {
self.0.iter_mut().for_each(|(key, index)| {
if let Some(other_index) = other.0.remove(key) {
*index = other_index.max(*index);
}
});
self.0.append(&mut other.0);
}
fn is_empty(&self) -> bool {
self.0.is_empty()
}
}
impl<K> Default for DerivationAdditions<K> {
fn default() -> Self {
Self(Default::default())
}
}
impl<K> AsRef<BTreeMap<K, u32>> for DerivationAdditions<K> {
fn as_ref(&self) -> &BTreeMap<K, u32> {
&self.0
}
}
/// A structure to update [`KeychainTxOutIndex`], [`TxGraph`] and [`LocalChain`]
/// atomically.
#[derive(Debug, Clone, PartialEq)]
pub struct LocalUpdate<K, A> {
/// Last active derivation index per keychain (`K`).
pub keychain: BTreeMap<K, u32>,
/// Update for the [`TxGraph`].
pub graph: TxGraph<A>,
/// Update for the [`LocalChain`].
pub chain: LocalChain,
}
impl<K, A> Default for LocalUpdate<K, A> {
fn default() -> Self {
Self {
keychain: Default::default(),
graph: Default::default(),
chain: Default::default(),
}
}
}
/// A structure that records the corresponding changes as a result of applying a [`LocalUpdate`].
#[derive(Debug, Clone, PartialEq)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(
crate = "serde_crate",
bound(
deserialize = "K: Ord + serde::Deserialize<'de>, A: Ord + serde::Deserialize<'de>",
serialize = "K: Ord + serde::Serialize, A: Ord + serde::Serialize",
)
)
)]
pub struct LocalChangeSet<K, A> {
/// Changes to the [`LocalChain`].
pub chain_changeset: local_chain::ChangeSet,
/// Additions to [`IndexedTxGraph`].
///
/// [`IndexedTxGraph`]: crate::indexed_tx_graph::IndexedTxGraph
pub indexed_additions: IndexedAdditions<A, DerivationAdditions<K>>,
}
impl<K, A> Default for LocalChangeSet<K, A> {
fn default() -> Self {
Self {
chain_changeset: Default::default(),
indexed_additions: Default::default(),
}
}
}
impl<K: Ord, A: Anchor> Append for LocalChangeSet<K, A> {
fn append(&mut self, other: Self) {
Append::append(&mut self.chain_changeset, other.chain_changeset);
Append::append(&mut self.indexed_additions, other.indexed_additions);
}
fn is_empty(&self) -> bool {
self.chain_changeset.is_empty() && self.indexed_additions.is_empty()
}
}
impl<K, A> From<local_chain::ChangeSet> for LocalChangeSet<K, A> {
fn from(chain_changeset: local_chain::ChangeSet) -> Self {
Self {
chain_changeset,
..Default::default()
}
}
}
impl<K, A> From<IndexedAdditions<A, DerivationAdditions<K>>> for LocalChangeSet<K, A> {
fn from(indexed_additions: IndexedAdditions<A, DerivationAdditions<K>>) -> Self {
Self {
indexed_additions,
..Default::default()
}
}
}
/// Balance, differentiated into various categories.
#[derive(Debug, PartialEq, Eq, Clone, Default)]
#[cfg_attr(
feature = "serde",
derive(serde::Deserialize, serde::Serialize),
serde(crate = "serde_crate",)
)]
pub struct Balance {
/// All coinbase outputs not yet matured
pub immature: u64,
/// Unconfirmed UTXOs generated by a wallet tx
pub trusted_pending: u64,
/// Unconfirmed UTXOs received from an external wallet
pub untrusted_pending: u64,
/// Confirmed and immediately spendable balance
pub confirmed: u64,
}
impl Balance {
/// Get sum of trusted_pending and confirmed coins.
///
/// This is the balance you can spend right now that shouldn't get cancelled via another party
/// double spending it.
pub fn trusted_spendable(&self) -> u64 {
self.confirmed + self.trusted_pending
}
/// Get the whole balance visible to the wallet.
pub fn total(&self) -> u64 {
self.confirmed + self.trusted_pending + self.untrusted_pending + self.immature
}
}
impl core::fmt::Display for Balance {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(
f,
"{{ immature: {}, trusted_pending: {}, untrusted_pending: {}, confirmed: {} }}",
self.immature, self.trusted_pending, self.untrusted_pending, self.confirmed
)
}
}
impl core::ops::Add for Balance {
type Output = Self;
fn add(self, other: Self) -> Self {
Self {
immature: self.immature + other.immature,
trusted_pending: self.trusted_pending + other.trusted_pending,
untrusted_pending: self.untrusted_pending + other.untrusted_pending,
confirmed: self.confirmed + other.confirmed,
}
}
}
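// Illustrative sketch (not part of this diff): combining and querying
// balances via the `Add` impl above. All amounts are arbitrary sats.
fn balance_arithmetic() {
    let wallet_a = Balance {
        immature: 0,
        trusted_pending: 1_000,
        untrusted_pending: 500,
        confirmed: 10_000,
    };
    let wallet_b = Balance {
        confirmed: 2_500,
        ..Balance::default()
    };
    assert_eq!(wallet_a.trusted_spendable(), 11_000); // confirmed + trusted_pending
    let combined = wallet_a + wallet_b;
    assert_eq!(combined.confirmed, 12_500);
    assert_eq!(combined.total(), 14_000);
}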
#[cfg(test)]
mod test {
use super::*;
#[test]
fn append_keychain_derivation_indices() {
#[derive(Ord, PartialOrd, Eq, PartialEq, Clone, Debug)]
enum Keychain {
One,
Two,
Three,
Four,
}
let mut lhs_di = BTreeMap::<Keychain, u32>::default();
let mut rhs_di = BTreeMap::<Keychain, u32>::default();
lhs_di.insert(Keychain::One, 7);
lhs_di.insert(Keychain::Two, 0);
rhs_di.insert(Keychain::One, 3);
rhs_di.insert(Keychain::Two, 5);
lhs_di.insert(Keychain::Three, 3);
rhs_di.insert(Keychain::Four, 4);
let mut lhs = DerivationAdditions(lhs_di);
let rhs = DerivationAdditions(rhs_di);
lhs.append(rhs);
// Existing index doesn't update if the new index in `other` is lower than `self`.
assert_eq!(lhs.0.get(&Keychain::One), Some(&7));
// Existing index updates if the new index in `other` is higher than `self`.
assert_eq!(lhs.0.get(&Keychain::Two), Some(&5));
// Existing index is unchanged if keychain doesn't exist in `other`.
assert_eq!(lhs.0.get(&Keychain::Three), Some(&3));
// New keychain gets added if the keychain is in `other` but not in `self`.
assert_eq!(lhs.0.get(&Keychain::Four), Some(&4));
}
}


@@ -1,588 +0,0 @@
use crate::{
collections::*,
indexed_tx_graph::Indexer,
miniscript::{Descriptor, DescriptorPublicKey},
spk_iter::BIP32_MAX_INDEX,
ForEachTxOut, SpkIterator, SpkTxOutIndex,
};
use alloc::vec::Vec;
use bitcoin::{OutPoint, Script, TxOut};
use core::{fmt::Debug, ops::Deref};
use crate::Append;
use super::DerivationAdditions;
/// A convenient wrapper around [`SpkTxOutIndex`] that relates script pubkeys to miniscript public
/// [`Descriptor`]s.
///
/// Descriptors are referenced by the provided keychain generic (`K`).
///
/// Script pubkeys for a descriptor are revealed chronologically from index 0. I.e., if the last
/// revealed index of a descriptor is 5, scripts at indices 0 to 4 are guaranteed to already be
/// revealed. In addition to revealed scripts, we have a `lookahead` parameter for each keychain,
/// which defines the number of script pubkeys to store ahead of the last revealed index.
///
/// Methods that could update the last revealed index will return [`DerivationAdditions`] to report
/// these changes. This can be persisted for future recovery.
///
/// ## Synopsis
///
/// ```
/// use bdk_chain::keychain::KeychainTxOutIndex;
/// # use bdk_chain::{ miniscript::{Descriptor, DescriptorPublicKey} };
/// # use core::str::FromStr;
///
/// // imagine our service has internal and external addresses but also addresses for users
/// #[derive(Clone, Debug, PartialEq, Eq, Ord, PartialOrd)]
/// enum MyKeychain {
/// External,
/// Internal,
/// MyAppUser {
/// user_id: u32
/// }
/// }
///
/// let mut txout_index = KeychainTxOutIndex::<MyKeychain>::default();
///
/// # let secp = bdk_chain::bitcoin::secp256k1::Secp256k1::signing_only();
/// # let (external_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/0/*)").unwrap();
/// # let (internal_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/*)").unwrap();
/// # let descriptor_for_user_42 = external_descriptor.clone();
/// txout_index.add_keychain(MyKeychain::External, external_descriptor);
/// txout_index.add_keychain(MyKeychain::Internal, internal_descriptor);
/// txout_index.add_keychain(MyKeychain::MyAppUser { user_id: 42 }, descriptor_for_user_42);
///
/// let new_spk_for_user = txout_index.reveal_next_spk(&MyKeychain::MyAppUser{ user_id: 42 });
/// ```
///
/// [`Ord`]: core::cmp::Ord
/// [`SpkTxOutIndex`]: crate::spk_txout_index::SpkTxOutIndex
/// [`Descriptor`]: crate::miniscript::Descriptor
#[derive(Clone, Debug)]
pub struct KeychainTxOutIndex<K> {
inner: SpkTxOutIndex<(K, u32)>,
// descriptors of each keychain
keychains: BTreeMap<K, Descriptor<DescriptorPublicKey>>,
// last revealed indexes
last_revealed: BTreeMap<K, u32>,
// lookahead settings for each keychain
lookahead: BTreeMap<K, u32>,
}
impl<K> Default for KeychainTxOutIndex<K> {
fn default() -> Self {
Self {
inner: SpkTxOutIndex::default(),
keychains: BTreeMap::default(),
last_revealed: BTreeMap::default(),
lookahead: BTreeMap::default(),
}
}
}
impl<K> Deref for KeychainTxOutIndex<K> {
type Target = SpkTxOutIndex<(K, u32)>;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl<K: Clone + Ord + Debug> Indexer for KeychainTxOutIndex<K> {
type Additions = DerivationAdditions<K>;
fn index_txout(&mut self, outpoint: OutPoint, txout: &TxOut) -> Self::Additions {
self.scan_txout(outpoint, txout)
}
fn index_tx(&mut self, tx: &bitcoin::Transaction) -> Self::Additions {
self.scan(tx)
}
fn apply_additions(&mut self, additions: Self::Additions) {
self.apply_additions(additions)
}
fn is_tx_relevant(&self, tx: &bitcoin::Transaction) -> bool {
self.is_relevant(tx)
}
}
impl<K: Clone + Ord + Debug> KeychainTxOutIndex<K> {
/// Scans an object for relevant outpoints, which are stored and indexed internally.
///
/// If the matched script pubkey is part of the lookahead, the last stored index is updated for
/// the script pubkey's keychain and the [`DerivationAdditions`] returned will reflect the
/// change.
///
/// Typically, this method is used in two situations:
///
/// 1. After loading transaction data from disk, you may scan over all of its txouts to restore
/// the index.
/// 2. When getting new data from the chain, you usually scan it before incorporating it into
/// your chain state (i.e., `SparseChain`, `ChainGraph`).
///
/// See [`ForEachTxOut`] for the types that support this.
///
/// [`ForEachTxout`]: crate::ForEachTxOut
pub fn scan(&mut self, txouts: &impl ForEachTxOut) -> DerivationAdditions<K> {
let mut additions = DerivationAdditions::<K>::default();
txouts.for_each_txout(|(op, txout)| additions.append(self.scan_txout(op, txout)));
additions
}
/// Scan a single outpoint for a matching script pubkey.
///
/// If it matches, this will store and index it.
pub fn scan_txout(&mut self, op: OutPoint, txout: &TxOut) -> DerivationAdditions<K> {
match self.inner.scan_txout(op, txout).cloned() {
Some((keychain, index)) => self.reveal_to_target(&keychain, index).1,
None => DerivationAdditions::default(),
}
}
/// Return a reference to the internal [`SpkTxOutIndex`].
pub fn inner(&self) -> &SpkTxOutIndex<(K, u32)> {
&self.inner
}
/// Get a reference to the set of indexed outpoints.
pub fn outpoints(&self) -> &BTreeSet<((K, u32), OutPoint)> {
self.inner.outpoints()
}
/// Return a reference to the internal map of the keychain to descriptors.
pub fn keychains(&self) -> &BTreeMap<K, Descriptor<DescriptorPublicKey>> {
&self.keychains
}
/// Add a keychain to the tracker's `txout_index` with a descriptor to derive addresses.
///
/// Adding a keychain means you will be able to derive new script pubkeys under that keychain
/// and the txout index will discover transaction outputs with those script pubkeys.
///
/// # Panics
///
/// This will panic if a different `descriptor` is introduced to the same `keychain`.
pub fn add_keychain(&mut self, keychain: K, descriptor: Descriptor<DescriptorPublicKey>) {
let old_descriptor = &*self
.keychains
.entry(keychain)
.or_insert_with(|| descriptor.clone());
assert_eq!(
&descriptor, old_descriptor,
"keychain already contains a different descriptor"
);
}
/// Return the lookahead setting for each keychain.
///
/// Refer to [`set_lookahead`] for a deeper explanation of the `lookahead`.
///
/// [`set_lookahead`]: Self::set_lookahead
pub fn lookaheads(&self) -> &BTreeMap<K, u32> {
&self.lookahead
}
/// Convenience method to call [`set_lookahead`] for all keychains.
///
/// [`set_lookahead`]: Self::set_lookahead
pub fn set_lookahead_for_all(&mut self, lookahead: u32) {
for keychain in &self.keychains.keys().cloned().collect::<Vec<_>>() {
self.lookahead.insert(keychain.clone(), lookahead);
self.replenish_lookahead(keychain);
}
}
/// Set the lookahead count for `keychain`.
///
/// The lookahead is the number of scripts to cache ahead of the last stored script index. This
/// is useful during a scan via [`scan`] or [`scan_txout`].
///
/// # Panics
///
/// This will panic if the `keychain` does not exist.
///
/// [`scan`]: Self::scan
/// [`scan_txout`]: Self::scan_txout
pub fn set_lookahead(&mut self, keychain: &K, lookahead: u32) {
self.lookahead.insert(keychain.clone(), lookahead);
self.replenish_lookahead(keychain);
}
/// Convenience method to call [`lookahead_to_target`] for multiple keychains.
///
/// [`lookahead_to_target`]: Self::lookahead_to_target
pub fn lookahead_to_target_multi(&mut self, target_indexes: BTreeMap<K, u32>) {
for (keychain, target_index) in target_indexes {
self.lookahead_to_target(&keychain, target_index)
}
}
/// Store lookahead scripts until `target_index`.
///
/// This does not change the `lookahead` setting.
pub fn lookahead_to_target(&mut self, keychain: &K, target_index: u32) {
let next_index = self.next_store_index(keychain);
if let Some(temp_lookahead) = target_index.checked_sub(next_index).filter(|&v| v > 0) {
let old_lookahead = self.lookahead.insert(keychain.clone(), temp_lookahead);
self.replenish_lookahead(keychain);
// revert
match old_lookahead {
Some(lookahead) => self.lookahead.insert(keychain.clone(), lookahead),
None => self.lookahead.remove(keychain),
};
}
}
fn replenish_lookahead(&mut self, keychain: &K) {
let descriptor = self.keychains.get(keychain).expect("keychain must exist");
let next_store_index = self.next_store_index(keychain);
let next_reveal_index = self.last_revealed.get(keychain).map_or(0, |v| *v + 1);
let lookahead = self.lookahead.get(keychain).map_or(0, |v| *v);
for (new_index, new_spk) in
SpkIterator::new_with_range(descriptor, next_store_index..next_reveal_index + lookahead)
{
let _inserted = self
.inner
.insert_spk((keychain.clone(), new_index), new_spk);
debug_assert!(_inserted, "replenish lookahead: must not have existing spk: keychain={:?}, lookahead={}, next_store_index={}, next_reveal_index={}", keychain, lookahead, next_store_index, next_reveal_index);
}
}
fn next_store_index(&self, keychain: &K) -> u32 {
self.inner()
.all_spks()
.range((keychain.clone(), u32::MIN)..(keychain.clone(), u32::MAX))
.last()
.map_or(0, |((_, v), _)| *v + 1)
}
/// Generates script pubkey iterators for every `keychain`. The iterators iterate over all
/// derivable script pubkeys.
pub fn spks_of_all_keychains(
&self,
) -> BTreeMap<K, SpkIterator<Descriptor<DescriptorPublicKey>>> {
self.keychains
.iter()
.map(|(keychain, descriptor)| {
(
keychain.clone(),
SpkIterator::new_with_range(descriptor.clone(), 0..),
)
})
.collect()
}
/// Generates a script pubkey iterator for the given `keychain`'s descriptor. The
/// iterator iterates over all derivable scripts of the keychain's descriptor.
///
/// # Panics
///
/// This will panic if the `keychain` does not exist.
pub fn spks_of_keychain(&self, keychain: &K) -> SpkIterator<Descriptor<DescriptorPublicKey>> {
let descriptor = self
.keychains
.get(keychain)
.expect("keychain must exist")
.clone();
SpkIterator::new_with_range(descriptor, 0..)
}
/// Convenience method to get [`revealed_spks_of_keychain`] of all keychains.
///
/// [`revealed_spks_of_keychain`]: Self::revealed_spks_of_keychain
pub fn revealed_spks_of_all_keychains(
&self,
) -> BTreeMap<K, impl Iterator<Item = (u32, &Script)> + Clone> {
self.keychains
.keys()
.map(|keychain| (keychain.clone(), self.revealed_spks_of_keychain(keychain)))
.collect()
}
/// Iterates over the script pubkeys revealed by this index under `keychain`.
pub fn revealed_spks_of_keychain(
&self,
keychain: &K,
) -> impl DoubleEndedIterator<Item = (u32, &Script)> + Clone {
let next_index = self.last_revealed.get(keychain).map_or(0, |v| *v + 1);
self.inner
.all_spks()
.range((keychain.clone(), u32::MIN)..(keychain.clone(), next_index))
.map(|((_, derivation_index), spk)| (*derivation_index, spk))
}
/// Get the next derivation index for `keychain`. The next index is the index after the last revealed
/// derivation index.
///
/// The second field in the returned tuple represents whether the next derivation index is new.
/// There are two scenarios where the next derivation index is reused (not new):
///
/// 1. The keychain's descriptor has no wildcard, and a script has already been revealed.
/// 2. The number of revealed scripts has already reached 2^31 (refer to BIP-32).
///
/// Not checking the second field of the tuple may result in address reuse.
///
/// # Panics
///
/// Panics if the `keychain` does not exist.
pub fn next_index(&self, keychain: &K) -> (u32, bool) {
let descriptor = self.keychains.get(keychain).expect("keychain must exist");
let last_index = self.last_revealed.get(keychain).cloned();
// we can only get the next index if the wildcard exists.
let has_wildcard = descriptor.has_wildcard();
match last_index {
// if there is no index, next_index is always 0.
None => (0, true),
// descriptors without wildcards can only have one index.
Some(_) if !has_wildcard => (0, false),
// derivation index must be < 2^31 (BIP-32).
Some(index) if index > BIP32_MAX_INDEX => {
unreachable!("index is out of bounds")
}
Some(index) if index == BIP32_MAX_INDEX => (index, false),
// get the next derivation index.
Some(index) => (index + 1, true),
}
}
/// Get the last derivation index that is revealed for each keychain.
///
/// Keychains with no revealed indices will not be included in the returned [`BTreeMap`].
pub fn last_revealed_indices(&self) -> &BTreeMap<K, u32> {
&self.last_revealed
}
/// Get the last derivation index revealed for `keychain`.
pub fn last_revealed_index(&self, keychain: &K) -> Option<u32> {
self.last_revealed.get(keychain).cloned()
}
/// Convenience method to call [`Self::reveal_to_target`] on multiple keychains.
pub fn reveal_to_target_multi(
&mut self,
keychains: &BTreeMap<K, u32>,
) -> (
BTreeMap<K, SpkIterator<Descriptor<DescriptorPublicKey>>>,
DerivationAdditions<K>,
) {
let mut additions = DerivationAdditions::default();
let mut spks = BTreeMap::new();
for (keychain, &index) in keychains {
let (new_spks, new_additions) = self.reveal_to_target(keychain, index);
if !new_additions.is_empty() {
spks.insert(keychain.clone(), new_spks);
additions.append(new_additions.clone());
}
}
(spks, additions)
}
/// Reveals script pubkeys of the `keychain`'s descriptor **up to and including** the
/// `target_index`.
///
/// If the `target_index` cannot be reached (due to the descriptor having no wildcard and/or
/// the `target_index` is in the hardened index range), this method will make a best-effort and
/// reveal up to the last possible index.
///
/// This returns an iterator of newly revealed indices (alongside their scripts) and a
/// [`DerivationAdditions`], which reports updates to the latest revealed index. If no new script
/// pubkeys are revealed, then both of these will be empty.
///
/// # Panics
///
/// Panics if `keychain` does not exist.
pub fn reveal_to_target(
&mut self,
keychain: &K,
target_index: u32,
) -> (
SpkIterator<Descriptor<DescriptorPublicKey>>,
DerivationAdditions<K>,
) {
let descriptor = self.keychains.get(keychain).expect("keychain must exist");
let has_wildcard = descriptor.has_wildcard();
let target_index = if has_wildcard { target_index } else { 0 };
let next_reveal_index = self.last_revealed.get(keychain).map_or(0, |v| *v + 1);
let lookahead = self.lookahead.get(keychain).map_or(0, |v| *v);
debug_assert_eq!(
next_reveal_index + lookahead,
self.next_store_index(keychain)
);
// if we need to reveal new indices, the latest revealed index goes here
let mut reveal_to_index = None;
// if the target is not yet revealed, but is already stored (due to lookahead), we need to
// set the `reveal_to_index` as target here (as the `for` loop below only updates
// `reveal_to_index` for indexes that are NOT stored)
if next_reveal_index <= target_index && target_index < next_reveal_index + lookahead {
reveal_to_index = Some(target_index);
}
// we range over indexes that are not stored
let range = next_reveal_index + lookahead..=target_index + lookahead;
for (new_index, new_spk) in SpkIterator::new_with_range(descriptor, range) {
let _inserted = self
.inner
.insert_spk((keychain.clone(), new_index), new_spk);
debug_assert!(_inserted, "must not have existing spk",);
// everything after `target_index` is stored for lookahead only
if new_index <= target_index {
reveal_to_index = Some(new_index);
}
}
match reveal_to_index {
Some(index) => {
let _old_index = self.last_revealed.insert(keychain.clone(), index);
debug_assert!(_old_index < Some(index));
(
SpkIterator::new_with_range(descriptor.clone(), next_reveal_index..index + 1),
DerivationAdditions(core::iter::once((keychain.clone(), index)).collect()),
)
}
None => (
SpkIterator::new_with_range(
descriptor.clone(),
next_reveal_index..next_reveal_index,
),
DerivationAdditions::default(),
),
}
}
/// Attempts to reveal the next script pubkey for `keychain`.
///
/// Returns the derivation index of the revealed script pubkey, the revealed script pubkey and a
/// [`DerivationAdditions`] which represents changes in the last revealed index (if any).
///
/// When a new script cannot be revealed, we return the last revealed script and an empty
/// [`DerivationAdditions`]. There are two scenarios when a new script pubkey cannot be derived:
///
/// 1. The descriptor has no wildcard and already has one script revealed.
/// 2. The descriptor has already revealed scripts up to the numeric bound.
///
/// # Panics
///
/// Panics if the `keychain` does not exist.
pub fn reveal_next_spk(&mut self, keychain: &K) -> ((u32, &Script), DerivationAdditions<K>) {
let (next_index, _) = self.next_index(keychain);
let additions = self.reveal_to_target(keychain, next_index).1;
let script = self
.inner
.spk_at_index(&(keychain.clone(), next_index))
.expect("script must already be stored");
((next_index, script), additions)
}
/// Gets the next unused script pubkey in the keychain. I.e., the script pubkey with the lowest
/// index that has not been used yet.
///
/// This will derive and reveal a new script pubkey if no more unused script pubkeys exist.
///
/// If the descriptor has no wildcard and already has a used script pubkey or if a descriptor
/// has used all scripts up to the derivation bounds, then the last derived script pubkey will be
/// returned.
///
/// # Panics
///
/// Panics if `keychain` has never been added to the index
pub fn next_unused_spk(&mut self, keychain: &K) -> ((u32, &Script), DerivationAdditions<K>) {
let need_new = self.unused_spks_of_keychain(keychain).next().is_none();
// this rather strange branch is needed because of some lifetime issues
if need_new {
self.reveal_next_spk(keychain)
} else {
(
self.unused_spks_of_keychain(keychain)
.next()
.expect("we already know next exists"),
DerivationAdditions::default(),
)
}
}
/// Marks the script pubkey at `index` as used even though the tracker hasn't seen an output with it.
/// This only has an effect when the `index` had been added to `self` already and was unused.
///
/// Returns whether the `index` was initially present as `unused`.
///
/// This is useful when you want to reserve a script pubkey for something but don't want to add
/// the transaction output using it to the index yet. Other callers will consider `index` on
/// `keychain` used until you call [`unmark_used`].
///
/// [`unmark_used`]: Self::unmark_used
pub fn mark_used(&mut self, keychain: &K, index: u32) -> bool {
self.inner.mark_used(&(keychain.clone(), index))
}
/// Undoes the effect of [`mark_used`]. Returns whether the `index` is inserted back into
/// `unused`.
///
/// Note that if `self` has scanned an output with this script pubkey, then this will have no
/// effect.
///
/// [`mark_used`]: Self::mark_used
pub fn unmark_used(&mut self, keychain: &K, index: u32) -> bool {
self.inner.unmark_used(&(keychain.clone(), index))
}
/// Iterates over all unused script pubkeys for a `keychain` stored in the index.
pub fn unused_spks_of_keychain(
&self,
keychain: &K,
) -> impl DoubleEndedIterator<Item = (u32, &Script)> {
let next_index = self.last_revealed.get(keychain).map_or(0, |&v| v + 1);
let range = (keychain.clone(), u32::MIN)..(keychain.clone(), next_index);
self.inner
.unused_spks(range)
.map(|((_, i), script)| (*i, script))
}
/// Iterates over all the [`OutPoint`] that have a `TxOut` with a script pubkey derived from
/// `keychain`.
pub fn txouts_of_keychain(
&self,
keychain: &K,
) -> impl DoubleEndedIterator<Item = (u32, OutPoint)> + '_ {
self.inner
.outputs_in_range((keychain.clone(), u32::MIN)..(keychain.clone(), u32::MAX))
.map(|((_, i), op)| (*i, op))
}
/// Returns the highest derivation index of the `keychain` where [`KeychainTxOutIndex`] has
/// found a [`TxOut`] with its script pubkey.
pub fn last_used_index(&self, keychain: &K) -> Option<u32> {
self.txouts_of_keychain(keychain).last().map(|(i, _)| i)
}
/// Returns, for each keychain, the highest derivation index for which [`KeychainTxOutIndex`]
/// has found a [`TxOut`] with its script pubkey.
pub fn last_used_indices(&self) -> BTreeMap<K, u32> {
self.keychains
.iter()
.filter_map(|(keychain, _)| {
self.last_used_index(keychain)
.map(|index| (keychain.clone(), index))
})
.collect()
}
/// Applies the derivation additions to the [`KeychainTxOutIndex`], extending the number of
/// derived scripts per keychain, as specified in the `additions`.
pub fn apply_additions(&mut self, additions: DerivationAdditions<K>) {
let _ = self.reveal_to_target_multi(&additions.0);
}
}
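// Illustrative sketch (not part of this diff): `next_index` semantics with a
// descriptor that has no wildcard. The pubkey is a throwaway value.
use bdk_chain::keychain::KeychainTxOutIndex;
use bdk_chain::miniscript::{Descriptor, DescriptorPublicKey};
use core::str::FromStr;

fn non_wildcard_next_index() {
    let mut index = KeychainTxOutIndex::<()>::default();
    let descriptor = Descriptor::<DescriptorPublicKey>::from_str(
        "wpkh(025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee6357)",
    )
    .unwrap();
    index.add_keychain((), descriptor);

    // Nothing revealed yet, so the next index is 0 and it is new.
    assert_eq!(index.next_index(&()), (0, true));
    let ((revealed, _), _additions) = index.reveal_next_spk(&());
    assert_eq!(revealed, 0);
    // No wildcard: index 0 is the only script, so it is reused from now on.
    assert_eq!(index.next_index(&()), (0, false));
}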


@@ -1,102 +0,0 @@
//! This crate is a collection of core structures for [Bitcoin Dev Kit] (alpha release).
//!
//! The goal of this crate is to give wallets the mechanisms needed to:
//!
//! 1. Figure out what data they need to fetch.
//! 2. Process the data in a way that never leads to inconsistent states.
//! 3. Fully index that data and expose it to be consumed without friction.
//!
//! Our design goals for these mechanisms are:
//!
//! 1. Data source agnostic -- nothing in `bdk_chain` cares about where you get data from or whether
//! you do it synchronously or asynchronously. If you know a fact about the blockchain, you can just
//! tell `bdk_chain`'s APIs about it, and that information will be integrated, if it can be done
//! consistently.
//! 2. Error-free APIs.
//! 3. Data persistence agnostic -- `bdk_chain` does not care where you cache on-chain data, what you
//! cache or how you fetch it.
//!
//! [Bitcoin Dev Kit]: https://bitcoindevkit.org/
#![no_std]
#![warn(missing_docs)]
pub use bitcoin;
mod spk_txout_index;
pub use spk_txout_index::*;
mod chain_data;
pub use chain_data::*;
pub mod indexed_tx_graph;
pub use indexed_tx_graph::IndexedTxGraph;
pub mod keychain;
pub mod local_chain;
mod tx_data_traits;
pub mod tx_graph;
pub use tx_data_traits::*;
pub use tx_graph::TxGraph;
mod chain_oracle;
pub use chain_oracle::*;
mod persist;
pub use persist::*;
#[doc(hidden)]
pub mod example_utils;
#[cfg(feature = "miniscript")]
pub use miniscript;
#[cfg(feature = "miniscript")]
mod descriptor_ext;
#[cfg(feature = "miniscript")]
pub use descriptor_ext::DescriptorExt;
#[cfg(feature = "miniscript")]
mod spk_iter;
#[cfg(feature = "miniscript")]
pub use spk_iter::*;
#[allow(unused_imports)]
#[macro_use]
extern crate alloc;
#[cfg(feature = "serde")]
pub extern crate serde_crate as serde;
#[cfg(feature = "bincode")]
extern crate bincode;
#[cfg(feature = "std")]
#[macro_use]
extern crate std;
#[cfg(all(not(feature = "std"), feature = "hashbrown"))]
extern crate hashbrown;
// With no-std and no `hashbrown`, use `alloc`'s BTree collections aliased as hash collections.
#[cfg(all(not(feature = "std"), not(feature = "hashbrown")))]
#[doc(hidden)]
pub mod collections {
#![allow(dead_code)]
pub type HashSet<K> = alloc::collections::BTreeSet<K>;
pub type HashMap<K, V> = alloc::collections::BTreeMap<K, V>;
pub use alloc::collections::{btree_map as hash_map, *};
}
// When we have std, use all of `std`'s collections
#[cfg(all(feature = "std", not(feature = "hashbrown")))]
#[doc(hidden)]
pub mod collections {
pub use std::collections::{hash_map, *};
}
// With the `hashbrown` feature, use `hashbrown`'s hash collections; everything else comes from `alloc`.
#[cfg(feature = "hashbrown")]
#[doc(hidden)]
pub mod collections {
#![allow(dead_code)]
pub type HashSet<K> = hashbrown::HashSet<K>;
pub type HashMap<K, V> = hashbrown::HashMap<K, V>;
pub use alloc::collections::*;
pub use hashbrown::hash_map;
}
/// How many confirmations are needed for a coinbase output to be spent.
pub const COINBASE_MATURITY: u32 = 100;


@@ -1,250 +0,0 @@
//! The [`LocalChain`] is a local implementation of [`ChainOracle`].
use core::convert::Infallible;
use alloc::collections::BTreeMap;
use bitcoin::BlockHash;
use crate::{BlockId, ChainOracle};
/// This is a local implementation of [`ChainOracle`].
#[derive(Debug, Default, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct LocalChain {
blocks: BTreeMap<u32, BlockHash>,
}
impl ChainOracle for LocalChain {
type Error = Infallible;
fn is_block_in_chain(
&self,
block: BlockId,
chain_tip: BlockId,
) -> Result<Option<bool>, Self::Error> {
if block.height > chain_tip.height {
return Ok(None);
}
Ok(
match (
self.blocks.get(&block.height),
self.blocks.get(&chain_tip.height),
) {
(Some(&hash), Some(&tip_hash)) => {
Some(hash == block.hash && tip_hash == chain_tip.hash)
}
_ => None,
},
)
}
fn get_chain_tip(&self) -> Result<Option<BlockId>, Self::Error> {
Ok(self.tip())
}
}
impl AsRef<BTreeMap<u32, BlockHash>> for LocalChain {
fn as_ref(&self) -> &BTreeMap<u32, BlockHash> {
&self.blocks
}
}
impl From<LocalChain> for BTreeMap<u32, BlockHash> {
fn from(value: LocalChain) -> Self {
value.blocks
}
}
impl From<BTreeMap<u32, BlockHash>> for LocalChain {
fn from(value: BTreeMap<u32, BlockHash>) -> Self {
Self { blocks: value }
}
}
impl LocalChain {
/// Construct a [`LocalChain`] from a list of [`BlockId`]s.
pub fn from_blocks<B>(blocks: B) -> Self
where
B: IntoIterator<Item = BlockId>,
{
Self {
blocks: blocks.into_iter().map(|b| (b.height, b.hash)).collect(),
}
}
/// Get a reference to a map of block height to hash.
pub fn blocks(&self) -> &BTreeMap<u32, BlockHash> {
&self.blocks
}
/// Get the chain tip.
pub fn tip(&self) -> Option<BlockId> {
self.blocks
.iter()
.last()
.map(|(&height, &hash)| BlockId { height, hash })
}
/// This is like the sparsechain's logic, except we must guarantee that all invalidated heights
/// are to be re-filled.
pub fn determine_changeset(&self, update: &Self) -> Result<ChangeSet, UpdateNotConnectedError> {
let update = update.as_ref();
let update_tip = match update.keys().last().cloned() {
Some(tip) => tip,
None => return Ok(ChangeSet::default()),
};
// this is the latest height where both the update and the local chain have the same block hash
let agreement_height = update
.iter()
.rev()
.find(|&(u_height, u_hash)| self.blocks.get(u_height) == Some(u_hash))
.map(|(&height, _)| height);
// the lower bound of the range to invalidate
let invalidate_lb = match agreement_height {
Some(height) if height == update_tip => u32::MAX,
Some(height) => height + 1,
None => 0,
};
// the first block's height to invalidate in the local chain
let invalidate_from_height = self.blocks.range(invalidate_lb..).next().map(|(&h, _)| h);
// the first invalidated height (if any) must be represented in the update
if let Some(first_invalid_height) = invalidate_from_height {
if !update.contains_key(&first_invalid_height) {
return Err(UpdateNotConnectedError(first_invalid_height));
}
}
let mut changeset: BTreeMap<u32, Option<BlockHash>> = match invalidate_from_height {
Some(first_invalid_height) => {
self.blocks
.range(first_invalid_height..)
.map(|(height, _)| (*height, None))
.collect()
}
None => BTreeMap::new(),
};
for (height, update_hash) in update {
let original_hash = self.blocks.get(height);
if Some(update_hash) != original_hash {
changeset.insert(*height, Some(*update_hash));
}
}
Ok(changeset)
}
/// Applies the given `changeset`.
pub fn apply_changeset(&mut self, changeset: ChangeSet) {
for (height, blockhash) in changeset {
match blockhash {
Some(blockhash) => self.blocks.insert(height, blockhash),
None => self.blocks.remove(&height),
};
}
}
/// Updates [`LocalChain`] with an update [`LocalChain`].
///
/// This is equivalent to calling [`determine_changeset`] and [`apply_changeset`] in sequence.
///
/// [`determine_changeset`]: Self::determine_changeset
/// [`apply_changeset`]: Self::apply_changeset
pub fn apply_update(&mut self, update: Self) -> Result<ChangeSet, UpdateNotConnectedError> {
let changeset = self.determine_changeset(&update)?;
self.apply_changeset(changeset.clone());
Ok(changeset)
}
/// Derives a [`ChangeSet`] that assumes that there are no preceding changesets.
///
/// The changeset returned will record additions of all blocks included in [`Self`].
pub fn initial_changeset(&self) -> ChangeSet {
self.blocks
.iter()
.map(|(&height, &hash)| (height, Some(hash)))
.collect()
}
/// Insert a block of [`BlockId`] into the [`LocalChain`].
///
/// # Error
///
/// If the insertion height already contains a block, and the block has a different blockhash,
/// this will result in an [`InsertBlockNotMatchingError`].
pub fn insert_block(
&mut self,
block_id: BlockId,
) -> Result<ChangeSet, InsertBlockNotMatchingError> {
let mut update = Self::from_blocks(self.tip());
if let Some(original_hash) = update.blocks.insert(block_id.height, block_id.hash) {
if original_hash != block_id.hash {
return Err(InsertBlockNotMatchingError {
height: block_id.height,
original_hash,
update_hash: block_id.hash,
});
}
}
Ok(self.apply_update(update).expect("should always connect"))
}
}
/// This is the return value of [`determine_changeset`] and represents changes to [`LocalChain`].
///
/// [`determine_changeset`]: LocalChain::determine_changeset
pub type ChangeSet = BTreeMap<u32, Option<BlockHash>>;
/// Represents an update failure of [`LocalChain`] due to the update not connecting to the original
/// chain.
///
/// The update cannot be applied to the chain because the chain suffix it represents did not
/// connect to the existing chain. This error case contains the checkpoint height to include so
/// that the chains can connect.
#[derive(Clone, Debug, PartialEq)]
pub struct UpdateNotConnectedError(pub u32);
impl core::fmt::Display for UpdateNotConnectedError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(
f,
"the update cannot connect with the chain, try include block at height {}",
self.0
)
}
}
#[cfg(feature = "std")]
impl std::error::Error for UpdateNotConnectedError {}
/// Represents a failure when trying to insert a checkpoint into [`LocalChain`].
#[derive(Clone, Debug, PartialEq)]
pub struct InsertBlockNotMatchingError {
/// The checkpoint's height.
pub height: u32,
/// Original checkpoint's block hash.
pub original_hash: BlockHash,
/// Update checkpoint's block hash.
pub update_hash: BlockHash,
}
impl core::fmt::Display for InsertBlockNotMatchingError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(
f,
"failed to insert block at height {} as blockhashes conflict: original={}, update={}",
self.height, self.original_hash, self.update_hash
)
}
}
#[cfg(feature = "std")]
impl std::error::Error for InsertBlockNotMatchingError {}
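// A minimal sketch of the update flow above, assuming only the items defined in this
// file plus the crate's `BTreeMap` alias. The `hash` helper is hypothetical; it just
// derives dummy block hashes from strings, like the `h!` test macro elsewhere in this
// diff. Not part of the original file.
#[cfg(test)]
mod local_chain_sketch {
    use super::*;
    use crate::collections::BTreeMap;

    fn hash(seed: &str) -> BlockHash {
        bitcoin::hashes::Hash::hash(seed.as_bytes())
    }

    #[test]
    fn apply_update_sketch() {
        let mut chain = LocalChain::from(BTreeMap::from([(0, hash("A")), (2, hash("C"))]));

        // The update shares block 2, so it connects; block 3 is recorded as an addition.
        let update = LocalChain::from(BTreeMap::from([(2, hash("C")), (3, hash("D"))]));
        let changeset = chain.apply_update(update).expect("update connects");
        assert_eq!(changeset, ChangeSet::from([(3, Some(hash("D")))]));

        // `insert_block` is a single-block update; a conflicting hash at an occupied
        // height is rejected with `InsertBlockNotMatchingError`.
        assert!(chain.insert_block(BlockId { height: 4, hash: hash("E") }).is_ok());
        assert!(chain.insert_block(BlockId { height: 4, hash: hash("E'") }).is_err());

        // A disjoint update cannot connect; the error names the height whose block
        // the update must include so the chains can be compared.
        let disjoint = LocalChain::from(BTreeMap::from([(9, hash("Z"))]));
        assert_eq!(
            chain.determine_changeset(&disjoint),
            Err(UpdateNotConnectedError(0))
        );
    }
}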


@@ -1,97 +0,0 @@
use core::convert::Infallible;
use crate::Append;
/// `Persist` wraps a [`PersistBackend`] (`B`) to create a convenient staging area for changes (`C`)
/// before they are persisted.
///
/// Not all changes to the in-memory representation need to be written to disk right away, so
/// [`Persist::stage`] can be used to *stage* changes first and then [`Persist::commit`] can be used
/// to write changes to disk.
#[derive(Debug)]
pub struct Persist<B, C> {
backend: B,
stage: C,
}
impl<B, C> Persist<B, C>
where
B: PersistBackend<C>,
C: Default + Append,
{
/// Create a new [`Persist`] from [`PersistBackend`].
pub fn new(backend: B) -> Self {
Self {
backend,
stage: Default::default(),
}
}
/// Stage a `changeset` to be committed later with [`commit`].
///
/// [`commit`]: Self::commit
pub fn stage(&mut self, changeset: C) {
self.stage.append(changeset)
}
/// Get the changes that have not been committed yet.
pub fn staged(&self) -> &C {
&self.stage
}
/// Commit the staged changes to the underlying persistence backend.
///
/// Changes that are committed (if any) are returned.
///
/// # Error
///
/// Returns a backend-defined error if this fails.
pub fn commit(&mut self) -> Result<Option<C>, B::WriteError> {
if self.stage.is_empty() {
return Ok(None);
}
self.backend
.write_changes(&self.stage)
// if written successfully, take and return `self.stage`
.map(|_| Some(core::mem::take(&mut self.stage)))
}
}
/// A persistence backend for [`Persist`].
///
/// `C` represents the changeset; a datatype that records changes made to in-memory data structures
/// that are to be persisted, or retrieved from persistence.
pub trait PersistBackend<C> {
/// The error the backend returns when it fails to write.
type WriteError: core::fmt::Debug;
/// The error the backend returns when it fails to load changesets `C`.
type LoadError: core::fmt::Debug;
/// Writes a changeset to the persistence backend.
///
/// It is up to the backend what it does with this. It could store every changeset in a list, or
/// it could insert the actual changes into a more structured database. All it needs to guarantee
/// is that [`load_from_persistence`] restores the in-memory state to what it would be if all
/// changesets had been applied sequentially.
///
/// [`load_from_persistence`]: Self::load_from_persistence
fn write_changes(&mut self, changeset: &C) -> Result<(), Self::WriteError>;
/// Return the aggregate changeset `C` from persistence.
fn load_from_persistence(&mut self) -> Result<C, Self::LoadError>;
}
impl<C: Default> PersistBackend<C> for () {
type WriteError = Infallible;
type LoadError = Infallible;
fn write_changes(&mut self, _changeset: &C) -> Result<(), Self::WriteError> {
Ok(())
}
fn load_from_persistence(&mut self) -> Result<C, Self::LoadError> {
Ok(C::default())
}
}
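// A minimal sketch of the staging flow above, assuming the `Persist`/`PersistBackend`
// items in this file, the no-op `()` backend, and the `Append` impl for `Vec<T>` that
// lives in this crate's `tx_data_traits`. `Vec<u32>` is only a stand-in changeset for
// illustration. Not part of the original file.
#[cfg(test)]
mod persist_sketch {
    use super::*;
    use alloc::vec::Vec;

    #[test]
    fn stage_then_commit() {
        let mut persist = Persist::<(), Vec<u32>>::new(());

        // Staged changes accumulate in memory; the backend is untouched.
        persist.stage(Vec::from([1, 2]));
        persist.stage(Vec::from([3]));
        assert_eq!(persist.staged(), &Vec::from([1, 2, 3]));

        // `commit` writes the aggregate stage and hands it back on success.
        assert_eq!(persist.commit(), Ok(Some(Vec::from([1, 2, 3]))));

        // Committing an empty stage is a no-op.
        assert_eq!(persist.commit(), Ok(None));
    }
}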


@@ -1,215 +0,0 @@
use crate::{
bitcoin::{secp256k1::Secp256k1, Script},
miniscript::{Descriptor, DescriptorPublicKey},
};
use core::{borrow::Borrow, ops::Bound, ops::RangeBounds};
/// Maximum [BIP32](https://bips.xyz/32) derivation index.
pub const BIP32_MAX_INDEX: u32 = (1 << 31) - 1;
/// An iterator for derived script pubkeys.
///
/// [`SpkIterator`] is an implementation of the [`Iterator`] trait which possesses its own `next()`
/// and `nth()` functions, both of which circumvent the unnecessary intermediate derivations required
/// when using their default implementations.
///
/// ## Examples
///
/// ```
/// use bdk_chain::SpkIterator;
/// # use miniscript::{Descriptor, DescriptorPublicKey};
/// # use bitcoin::{secp256k1::Secp256k1};
/// # use std::str::FromStr;
/// # let secp = bitcoin::secp256k1::Secp256k1::signing_only();
/// # let (descriptor, _) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "wpkh([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/0)").unwrap();
/// # let external_spk_0 = descriptor.at_derivation_index(0).script_pubkey();
/// # let external_spk_3 = descriptor.at_derivation_index(3).script_pubkey();
/// # let external_spk_4 = descriptor.at_derivation_index(4).script_pubkey();
///
/// // Creates a new script pubkey iterator starting at 0 from a descriptor.
/// let mut spk_iter = SpkIterator::new(&descriptor);
/// assert_eq!(spk_iter.next(), Some((0, external_spk_0)));
/// assert_eq!(spk_iter.next(), None);
/// ```
#[derive(Clone)]
pub struct SpkIterator<D> {
next_index: u32,
end: u32,
descriptor: D,
secp: Secp256k1<bitcoin::secp256k1::VerifyOnly>,
}
impl<D> SpkIterator<D>
where
D: Borrow<Descriptor<DescriptorPublicKey>>,
{
/// Creates a new script pubkey iterator starting at 0 from a descriptor.
pub fn new(descriptor: D) -> Self {
let end = if descriptor.borrow().has_wildcard() {
BIP32_MAX_INDEX
} else {
0
};
SpkIterator::new_with_range(descriptor, 0..=end)
}
// Creates a new script pubkey iterator from a descriptor with a given range.
pub(crate) fn new_with_range<R>(descriptor: D, range: R) -> Self
where
R: RangeBounds<u32>,
{
let mut end = match range.end_bound() {
Bound::Included(end) => *end + 1,
Bound::Excluded(end) => *end,
Bound::Unbounded => u32::MAX,
};
// Because `end` is exclusive, we want the maximum value to be BIP32_MAX_INDEX + 1.
end = end.min(BIP32_MAX_INDEX + 1);
Self {
next_index: match range.start_bound() {
Bound::Included(start) => *start,
Bound::Excluded(start) => *start + 1,
Bound::Unbounded => u32::MIN,
},
end,
descriptor,
secp: Secp256k1::verification_only(),
}
}
}
impl<D> Iterator for SpkIterator<D>
where
D: Borrow<Descriptor<DescriptorPublicKey>>,
{
type Item = (u32, Script);
fn next(&mut self) -> Option<Self::Item> {
// For non-wildcard descriptors, we expect the first element to be Some((0, spk)), then None after.
// For wildcard descriptors, we expect it to keep iterating until exhausted.
if self.next_index >= self.end {
return None;
}
let script = self
.descriptor
.borrow()
.at_derivation_index(self.next_index)
.derived_descriptor(&self.secp)
.expect("the descriptor cannot need hardened derivation")
.script_pubkey();
let output = (self.next_index, script);
self.next_index += 1;
Some(output)
}
fn nth(&mut self, n: usize) -> Option<Self::Item> {
self.next_index = self
.next_index
.saturating_add(u32::try_from(n).unwrap_or(u32::MAX));
self.next()
}
}
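// A minimal sketch of why the custom `nth` above matters: it jumps straight to the
// target derivation index with a single derivation, instead of deriving every script
// pubkey in between. The wildcard descriptor string is the one used by the tests
// below. Not part of the original file.
#[cfg(test)]
mod nth_sketch {
    use super::*;

    #[test]
    fn nth_skips_intermediate_derivations() {
        let secp = Secp256k1::signing_only();
        let (desc, _) = Descriptor::<DescriptorPublicKey>::parse_descriptor(
            &secp,
            "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/0/*)",
        )
        .expect("must parse");

        let mut spks = SpkIterator::new(&desc);
        assert_eq!(spks.next().map(|(i, _)| i), Some(0));
        // `nth(99)` advances the internal index by 99 and derives only index 100.
        assert_eq!(spks.nth(99).map(|(i, _)| i), Some(100));
    }
}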
#[cfg(test)]
mod test {
use crate::{
bitcoin::secp256k1::Secp256k1,
keychain::KeychainTxOutIndex,
miniscript::{Descriptor, DescriptorPublicKey},
spk_iter::{SpkIterator, BIP32_MAX_INDEX},
};
#[derive(Clone, Debug, PartialEq, Eq, Ord, PartialOrd)]
enum TestKeychain {
External,
Internal,
}
fn init_txout_index() -> (
KeychainTxOutIndex<TestKeychain>,
Descriptor<DescriptorPublicKey>,
Descriptor<DescriptorPublicKey>,
) {
let mut txout_index = KeychainTxOutIndex::<TestKeychain>::default();
let secp = Secp256k1::signing_only();
let (external_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/0/*)").unwrap();
let (internal_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/*)").unwrap();
txout_index.add_keychain(TestKeychain::External, external_descriptor.clone());
txout_index.add_keychain(TestKeychain::Internal, internal_descriptor.clone());
(txout_index, external_descriptor, internal_descriptor)
}
#[test]
#[allow(clippy::iter_nth_zero)]
fn test_spkiterator_wildcard() {
let (_, external_desc, _) = init_txout_index();
let external_spk_0 = external_desc.at_derivation_index(0).script_pubkey();
let external_spk_16 = external_desc.at_derivation_index(16).script_pubkey();
let external_spk_20 = external_desc.at_derivation_index(20).script_pubkey();
let external_spk_21 = external_desc.at_derivation_index(21).script_pubkey();
let external_spk_max = external_desc
.at_derivation_index(BIP32_MAX_INDEX)
.script_pubkey();
let mut external_spk = SpkIterator::new(&external_desc);
let max_index = BIP32_MAX_INDEX - 22;
assert_eq!(external_spk.next().unwrap(), (0, external_spk_0));
assert_eq!(external_spk.nth(15).unwrap(), (16, external_spk_16));
assert_eq!(external_spk.nth(3).unwrap(), (20, external_spk_20.clone()));
assert_eq!(external_spk.next().unwrap(), (21, external_spk_21));
assert_eq!(
external_spk.nth(max_index as usize).unwrap(),
(BIP32_MAX_INDEX, external_spk_max)
);
assert_eq!(external_spk.nth(0), None);
let mut external_spk = SpkIterator::new_with_range(&external_desc, 0..21);
assert_eq!(external_spk.nth(20).unwrap(), (20, external_spk_20));
assert_eq!(external_spk.next(), None);
let mut external_spk = SpkIterator::new_with_range(&external_desc, 0..21);
assert_eq!(external_spk.nth(21), None);
}
#[test]
#[allow(clippy::iter_nth_zero)]
fn test_spkiterator_non_wildcard() {
let secp = bitcoin::secp256k1::Secp256k1::signing_only();
let (no_wildcard_descriptor, _) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "wpkh([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/0)").unwrap();
let external_spk_0 = no_wildcard_descriptor
.at_derivation_index(0)
.script_pubkey();
let mut external_spk = SpkIterator::new(&no_wildcard_descriptor);
assert_eq!(external_spk.next().unwrap(), (0, external_spk_0.clone()));
assert_eq!(external_spk.next(), None);
let mut external_spk = SpkIterator::new(&no_wildcard_descriptor);
assert_eq!(external_spk.nth(0).unwrap(), (0, external_spk_0));
assert_eq!(external_spk.nth(0), None);
}
// The following dummy trait was created to check that `SpkIterator` satisfies `Send + 'static`.
trait TestSendStatic: Send + 'static {
fn test(&self) -> u32 {
20
}
}
impl TestSendStatic for SpkIterator<Descriptor<DescriptorPublicKey>> {
fn test(&self) -> u32 {
20
}
}
}


@@ -1,337 +0,0 @@
use core::ops::RangeBounds;
use crate::{
collections::{hash_map::Entry, BTreeMap, BTreeSet, HashMap},
indexed_tx_graph::Indexer,
ForEachTxOut,
};
use bitcoin::{self, OutPoint, Script, Transaction, TxOut, Txid};
/// An index storing [`TxOut`]s that have a script pubkey that matches those in a list.
///
/// The basic idea is that you insert script pubkeys you care about into the index with
/// [`insert_spk`] and then when you call [`scan`], the index will look at any txouts you pass in and
/// store and index any txouts matching one of its script pubkeys.
///
/// Each script pubkey is associated with an application-defined index `I`, which must be
/// [`Ord`]. Usually, this is the derivation index of the script pubkey, or even a
/// combination of `(keychain, derivation_index)`.
///
/// Note there is no harm in scanning transactions that disappear from the blockchain or were never
/// in there in the first place. `SpkTxOutIndex` is intentionally *monotone* -- you cannot delete or
/// modify txouts that have been indexed. To find out which txouts from the index are actually in the
/// chain or unspent, you must use other sources of information like a [`TxGraph`].
///
/// [`TxOut`]: bitcoin::TxOut
/// [`insert_spk`]: Self::insert_spk
/// [`Ord`]: core::cmp::Ord
/// [`scan`]: Self::scan
/// [`TxGraph`]: crate::tx_graph::TxGraph
#[derive(Clone, Debug)]
pub struct SpkTxOutIndex<I> {
/// script pubkeys ordered by index
spks: BTreeMap<I, Script>,
/// A reverse lookup from spk to spk index
spk_indices: HashMap<Script, I>,
/// The set of unused indexes.
unused: BTreeSet<I>,
/// Lookup index and txout by outpoint.
txouts: BTreeMap<OutPoint, (I, TxOut)>,
/// Lookup from spk index to outpoints that had that spk
spk_txouts: BTreeSet<(I, OutPoint)>,
}
impl<I> Default for SpkTxOutIndex<I> {
fn default() -> Self {
Self {
txouts: Default::default(),
spks: Default::default(),
spk_indices: Default::default(),
spk_txouts: Default::default(),
unused: Default::default(),
}
}
}
impl<I: Clone + Ord> Indexer for SpkTxOutIndex<I> {
type Additions = ();
fn index_txout(&mut self, outpoint: OutPoint, txout: &TxOut) -> Self::Additions {
self.scan_txout(outpoint, txout);
Default::default()
}
fn index_tx(&mut self, tx: &Transaction) -> Self::Additions {
self.scan(tx);
Default::default()
}
fn apply_additions(&mut self, _additions: Self::Additions) {
// This applies nothing.
}
fn is_tx_relevant(&self, tx: &Transaction) -> bool {
self.is_relevant(tx)
}
}
/// This macro is used instead of a member function of `SpkTxOutIndex`, which would result in a
/// compiler error[E0521]: "borrowed data escapes out of closure" when we attempt to take a
/// reference out of the `ForEachTxOut` closure during scanning.
macro_rules! scan_txout {
($self:ident, $op:expr, $txout:expr) => {{
let spk_i = $self.spk_indices.get(&$txout.script_pubkey);
if let Some(spk_i) = spk_i {
$self.txouts.insert($op, (spk_i.clone(), $txout.clone()));
$self.spk_txouts.insert((spk_i.clone(), $op));
$self.unused.remove(&spk_i);
}
spk_i
}};
}
impl<I: Clone + Ord> SpkTxOutIndex<I> {
/// Scans an object containing many txouts.
///
/// Typically, this is used in two situations:
///
/// 1. After loading transaction data from disk, you may scan over all the txouts to restore your
/// set of indexed txouts.
/// 2. When getting new data from the chain, you usually scan it before incorporating it into your chain state.
///
/// See [`ForEachTxOut`] for the types that support this.
///
/// [`ForEachTxOut`]: crate::ForEachTxOut
pub fn scan(&mut self, txouts: &impl ForEachTxOut) -> BTreeSet<I> {
let mut scanned_indices = BTreeSet::new();
txouts.for_each_txout(|(op, txout)| {
if let Some(spk_i) = scan_txout!(self, op, txout) {
scanned_indices.insert(spk_i.clone());
}
});
scanned_indices
}
/// Scans a single `TxOut` for a matching script pubkey and returns the index that matches the
/// script pubkey (if any).
pub fn scan_txout(&mut self, op: OutPoint, txout: &TxOut) -> Option<&I> {
scan_txout!(self, op, txout)
}
/// Get a reference to the set of indexed outpoints.
pub fn outpoints(&self) -> &BTreeSet<(I, OutPoint)> {
&self.spk_txouts
}
/// Iterate over all known txouts that spend to tracked script pubkeys.
pub fn txouts(
&self,
) -> impl DoubleEndedIterator<Item = (&I, OutPoint, &TxOut)> + ExactSizeIterator {
self.txouts
.iter()
.map(|(op, (index, txout))| (index, *op, txout))
}
/// Finds all txouts of a transaction that have previously been scanned and indexed.
pub fn txouts_in_tx(
&self,
txid: Txid,
) -> impl DoubleEndedIterator<Item = (&I, OutPoint, &TxOut)> {
self.txouts
.range(OutPoint::new(txid, u32::MIN)..=OutPoint::new(txid, u32::MAX))
.map(|(op, (index, txout))| (index, *op, txout))
}
/// Iterates over all the outputs with script pubkeys in an index range.
pub fn outputs_in_range(
&self,
range: impl RangeBounds<I>,
) -> impl DoubleEndedIterator<Item = (&I, OutPoint)> {
use bitcoin::hashes::Hash;
use core::ops::Bound::*;
let min_op = OutPoint {
txid: Txid::from_inner([0x00; 32]),
vout: u32::MIN,
};
let max_op = OutPoint {
txid: Txid::from_inner([0xff; 32]),
vout: u32::MAX,
};
let start = match range.start_bound() {
Included(index) => Included((index.clone(), min_op)),
Excluded(index) => Excluded((index.clone(), max_op)),
Unbounded => Unbounded,
};
let end = match range.end_bound() {
Included(index) => Included((index.clone(), max_op)),
Excluded(index) => Excluded((index.clone(), min_op)),
Unbounded => Unbounded,
};
self.spk_txouts.range((start, end)).map(|(i, op)| (i, *op))
}
/// Returns the txout and script pubkey index of the `TxOut` at `OutPoint`.
///
/// Returns `None` if the `TxOut` hasn't been scanned or if nothing matching was found there.
pub fn txout(&self, outpoint: OutPoint) -> Option<(&I, &TxOut)> {
self.txouts
.get(&outpoint)
.map(|(spk_i, txout)| (spk_i, txout))
}
/// Returns the script that has been inserted at the `index`.
///
/// If that index hasn't been inserted yet, it will return `None`.
pub fn spk_at_index(&self, index: &I) -> Option<&Script> {
self.spks.get(index)
}
/// The script pubkeys that are being tracked by the index.
pub fn all_spks(&self) -> &BTreeMap<I, Script> {
&self.spks
}
/// Adds a script pubkey to scan for. Returns `false` and does nothing if the spk already exists
/// in the map.
///
/// The index will look for outputs paying to this spk whenever it scans new data.
pub fn insert_spk(&mut self, index: I, spk: Script) -> bool {
match self.spk_indices.entry(spk.clone()) {
Entry::Vacant(value) => {
value.insert(index.clone());
self.spks.insert(index.clone(), spk);
self.unused.insert(index);
true
}
Entry::Occupied(_) => false,
}
}
/// Iterates over all unused script pubkeys in an index range.
///
/// Here, "unused" means that after the script pubkey was stored in the index, the index has
/// never scanned a transaction output with it.
///
/// # Example
///
/// ```rust
/// # use bdk_chain::SpkTxOutIndex;
///
/// // imagine our spks are indexed like (keychain, derivation_index).
/// let txout_index = SpkTxOutIndex::<(u32, u32)>::default();
/// let all_unused_spks = txout_index.unused_spks(..);
/// let change_index = 1;
/// let unused_change_spks =
/// txout_index.unused_spks((change_index, u32::MIN)..(change_index, u32::MAX));
/// ```
pub fn unused_spks<R>(&self, range: R) -> impl DoubleEndedIterator<Item = (&I, &Script)>
where
R: RangeBounds<I>,
{
self.unused
.range(range)
.map(move |index| (index, self.spk_at_index(index).expect("must exist")))
}
/// Returns whether the script pubkey at `index` has been used or not.
///
/// Here, "unused" means that after the script pubkey was stored in the index, the index has
/// never scanned a transaction output with it.
pub fn is_used(&self, index: &I) -> bool {
self.unused.get(index).is_none()
}
/// Marks the script pubkey at `index` as used even though the index hasn't seen an output paying
/// to it. This only has an effect if the `index` was already added to `self` and was still unused.
///
/// Returns whether the `index` was initially present as `unused`.
///
/// This is useful when you want to reserve a script pubkey for something but don't want to add
/// the transaction output using it to the index yet. Other callers will consider the `index` used
/// until you call [`unmark_used`].
///
/// [`unmark_used`]: Self::unmark_used
pub fn mark_used(&mut self, index: &I) -> bool {
self.unused.remove(index)
}
/// Undoes the effect of [`mark_used`]. Returns whether the `index` is inserted back into
/// `unused`.
///
/// Note that if `self` has scanned an output with this script pubkey then this will have no
/// effect.
///
/// [`mark_used`]: Self::mark_used
pub fn unmark_used(&mut self, index: &I) -> bool {
// we cannot set the index as unused when it does not exist
if !self.spks.contains_key(index) {
return false;
}
// we cannot set the index as unused when txouts are indexed under it
if self.outputs_in_range(index..=index).next().is_some() {
return false;
}
self.unused.insert(index.clone())
}
/// Returns the index associated with the script pubkey.
pub fn index_of_spk(&self, script: &Script) -> Option<&I> {
self.spk_indices.get(script)
}
/// Computes total input value going from script pubkeys in the index (sent) and the total output
/// value going to script pubkeys in the index (received) in `tx`. For the `sent` to be computed
/// correctly, the output being spent must have already been scanned by the index. Calculating
/// received just uses the transaction outputs directly, so it will be correct even if it has not
/// been scanned.
pub fn sent_and_received(&self, tx: &Transaction) -> (u64, u64) {
let mut sent = 0;
let mut received = 0;
for txin in &tx.input {
if let Some((_, txout)) = self.txout(txin.previous_output) {
sent += txout.value;
}
}
for txout in &tx.output {
if self.index_of_spk(&txout.script_pubkey).is_some() {
received += txout.value;
}
}
(sent, received)
}
/// Computes the net value that this transaction gives to the script pubkeys in the index and
/// *takes* from the transaction outputs in the index. Shorthand for calling
/// [`sent_and_received`] and subtracting sent from received.
///
/// [`sent_and_received`]: Self::sent_and_received
pub fn net_value(&self, tx: &Transaction) -> i64 {
let (sent, received) = self.sent_and_received(tx);
received as i64 - sent as i64
}
/// Whether any of the inputs of this transaction spend a tracked txout, or whether any output
/// matches one of our script pubkeys.
///
/// It is easily possible to misuse this method and get false negatives by calling it before you
/// have scanned the `TxOut`s the transaction is spending. For example, if you want to filter out
/// all the transactions in a block that are irrelevant, you **must first scan all the
/// transactions in the block** and only then use this method.
pub fn is_relevant(&self, tx: &Transaction) -> bool {
let input_matches = tx
.input
.iter()
.any(|input| self.txouts.contains_key(&input.previous_output));
let output_matches = tx
.output
.iter()
.any(|output| self.spk_indices.contains_key(&output.script_pubkey));
input_matches || output_matches
}
}
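// A minimal sketch of the tuple-range query enabled by `outputs_in_range` above, with
// spks keyed by a hypothetical `(keychain, derivation_index)` pair, mirroring the
// `unused_spks` doc example. The empty script and the txid are arbitrary placeholders.
// Not part of the original file.
#[cfg(test)]
mod range_sketch {
    use super::*;
    use bitcoin::hashes::Hash;

    #[test]
    fn outputs_in_range_by_keychain() {
        let mut index = SpkTxOutIndex::<(u32, u32)>::default();
        let spk = Script::new();
        index.insert_spk((1, 0), spk.clone());

        // Index a txout paying to the tracked spk.
        let op = OutPoint::new(Txid::hash(b"fake tx"), 0);
        let _ = index.scan_txout(op, &TxOut { value: 10_000, script_pubkey: spk });

        // A half-open tuple range selects every output of keychain 1 only.
        let outputs = index
            .outputs_in_range((1, u32::MIN)..(2, u32::MIN))
            .collect::<alloc::vec::Vec<_>>();
        assert_eq!(outputs, alloc::vec![(&(1, 0), op)]);
    }
}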


@@ -1,120 +0,0 @@
use crate::collections::BTreeMap;
use crate::collections::BTreeSet;
use crate::BlockId;
use alloc::vec::Vec;
use bitcoin::{Block, OutPoint, Transaction, TxOut};
/// Trait to do something with every txout contained in a structure.
///
/// We would prefer to just work with things that can give us an `Iterator<Item=(OutPoint, &TxOut)>`
/// here, but Rust's type system makes it extremely hard to do this (without trait objects).
pub trait ForEachTxOut {
/// The provided closure `f` will be called with each `outpoint/txout` pair.
fn for_each_txout(&self, f: impl FnMut((OutPoint, &TxOut)));
}
impl ForEachTxOut for Block {
fn for_each_txout(&self, mut f: impl FnMut((OutPoint, &TxOut))) {
for tx in self.txdata.iter() {
tx.for_each_txout(&mut f)
}
}
}
impl ForEachTxOut for Transaction {
fn for_each_txout(&self, mut f: impl FnMut((OutPoint, &TxOut))) {
let txid = self.txid();
for (i, txout) in self.output.iter().enumerate() {
f((
OutPoint {
txid,
vout: i as u32,
},
txout,
))
}
}
}
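// A minimal sketch of the abstraction above: tally a transaction's output value
// through `for_each_txout` instead of touching `tx.output` directly. The transaction
// is a throwaway two-output dummy. Not part of the original file.
#[cfg(test)]
mod for_each_txout_sketch {
    use super::*;
    use bitcoin::{PackedLockTime, Script};

    #[test]
    fn tally_outputs() {
        let tx = Transaction {
            version: 2,
            lock_time: PackedLockTime(0),
            input: alloc::vec![],
            output: alloc::vec![
                TxOut { value: 1_000, script_pubkey: Script::new() },
                TxOut { value: 2_000, script_pubkey: Script::new() },
            ],
        };
        let mut total = 0;
        tx.for_each_txout(|(_op, txout)| total += txout.value);
        assert_eq!(total, 3_000);
    }
}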
/// Trait that "anchors" blockchain data to a specific block of height and hash.
///
/// I.e. If transaction A is anchored in block B, then if block B is in the best chain, we can
/// assume that transaction A is also confirmed in the best chain. This does not necessarily mean
/// that transaction A is confirmed in block B. It could also mean transaction A is confirmed in a
/// parent block of B.
pub trait Anchor: core::fmt::Debug + Clone + Eq + PartialOrd + Ord + core::hash::Hash {
/// Returns the [`BlockId`] that the associated blockchain data is "anchored" in.
fn anchor_block(&self) -> BlockId;
/// Get the upper bound of the chain data's confirmation height.
///
/// The default definition gives a pessimistic answer. This can be overridden by the `Anchor`
/// implementation for a more accurate value.
fn confirmation_height_upper_bound(&self) -> u32 {
self.anchor_block().height
}
}
impl<A: Anchor> Anchor for &'static A {
fn anchor_block(&self) -> BlockId {
<A as Anchor>::anchor_block(self)
}
}
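// A minimal sketch of implementing `Anchor` for a type that also records the exact
// confirmation height, tightening the pessimistic default upper bound. `bdk_chain`
// ships a similar `ConfirmationHeightAnchor`; this standalone type exists only for
// illustration. Not part of the original file.
#[cfg(test)]
mod anchor_sketch {
    use super::*;

    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
    struct ExactHeightAnchor {
        // The block this data is anchored to (it may be a descendant of the block
        // that actually confirmed the transaction).
        anchor_block: BlockId,
        // The exact height at which the anchored transaction confirmed.
        confirmation_height: u32,
    }

    impl Anchor for ExactHeightAnchor {
        fn anchor_block(&self) -> BlockId {
            self.anchor_block
        }

        // Override the default: we know the precise height, not just an upper bound.
        fn confirmation_height_upper_bound(&self) -> u32 {
            self.confirmation_height
        }
    }

    #[test]
    fn upper_bound_is_exact() {
        use bitcoin::hashes::Hash;
        let anchor = ExactHeightAnchor {
            anchor_block: BlockId { height: 100, hash: Hash::hash(b"A") },
            confirmation_height: 42,
        };
        assert_eq!(anchor.confirmation_height_upper_bound(), 42);
    }
}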
/// Trait that makes an object appendable.
pub trait Append {
/// Append another object of the same type onto `self`.
fn append(&mut self, other: Self);
/// Returns whether the structure is considered empty.
fn is_empty(&self) -> bool;
}
impl Append for () {
fn append(&mut self, _other: Self) {}
fn is_empty(&self) -> bool {
true
}
}
impl<K: Ord, V> Append for BTreeMap<K, V> {
fn append(&mut self, mut other: Self) {
BTreeMap::append(self, &mut other)
}
fn is_empty(&self) -> bool {
BTreeMap::is_empty(self)
}
}
impl<T: Ord> Append for BTreeSet<T> {
fn append(&mut self, mut other: Self) {
BTreeSet::append(self, &mut other)
}
fn is_empty(&self) -> bool {
BTreeSet::is_empty(self)
}
}
impl<T> Append for Vec<T> {
fn append(&mut self, mut other: Self) {
Vec::append(self, &mut other)
}
fn is_empty(&self) -> bool {
Vec::is_empty(self)
}
}
impl<A: Append, B: Append> Append for (A, B) {
fn append(&mut self, other: Self) {
Append::append(&mut self.0, other.0);
Append::append(&mut self.1, other.1);
}
fn is_empty(&self) -> bool {
Append::is_empty(&self.0) && Append::is_empty(&self.1)
}
}
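// A minimal sketch of how `Append` composes, assuming the impls above: a tuple of
// changesets merges element-wise and is only empty when both halves are. Not part of
// the original file.
#[cfg(test)]
mod append_sketch {
    use super::*;

    #[test]
    fn tuple_append() {
        let mut changeset: (Vec<u32>, BTreeSet<u32>) = (Vec::from([1]), BTreeSet::new());
        changeset.append((Vec::from([2, 3]), BTreeSet::from([7])));
        assert_eq!(changeset.0, Vec::from([1, 2, 3]));
        assert!(changeset.1.contains(&7));
        assert!(!changeset.is_empty());
    }
}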

File diff suppressed because it is too large


@@ -1,68 +0,0 @@
#[allow(unused_macros)]
macro_rules! h {
($index:literal) => {{
bitcoin::hashes::Hash::hash($index.as_bytes())
}};
}
#[allow(unused_macros)]
macro_rules! local_chain {
[ $(($height:expr, $block_hash:expr)), * ] => {{
#[allow(unused_mut)]
bdk_chain::local_chain::LocalChain::from_blocks([$(($height, $block_hash).into()),*])
}};
}
#[allow(unused_macros)]
macro_rules! chain {
($([$($tt:tt)*]),*) => { chain!( checkpoints: [$([$($tt)*]),*] ) };
(checkpoints: $($tail:tt)*) => { chain!( index: TxHeight, checkpoints: $($tail)*) };
(index: $ind:ty, checkpoints: [ $([$height:expr, $block_hash:expr]),* ] $(,txids: [$(($txid:expr, $tx_height:expr)),*])?) => {{
#[allow(unused_mut)]
let mut chain = bdk_chain::sparse_chain::SparseChain::<$ind>::from_checkpoints([$(($height, $block_hash).into()),*]);
$(
$(
let _ = chain.insert_tx($txid, $tx_height).expect("should succeed");
)*
)?
chain
}};
}
#[allow(unused_macros)]
macro_rules! changeset {
(checkpoints: $($tail:tt)*) => { changeset!(index: TxHeight, checkpoints: $($tail)*) };
(
index: $ind:ty,
checkpoints: [ $(( $height:expr, $cp_to:expr )),* ]
$(,txids: [ $(( $txid:expr, $tx_to:expr )),* ])?
) => {{
use bdk_chain::collections::BTreeMap;
#[allow(unused_mut)]
bdk_chain::sparse_chain::ChangeSet::<$ind> {
checkpoints: {
let mut changes = BTreeMap::default();
$(changes.insert($height, $cp_to);)*
changes
},
txids: {
let mut changes = BTreeMap::default();
$($(changes.insert($txid, $tx_to.map(|h: TxHeight| h.into()));)*)?
changes
}
}
}};
}
#[allow(unused)]
pub fn new_tx(lt: u32) -> bitcoin::Transaction {
bitcoin::Transaction {
version: 0x00,
lock_time: bitcoin::PackedLockTime(lt),
input: vec![],
output: vec![],
}
}


@@ -1,467 +0,0 @@
#[macro_use]
mod common;
use std::collections::{BTreeMap, BTreeSet};
use bdk_chain::{
indexed_tx_graph::{IndexedAdditions, IndexedTxGraph},
keychain::{Balance, DerivationAdditions, KeychainTxOutIndex},
local_chain::LocalChain,
tx_graph::Additions,
BlockId, ChainPosition, ConfirmationHeightAnchor,
};
use bitcoin::{secp256k1::Secp256k1, BlockHash, OutPoint, Script, Transaction, TxIn, TxOut};
use miniscript::Descriptor;
/// Ensure [`IndexedTxGraph::insert_relevant_txs`] can successfully index transactions NOT presented
/// in topological order.
///
/// Given 3 transactions (A, B, C), where A has 2 owned outputs and B and C each spend one of them.
/// Typically, we would only know whether B and C are relevant if we have indexed A (A's outpoints
/// are associated with owned spks in the index). Ensure insertion and indexing are agnostic to
/// topological order.
#[test]
fn insert_relevant_txs() {
const DESCRIPTOR: &str = "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/0/*)";
let (descriptor, _) = Descriptor::parse_descriptor(&Secp256k1::signing_only(), DESCRIPTOR)
.expect("must be valid");
let spk_0 = descriptor.at_derivation_index(0).script_pubkey();
let spk_1 = descriptor.at_derivation_index(9).script_pubkey();
let mut graph = IndexedTxGraph::<ConfirmationHeightAnchor, KeychainTxOutIndex<()>>::default();
graph.index.add_keychain((), descriptor);
graph.index.set_lookahead(&(), 10);
let tx_a = Transaction {
output: vec![
TxOut {
value: 10_000,
script_pubkey: spk_0,
},
TxOut {
value: 20_000,
script_pubkey: spk_1,
},
],
..common::new_tx(0)
};
let tx_b = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_a.txid(), 0),
..Default::default()
}],
..common::new_tx(1)
};
let tx_c = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_a.txid(), 1),
..Default::default()
}],
..common::new_tx(2)
};
let txs = [tx_c, tx_b, tx_a];
assert_eq!(
graph.insert_relevant_txs(txs.iter().map(|tx| (tx, None)), None),
IndexedAdditions {
graph_additions: Additions {
txs: txs.into(),
..Default::default()
},
index_additions: DerivationAdditions([((), 9_u32)].into()),
}
)
}
#[test]
/// Ensure consistency of IndexedTxGraph's list_* and balance methods. These methods list
/// relevant txouts and utxos using information fetched from a ChainOracle (here a LocalChain).
///
/// Test Setup:
///
/// Local Chain => <0> ----- <1> ----- <2> ----- <3> ---- ... ---- <150>
///
/// Keychains:
///
/// keychain_1: Trusted
/// keychain_2: Untrusted
///
/// Transactions:
///
/// tx1: A coinbase, sending 70000 sats to a "trusted" address. [Block 0]
/// tx2: An external receive, sending 30000 sats to an "untrusted" address. [Block 1]
/// tx3: An internal spend. Spends tx2 and returns change of 10000 to a "trusted" address. [Block 2]
/// tx4: A mempool tx, sending 20000 sats to an "untrusted" address.
/// tx5: A mempool tx, sending 15000 sats to a "trusted" address.
/// tx6: A completely unrelated tx. [Block 3]
///
/// The transactions are added via `insert_relevant_txs`.
/// The `list_owned_txout`, `list_owned_utxos` and `balance` methods are asserted
/// with expected values at block heights 0, 1, and 2.
///
/// Finally, more blocks are added to the local chain until the tx1 coinbase matures.
/// Maturity is asserted around the inflection point, at block heights 98 and 100.
fn test_list_owned_txouts() {
// Create a local chain of 150 blocks
let local_chain = (0..150)
.map(|i| (i as u32, h!("random")))
.collect::<BTreeMap<u32, BlockHash>>();
let local_chain = LocalChain::from(local_chain);
// Initialize the IndexedTxGraph
let (desc_1, _) = Descriptor::parse_descriptor(&Secp256k1::signing_only(), "tr(tprv8ZgxMBicQKsPd3krDUsBAmtnRsK3rb8u5yi1zhQgMhF1tR8MW7xfE4rnrbbsrbPR52e7rKapu6ztw1jXveJSCGHEriUGZV7mCe88duLp5pj/86'/1'/0'/0/*)").unwrap();
let (desc_2, _) = Descriptor::parse_descriptor(&Secp256k1::signing_only(), "tr(tprv8ZgxMBicQKsPd3krDUsBAmtnRsK3rb8u5yi1zhQgMhF1tR8MW7xfE4rnrbbsrbPR52e7rKapu6ztw1jXveJSCGHEriUGZV7mCe88duLp5pj/86'/1'/0'/1/*)").unwrap();
let mut graph =
IndexedTxGraph::<ConfirmationHeightAnchor, KeychainTxOutIndex<String>>::default();
graph.index.add_keychain("keychain_1".into(), desc_1);
graph.index.add_keychain("keychain_2".into(), desc_2);
graph.index.set_lookahead_for_all(10);
// Get trusted and untrusted addresses
let mut trusted_spks = Vec::new();
let mut untrusted_spks = Vec::new();
{
// we need a scope here to limit the borrows taken on the graph
for _ in 0..10 {
let ((_, script), _) = graph.index.reveal_next_spk(&"keychain_1".to_string());
// TODO Assert indexes
trusted_spks.push(script.clone());
}
}
{
for _ in 0..10 {
let ((_, script), _) = graph.index.reveal_next_spk(&"keychain_2".to_string());
untrusted_spks.push(script.clone());
}
}
// Create test transactions
// tx1 is the genesis coinbase
let tx1 = Transaction {
input: vec![TxIn {
previous_output: OutPoint::null(),
..Default::default()
}],
output: vec![TxOut {
value: 70000,
script_pubkey: trusted_spks[0].clone(),
}],
..common::new_tx(0)
};
// tx2 is an incoming transaction received at the untrusted keychain, confirmed at block 1.
let tx2 = Transaction {
output: vec![TxOut {
value: 30000,
script_pubkey: untrusted_spks[0].clone(),
}],
..common::new_tx(0)
};
// tx3 spends tx2 and sends change back to the trusted keychain. Confirmed at block 2.
let tx3 = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx2.txid(), 0),
..Default::default()
}],
output: vec![TxOut {
value: 10000,
script_pubkey: trusted_spks[1].clone(),
}],
..common::new_tx(0)
};
// tx4 is an external transaction received at the untrusted keychain, unconfirmed.
let tx4 = Transaction {
output: vec![TxOut {
value: 20000,
script_pubkey: untrusted_spks[1].clone(),
}],
..common::new_tx(0)
};
// tx5 receives 15000 sats at the trusted keychain, unconfirmed.
let tx5 = Transaction {
output: vec![TxOut {
value: 15000,
script_pubkey: trusted_spks[2].clone(),
}],
..common::new_tx(0)
};
// tx6 is an unrelated transaction, confirmed at block 3.
let tx6 = common::new_tx(0);
// Insert transactions into graph with respective anchors
// For unconfirmed txs we pass in `None`.
let _ = graph.insert_relevant_txs(
[&tx1, &tx2, &tx3, &tx6].iter().enumerate().map(|(i, tx)| {
let height = i as u32;
(
*tx,
local_chain
.blocks()
.get(&height)
.map(|&hash| BlockId { height, hash })
.map(|anchor_block| ConfirmationHeightAnchor {
anchor_block,
confirmation_height: anchor_block.height,
}),
)
}),
None,
);
let _ = graph.insert_relevant_txs([&tx4, &tx5].iter().map(|tx| (*tx, None)), Some(100));
// A helper closure to extract and filter data from the graph.
let fetch =
|height: u32,
graph: &IndexedTxGraph<ConfirmationHeightAnchor, KeychainTxOutIndex<String>>| {
let chain_tip = local_chain
.blocks()
.get(&height)
.map(|&hash| BlockId { height, hash })
.expect("block must exist");
let txouts = graph
.graph()
.filter_chain_txouts(
&local_chain,
chain_tip,
graph.index.outpoints().iter().cloned(),
)
.collect::<Vec<_>>();
let utxos = graph
.graph()
.filter_chain_unspents(
&local_chain,
chain_tip,
graph.index.outpoints().iter().cloned(),
)
.collect::<Vec<_>>();
let balance = graph.graph().balance(
&local_chain,
chain_tip,
graph.index.outpoints().iter().cloned(),
|_, spk: &Script| trusted_spks.contains(spk),
);
assert_eq!(txouts.len(), 5);
assert_eq!(utxos.len(), 4);
let confirmed_txouts_txid = txouts
.iter()
.filter_map(|(_, full_txout)| {
if matches!(full_txout.chain_position, ChainPosition::Confirmed(_)) {
Some(full_txout.outpoint.txid)
} else {
None
}
})
.collect::<BTreeSet<_>>();
let unconfirmed_txouts_txid = txouts
.iter()
.filter_map(|(_, full_txout)| {
if matches!(full_txout.chain_position, ChainPosition::Unconfirmed(_)) {
Some(full_txout.outpoint.txid)
} else {
None
}
})
.collect::<BTreeSet<_>>();
let confirmed_utxos_txid = utxos
.iter()
.filter_map(|(_, full_txout)| {
if matches!(full_txout.chain_position, ChainPosition::Confirmed(_)) {
Some(full_txout.outpoint.txid)
} else {
None
}
})
.collect::<BTreeSet<_>>();
let unconfirmed_utxos_txid = utxos
.iter()
.filter_map(|(_, full_txout)| {
if matches!(full_txout.chain_position, ChainPosition::Unconfirmed(_)) {
Some(full_txout.outpoint.txid)
} else {
None
}
})
.collect::<BTreeSet<_>>();
(
confirmed_txouts_txid,
unconfirmed_txouts_txid,
confirmed_utxos_txid,
unconfirmed_utxos_txid,
balance,
)
};
// ----- TEST BLOCK -----
// AT Block 0
{
let (
confirmed_txouts_txid,
unconfirmed_txouts_txid,
confirmed_utxos_txid,
unconfirmed_utxos_txid,
balance,
) = fetch(0, &graph);
assert_eq!(confirmed_txouts_txid, [tx1.txid()].into());
assert_eq!(
unconfirmed_txouts_txid,
[tx2.txid(), tx3.txid(), tx4.txid(), tx5.txid()].into()
);
assert_eq!(confirmed_utxos_txid, [tx1.txid()].into());
assert_eq!(
unconfirmed_utxos_txid,
[tx3.txid(), tx4.txid(), tx5.txid()].into()
);
assert_eq!(
balance,
Balance {
immature: 70000, // immature coinbase
trusted_pending: 25000, // tx3 + tx5
untrusted_pending: 20000, // tx4
confirmed: 0 // Nothing is confirmed yet
}
);
}
// AT Block 1
{
let (
confirmed_txouts_txid,
unconfirmed_txouts_txid,
confirmed_utxos_txid,
unconfirmed_utxos_txid,
balance,
) = fetch(1, &graph);
// tx2 gets into confirmed txout set
assert_eq!(confirmed_txouts_txid, [tx1.txid(), tx2.txid()].into());
assert_eq!(
unconfirmed_txouts_txid,
[tx3.txid(), tx4.txid(), tx5.txid()].into()
);
// tx2 doesn't get into confirmed utxos set
assert_eq!(confirmed_utxos_txid, [tx1.txid()].into());
assert_eq!(
unconfirmed_utxos_txid,
[tx3.txid(), tx4.txid(), tx5.txid()].into()
);
assert_eq!(
balance,
Balance {
immature: 70000, // immature coinbase
trusted_pending: 25000, // tx3 + tx5
untrusted_pending: 20000, // tx4
confirmed: 0 // Nothing is confirmed yet
}
);
}
// AT Block 2
{
let (
confirmed_txouts_txid,
unconfirmed_txouts_txid,
confirmed_utxos_txid,
unconfirmed_utxos_txid,
balance,
) = fetch(2, &graph);
// tx3 now gets into the confirmed txout set
assert_eq!(
confirmed_txouts_txid,
[tx1.txid(), tx2.txid(), tx3.txid()].into()
);
assert_eq!(unconfirmed_txouts_txid, [tx4.txid(), tx5.txid()].into());
// tx3 also gets into confirmed utxo set
assert_eq!(confirmed_utxos_txid, [tx1.txid(), tx3.txid()].into());
assert_eq!(unconfirmed_utxos_txid, [tx4.txid(), tx5.txid()].into());
assert_eq!(
balance,
Balance {
immature: 70000, // immature coinbase
trusted_pending: 15000, // tx5
untrusted_pending: 20000, // tx4
confirmed: 10000 // tx3 got confirmed
}
);
}
// AT Block 98
{
let (
confirmed_txouts_txid,
unconfirmed_txouts_txid,
confirmed_utxos_txid,
unconfirmed_utxos_txid,
balance,
) = fetch(98, &graph);
assert_eq!(
confirmed_txouts_txid,
[tx1.txid(), tx2.txid(), tx3.txid()].into()
);
assert_eq!(unconfirmed_txouts_txid, [tx4.txid(), tx5.txid()].into());
assert_eq!(confirmed_utxos_txid, [tx1.txid(), tx3.txid()].into());
assert_eq!(unconfirmed_utxos_txid, [tx4.txid(), tx5.txid()].into());
// Coinbase is still immature
assert_eq!(
balance,
Balance {
immature: 70000, // immature coinbase
trusted_pending: 15000, // tx5
untrusted_pending: 20000, // tx4
confirmed: 10000 // tx3; the coinbase has not matured yet
}
);
}
// AT Block 100
{
let (_, _, _, _, balance) = fetch(100, &graph);
// Coinbase maturity hits
assert_eq!(
balance,
Balance {
immature: 0, // coinbase matured
trusted_pending: 15000, // tx5
untrusted_pending: 20000, // tx4
confirmed: 80000 // tx1 + tx3
}
);
}
}


@@ -1,368 +0,0 @@
#![cfg(feature = "miniscript")]
#[macro_use]
mod common;
use bdk_chain::{
collections::BTreeMap,
keychain::{DerivationAdditions, KeychainTxOutIndex},
};
use bitcoin::{secp256k1::Secp256k1, OutPoint, Script, Transaction, TxOut};
use miniscript::{Descriptor, DescriptorPublicKey};
#[derive(Clone, Debug, PartialEq, Eq, Ord, PartialOrd)]
enum TestKeychain {
External,
Internal,
}
fn init_txout_index() -> (
bdk_chain::keychain::KeychainTxOutIndex<TestKeychain>,
Descriptor<DescriptorPublicKey>,
Descriptor<DescriptorPublicKey>,
) {
let mut txout_index = bdk_chain::keychain::KeychainTxOutIndex::<TestKeychain>::default();
let secp = bdk_chain::bitcoin::secp256k1::Secp256k1::signing_only();
let (external_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/0/*)").unwrap();
let (internal_descriptor,_) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "tr([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/*)").unwrap();
txout_index.add_keychain(TestKeychain::External, external_descriptor.clone());
txout_index.add_keychain(TestKeychain::Internal, internal_descriptor.clone());
(txout_index, external_descriptor, internal_descriptor)
}
fn spk_at_index(descriptor: &Descriptor<DescriptorPublicKey>, index: u32) -> Script {
descriptor
.derived_descriptor(&Secp256k1::verification_only(), index)
.expect("must derive")
.script_pubkey()
}
#[test]
fn test_set_all_derivation_indices() {
let (mut txout_index, _, _) = init_txout_index();
let derive_to: BTreeMap<_, _> =
[(TestKeychain::External, 12), (TestKeychain::Internal, 24)].into();
assert_eq!(
txout_index.reveal_to_target_multi(&derive_to).1.as_inner(),
&derive_to
);
assert_eq!(txout_index.last_revealed_indices(), &derive_to);
assert_eq!(
txout_index.reveal_to_target_multi(&derive_to).1,
DerivationAdditions::default(),
"no changes if we set to the same thing"
);
}
#[test]
fn test_lookahead() {
let (mut txout_index, external_desc, internal_desc) = init_txout_index();
// ensure it does not break anything if lookahead is set multiple times
(0..=10).for_each(|lookahead| txout_index.set_lookahead(&TestKeychain::External, lookahead));
(0..=20)
.filter(|v| v % 2 == 0)
.for_each(|lookahead| txout_index.set_lookahead(&TestKeychain::Internal, lookahead));
assert_eq!(txout_index.inner().all_spks().len(), 30);
// given:
// - external lookahead set to 10
// - internal lookahead set to 20
// when:
// - set external derivation index to value higher than last, but within the lookahead value
// expect:
// - scripts cached in spk_txout_index should increase correctly
// - stored scripts of external keychain should be of expected counts
for index in (0..20).skip_while(|i| i % 2 == 1) {
let (revealed_spks, revealed_additions) =
txout_index.reveal_to_target(&TestKeychain::External, index);
assert_eq!(
revealed_spks.collect::<Vec<_>>(),
vec![(index, spk_at_index(&external_desc, index))],
);
assert_eq!(
revealed_additions.as_inner(),
&[(TestKeychain::External, index)].into()
);
assert_eq!(
txout_index.inner().all_spks().len(),
10 /* external lookahead */ +
20 /* internal lookahead */ +
index as usize + 1 /* `derived` count */
);
assert_eq!(
txout_index
.revealed_spks_of_keychain(&TestKeychain::External)
.count(),
index as usize + 1,
);
assert_eq!(
txout_index
.revealed_spks_of_keychain(&TestKeychain::Internal)
.count(),
0,
);
assert_eq!(
txout_index
.unused_spks_of_keychain(&TestKeychain::External)
.count(),
index as usize + 1,
);
assert_eq!(
txout_index
.unused_spks_of_keychain(&TestKeychain::Internal)
.count(),
0,
);
}
// given:
// - internal lookahead is 20
// - internal derivation index is `None`
// when:
// - derivation index is set ahead of current derivation index + lookahead
// expect:
// - scripts cached in spk_txout_index should increase correctly, a.k.a. no scripts are skipped
let (revealed_spks, revealed_additions) =
txout_index.reveal_to_target(&TestKeychain::Internal, 24);
assert_eq!(
revealed_spks.collect::<Vec<_>>(),
(0..=24)
.map(|index| (index, spk_at_index(&internal_desc, index)))
.collect::<Vec<_>>(),
);
assert_eq!(
revealed_additions.as_inner(),
&[(TestKeychain::Internal, 24)].into()
);
assert_eq!(
txout_index.inner().all_spks().len(),
10 /* external lookahead */ +
20 /* internal lookahead */ +
20 /* external stored index count */ +
25 /* internal stored index count */
);
assert_eq!(
txout_index
.revealed_spks_of_keychain(&TestKeychain::Internal)
.count(),
25,
);
// ensure derivation indices are expected for each keychain
let last_external_index = txout_index
.last_revealed_index(&TestKeychain::External)
.expect("already derived");
let last_internal_index = txout_index
.last_revealed_index(&TestKeychain::Internal)
.expect("already derived");
assert_eq!(last_external_index, 19);
assert_eq!(last_internal_index, 24);
// when:
// - scanning txouts with spks within stored indexes
// expect:
// - no changes to stored index counts
let external_iter = 0..=last_external_index;
let internal_iter = last_internal_index - last_external_index..=last_internal_index;
for (external_index, internal_index) in external_iter.zip(internal_iter) {
let tx = Transaction {
output: vec![
TxOut {
script_pubkey: external_desc
.at_derivation_index(external_index)
.script_pubkey(),
value: 10_000,
},
TxOut {
script_pubkey: internal_desc
.at_derivation_index(internal_index)
.script_pubkey(),
value: 10_000,
},
],
..common::new_tx(external_index)
};
assert_eq!(txout_index.scan(&tx), DerivationAdditions::default());
assert_eq!(
txout_index.last_revealed_index(&TestKeychain::External),
Some(last_external_index)
);
assert_eq!(
txout_index.last_revealed_index(&TestKeychain::Internal),
Some(last_internal_index)
);
assert_eq!(
txout_index
.revealed_spks_of_keychain(&TestKeychain::External)
.count(),
last_external_index as usize + 1,
);
assert_eq!(
txout_index
.revealed_spks_of_keychain(&TestKeychain::Internal)
.count(),
last_internal_index as usize + 1,
);
}
}
// when:
// - scanning txouts with spks above last stored index
// expect:
// - last revealed index should increase as expected
// - last used index should change as expected
#[test]
fn test_scan_with_lookahead() {
let (mut txout_index, external_desc, _) = init_txout_index();
txout_index.set_lookahead_for_all(10);
let spks: BTreeMap<u32, Script> = [0, 10, 20, 30]
.into_iter()
.map(|i| (i, external_desc.at_derivation_index(i).script_pubkey()))
.collect();
for (&spk_i, spk) in &spks {
let op = OutPoint::new(h!("fake tx"), spk_i);
let txout = TxOut {
script_pubkey: spk.clone(),
value: 0,
};
let additions = txout_index.scan_txout(op, &txout);
assert_eq!(
additions.as_inner(),
&[(TestKeychain::External, spk_i)].into()
);
assert_eq!(
txout_index.last_revealed_index(&TestKeychain::External),
Some(spk_i)
);
assert_eq!(
txout_index.last_used_index(&TestKeychain::External),
Some(spk_i)
);
}
// now try with index 41 (lookahead surpassed), we expect that the txout to not be indexed
let spk_41 = external_desc.at_derivation_index(41).script_pubkey();
let op = OutPoint::new(h!("fake tx"), 41);
let txout = TxOut {
script_pubkey: spk_41,
value: 0,
};
let additions = txout_index.scan_txout(op, &txout);
assert!(additions.is_empty());
}
#[test]
fn test_wildcard_derivations() {
let (mut txout_index, external_desc, _) = init_txout_index();
let external_spk_0 = external_desc.at_derivation_index(0).script_pubkey();
let external_spk_16 = external_desc.at_derivation_index(16).script_pubkey();
let external_spk_26 = external_desc.at_derivation_index(26).script_pubkey();
let external_spk_27 = external_desc.at_derivation_index(27).script_pubkey();
// - nothing is derived
// - unused list is also empty
//
// - next_derivation_index() == (0, true)
// - derive_new() == ((0, <spk>), DerivationAdditions)
// - next_unused() == ((0, <spk>), DerivationAdditions:is_empty())
assert_eq!(txout_index.next_index(&TestKeychain::External), (0, true));
let (spk, changeset) = txout_index.reveal_next_spk(&TestKeychain::External);
assert_eq!(spk, (0_u32, &external_spk_0));
assert_eq!(changeset.as_inner(), &[(TestKeychain::External, 0)].into());
let (spk, changeset) = txout_index.next_unused_spk(&TestKeychain::External);
assert_eq!(spk, (0_u32, &external_spk_0));
assert_eq!(changeset.as_inner(), &[].into());
// - derived till 25
// - used all spks till 15.
// - used list : [0..=15, 17, 20, 23]
// - unused list: [16, 18, 19, 21, 22, 24, 25]
// - next_derivation_index() = (26, true)
// - derive_new() = ((26, <spk>), DerivationAdditions)
// - next_unused() == ((16, <spk>), DerivationAdditions::is_empty())
let _ = txout_index.reveal_to_target(&TestKeychain::External, 25);
(0..=15)
.chain(vec![17, 20, 23].into_iter())
.for_each(|index| assert!(txout_index.mark_used(&TestKeychain::External, index)));
assert_eq!(txout_index.next_index(&TestKeychain::External), (26, true));
let (spk, changeset) = txout_index.reveal_next_spk(&TestKeychain::External);
assert_eq!(spk, (26, &external_spk_26));
assert_eq!(changeset.as_inner(), &[(TestKeychain::External, 26)].into());
let (spk, changeset) = txout_index.next_unused_spk(&TestKeychain::External);
assert_eq!(spk, (16, &external_spk_16));
assert_eq!(changeset.as_inner(), &[].into());
// - Use all the derived till 26.
// - next_unused() = ((27, <spk>), DerivationAdditions)
(0..=26).for_each(|index| {
txout_index.mark_used(&TestKeychain::External, index);
});
let (spk, changeset) = txout_index.next_unused_spk(&TestKeychain::External);
assert_eq!(spk, (27, &external_spk_27));
assert_eq!(changeset.as_inner(), &[(TestKeychain::External, 27)].into());
}
#[test]
fn test_non_wildcard_derivations() {
let mut txout_index = KeychainTxOutIndex::<TestKeychain>::default();
let secp = bitcoin::secp256k1::Secp256k1::signing_only();
let (no_wildcard_descriptor, _) = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, "wpkh([73c5da0a/86'/0'/0']xprv9xgqHN7yz9MwCkxsBPN5qetuNdQSUttZNKw1dcYTV4mkaAFiBVGQziHs3NRSWMkCzvgjEe3n9xV8oYywvM8at9yRqyaZVz6TYYhX98VjsUk/1/0)").unwrap();
let external_spk = no_wildcard_descriptor
.at_derivation_index(0)
.script_pubkey();
txout_index.add_keychain(TestKeychain::External, no_wildcard_descriptor);
// given:
// - `txout_index` with no stored scripts
// expect:
// - next derivation index should be new
// - when we derive a new script, script @ index 0
// - when we get the next unused script, script @ index 0
assert_eq!(txout_index.next_index(&TestKeychain::External), (0, true));
let (spk, changeset) = txout_index.reveal_next_spk(&TestKeychain::External);
assert_eq!(spk, (0, &external_spk));
assert_eq!(changeset.as_inner(), &[(TestKeychain::External, 0)].into());
let (spk, changeset) = txout_index.next_unused_spk(&TestKeychain::External);
assert_eq!(spk, (0, &external_spk));
assert_eq!(changeset.as_inner(), &[].into());
// given:
// - the non-wildcard descriptor already has a stored and used script
// expect:
// - next derivation index should not be new
// - derive new and next unused should return the old script
// - store_up_to should not panic and return empty additions
assert_eq!(txout_index.next_index(&TestKeychain::External), (0, false));
txout_index.mark_used(&TestKeychain::External, 0);
let (spk, changeset) = txout_index.reveal_next_spk(&TestKeychain::External);
assert_eq!(spk, (0, &external_spk));
assert_eq!(changeset.as_inner(), &[].into());
let (spk, changeset) = txout_index.next_unused_spk(&TestKeychain::External);
assert_eq!(spk, (0, &external_spk));
assert_eq!(changeset.as_inner(), &[].into());
let (revealed_spks, revealed_additions) =
txout_index.reveal_to_target(&TestKeychain::External, 200);
assert_eq!(revealed_spks.count(), 0);
assert!(revealed_additions.is_empty());
}


@@ -1,228 +0,0 @@
use bdk_chain::local_chain::{
ChangeSet, InsertBlockNotMatchingError, LocalChain, UpdateNotConnectedError,
};
use bitcoin::BlockHash;
#[macro_use]
mod common;
#[test]
fn add_first_tip() {
let chain = LocalChain::default();
assert_eq!(
chain.determine_changeset(&local_chain![(0, h!("A"))]),
Ok([(0, Some(h!("A")))].into()),
"add first tip"
);
}
#[test]
fn add_second_tip() {
let chain = local_chain![(0, h!("A"))];
assert_eq!(
chain.determine_changeset(&local_chain![(0, h!("A")), (1, h!("B"))]),
Ok([(1, Some(h!("B")))].into())
);
}
#[test]
fn two_disjoint_chains_cannot_merge() {
let chain1 = local_chain![(0, h!("A"))];
let chain2 = local_chain![(1, h!("B"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Err(UpdateNotConnectedError(0))
);
}
#[test]
fn duplicate_chains_should_merge() {
let chain1 = local_chain![(0, h!("A"))];
let chain2 = local_chain![(0, h!("A"))];
assert_eq!(chain1.determine_changeset(&chain2), Ok(Default::default()));
}
#[test]
fn can_introduce_older_checkpoints() {
let chain1 = local_chain![(2, h!("C")), (3, h!("D"))];
let chain2 = local_chain![(1, h!("B")), (2, h!("C"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Ok([(1, Some(h!("B")))].into())
);
}
#[test]
fn fix_blockhash_before_agreement_point() {
let chain1 = local_chain![(0, h!("im-wrong")), (1, h!("we-agree"))];
let chain2 = local_chain![(0, h!("fix")), (1, h!("we-agree"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Ok([(0, Some(h!("fix")))].into())
)
}
/// B and C are in both chain and update
/// ```
/// | 0 | 1 | 2 | 3 | 4
/// chain | B C
/// update | A B C D
/// ```
/// This should succeed with the point of agreement being C and A should be added in addition.
#[test]
fn two_points_of_agreement() {
let chain1 = local_chain![(1, h!("B")), (2, h!("C"))];
let chain2 = local_chain![(0, h!("A")), (1, h!("B")), (2, h!("C")), (3, h!("D"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Ok([(0, Some(h!("A"))), (3, Some(h!("D")))].into()),
);
}
/// Update and chain does not connect:
/// ```
/// | 0 | 1 | 2 | 3 | 4
/// chain | B C
/// update | A B D
/// ```
/// This should fail as we cannot figure out whether C & D are on the same chain
#[test]
fn update_and_chain_does_not_connect() {
let chain1 = local_chain![(1, h!("B")), (2, h!("C"))];
let chain2 = local_chain![(0, h!("A")), (1, h!("B")), (3, h!("D"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Err(UpdateNotConnectedError(2)),
);
}
/// Transient invalidation:
/// ```
/// | 0 | 1 | 2 | 3 | 4 | 5
/// chain | A B C E
/// update | A B' C' D
/// ```
/// This should succeed and invalidate B,C and E with point of agreement being A.
#[test]
fn transitive_invalidation_applies_to_checkpoints_higher_than_invalidation() {
let chain1 = local_chain![(0, h!("A")), (2, h!("B")), (3, h!("C")), (5, h!("E"))];
let chain2 = local_chain![(0, h!("A")), (2, h!("B'")), (3, h!("C'")), (4, h!("D"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Ok([
(2, Some(h!("B'"))),
(3, Some(h!("C'"))),
(4, Some(h!("D"))),
(5, None),
]
.into())
);
}
/// Transient invalidation:
/// ```
/// | 0 | 1 | 2 | 3 | 4
/// chain | B C E
/// update | B' C' D
/// ```
///
/// This should succeed and invalidate B, C and E with no point of agreement
#[test]
fn transitive_invalidation_applies_to_checkpoints_higher_than_invalidation_no_point_of_agreement() {
let chain1 = local_chain![(1, h!("B")), (2, h!("C")), (4, h!("E"))];
let chain2 = local_chain![(1, h!("B'")), (2, h!("C'")), (3, h!("D"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Ok([
(1, Some(h!("B'"))),
(2, Some(h!("C'"))),
(3, Some(h!("D"))),
(4, None)
]
.into())
)
}
/// Transient invalidation:
/// ```
/// | 0 | 1 | 2 | 3 | 4
/// chain | A B C E
/// update | B' C' D
/// ```
///
/// This should fail since although it tells us that B and C are invalid it doesn't tell us whether
/// A was invalid.
#[test]
fn invalidation_but_no_connection() {
let chain1 = local_chain![(0, h!("A")), (1, h!("B")), (2, h!("C")), (4, h!("E"))];
let chain2 = local_chain![(1, h!("B'")), (2, h!("C'")), (3, h!("D"))];
assert_eq!(
chain1.determine_changeset(&chain2),
Err(UpdateNotConnectedError(0))
)
}
#[test]
fn insert_block() {
struct TestCase {
original: LocalChain,
insert: (u32, BlockHash),
expected_result: Result<ChangeSet, InsertBlockNotMatchingError>,
expected_final: LocalChain,
}
let test_cases = [
TestCase {
original: local_chain![],
insert: (5, h!("block5")),
expected_result: Ok([(5, Some(h!("block5")))].into()),
expected_final: local_chain![(5, h!("block5"))],
},
TestCase {
original: local_chain![(3, h!("A"))],
insert: (4, h!("B")),
expected_result: Ok([(4, Some(h!("B")))].into()),
expected_final: local_chain![(3, h!("A")), (4, h!("B"))],
},
TestCase {
original: local_chain![(4, h!("B"))],
insert: (3, h!("A")),
expected_result: Ok([(3, Some(h!("A")))].into()),
expected_final: local_chain![(3, h!("A")), (4, h!("B"))],
},
TestCase {
original: local_chain![(2, h!("K"))],
insert: (2, h!("K")),
expected_result: Ok([].into()),
expected_final: local_chain![(2, h!("K"))],
},
TestCase {
original: local_chain![(2, h!("K"))],
insert: (2, h!("J")),
expected_result: Err(InsertBlockNotMatchingError {
height: 2,
original_hash: h!("K"),
update_hash: h!("J"),
}),
expected_final: local_chain![(2, h!("K"))],
},
];
for (i, t) in test_cases.into_iter().enumerate() {
let mut chain = t.original;
assert_eq!(
chain.insert_block(t.insert.into()),
t.expected_result,
"[{}] unexpected result when inserting block",
i,
);
assert_eq!(chain, t.expected_final, "[{}] unexpected final chain", i,);
}
}


@@ -1,100 +0,0 @@
use bdk_chain::SpkTxOutIndex;
use bitcoin::{hashes::hex::FromHex, OutPoint, PackedLockTime, Script, Transaction, TxIn, TxOut};
#[test]
fn spk_txout_sent_and_received() {
let spk1 = Script::from_hex("001404f1e52ce2bab3423c6a8c63b7cd730d8f12542c").unwrap();
let spk2 = Script::from_hex("00142b57404ae14f08c3a0c903feb2af7830605eb00f").unwrap();
let mut index = SpkTxOutIndex::default();
index.insert_spk(0, spk1.clone());
index.insert_spk(1, spk2.clone());
let tx1 = Transaction {
version: 0x02,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 42_000,
script_pubkey: spk1.clone(),
}],
};
assert_eq!(index.sent_and_received(&tx1), (0, 42_000));
assert_eq!(index.net_value(&tx1), 42_000);
index.scan(&tx1);
assert_eq!(
index.sent_and_received(&tx1),
(0, 42_000),
"shouldn't change after scanning"
);
let tx2 = Transaction {
version: 0x1,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: OutPoint {
txid: tx1.txid(),
vout: 0,
},
..Default::default()
}],
output: vec![
TxOut {
value: 20_000,
script_pubkey: spk2,
},
TxOut {
script_pubkey: spk1,
value: 30_000,
},
],
};
assert_eq!(index.sent_and_received(&tx2), (42_000, 50_000));
assert_eq!(index.net_value(&tx2), 8_000);
}
#[test]
fn mark_used() {
let spk1 = Script::from_hex("001404f1e52ce2bab3423c6a8c63b7cd730d8f12542c").unwrap();
let spk2 = Script::from_hex("00142b57404ae14f08c3a0c903feb2af7830605eb00f").unwrap();
let mut spk_index = SpkTxOutIndex::default();
spk_index.insert_spk(1, spk1.clone());
spk_index.insert_spk(2, spk2);
assert!(!spk_index.is_used(&1));
spk_index.mark_used(&1);
assert!(spk_index.is_used(&1));
spk_index.unmark_used(&1);
assert!(!spk_index.is_used(&1));
spk_index.mark_used(&1);
assert!(spk_index.is_used(&1));
let tx1 = Transaction {
version: 0x02,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 42_000,
script_pubkey: spk1,
}],
};
spk_index.scan(&tx1);
spk_index.unmark_used(&1);
assert!(
spk_index.is_used(&1),
"even though we unmark_used it doesn't matter because there was a tx scanned that used it"
);
}
#[test]
fn unmark_used_does_not_result_in_invalid_representation() {
let mut spk_index = SpkTxOutIndex::default();
assert!(!spk_index.unmark_used(&0));
assert!(!spk_index.unmark_used(&1));
assert!(!spk_index.unmark_used(&2));
assert!(spk_index.unused_spks(..).collect::<Vec<_>>().is_empty());
}


@@ -1,824 +0,0 @@
#[macro_use]
mod common;
use bdk_chain::{
collections::*,
local_chain::LocalChain,
tx_graph::{Additions, TxGraph},
Append, BlockId, ChainPosition, ConfirmationHeightAnchor,
};
use bitcoin::{
hashes::Hash, BlockHash, OutPoint, PackedLockTime, Script, Transaction, TxIn, TxOut, Txid,
};
use core::iter;
use std::vec;
#[test]
fn insert_txouts() {
// Two (OutPoint, TxOut) tuples that denote the original data in the graph, as partial transactions.
let original_ops = [
(
OutPoint::new(h!("tx1"), 1),
TxOut {
value: 10_000,
script_pubkey: Script::new(),
},
),
(
OutPoint::new(h!("tx1"), 2),
TxOut {
value: 20_000,
script_pubkey: Script::new(),
},
),
];
// Another (OutPoint, TxOut) tuple to be used in the update, as a partial transaction.
let update_ops = [(
OutPoint::new(h!("tx2"), 0),
TxOut {
value: 20_000,
script_pubkey: Script::new(),
},
)];
// One full transaction to be included in the update
let update_txs = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: OutPoint::null(),
..Default::default()
}],
output: vec![TxOut {
value: 30_000,
script_pubkey: Script::new(),
}],
};
// Conf anchor used to mark the full transaction as confirmed.
let conf_anchor = ChainPosition::Confirmed(BlockId {
height: 100,
hash: h!("random blockhash"),
});
// Unconfirmed anchor to mark the partial transactions as unconfirmed
let unconf_anchor = ChainPosition::<BlockId>::Unconfirmed(1000000);
// Make the original graph
let mut graph = {
let mut graph = TxGraph::<ChainPosition<BlockId>>::default();
for (outpoint, txout) in &original_ops {
assert_eq!(
graph.insert_txout(*outpoint, txout.clone()),
Additions {
txouts: [(*outpoint, txout.clone())].into(),
..Default::default()
}
);
}
graph
};
// Make the update graph
let update = {
let mut graph = TxGraph::default();
for (outpoint, txout) in &update_ops {
// Insert partial transactions.
assert_eq!(
graph.insert_txout(*outpoint, txout.clone()),
Additions {
txouts: [(*outpoint, txout.clone())].into(),
..Default::default()
}
);
// Mark them unconfirmed.
assert_eq!(
graph.insert_anchor(outpoint.txid, unconf_anchor),
Additions {
txs: [].into(),
txouts: [].into(),
anchors: [(unconf_anchor, outpoint.txid)].into(),
last_seen: [].into()
}
);
// Mark their last-seen time.
assert_eq!(
graph.insert_seen_at(outpoint.txid, 1000000),
Additions {
txs: [].into(),
txouts: [].into(),
anchors: [].into(),
last_seen: [(outpoint.txid, 1000000)].into()
}
);
}
// Insert the full transaction
assert_eq!(
graph.insert_tx(update_txs.clone()),
Additions {
txs: [update_txs.clone()].into(),
..Default::default()
}
);
// Mark it as confirmed.
assert_eq!(
graph.insert_anchor(update_txs.txid(), conf_anchor),
Additions {
txs: [].into(),
txouts: [].into(),
anchors: [(conf_anchor, update_txs.txid())].into(),
last_seen: [].into()
}
);
graph
};
// Check the resulting additions.
let additions = graph.determine_additions(&update);
assert_eq!(
additions,
Additions {
txs: [update_txs.clone()].into(),
txouts: update_ops.into(),
anchors: [(conf_anchor, update_txs.txid()), (unconf_anchor, h!("tx2"))].into(),
last_seen: [(h!("tx2"), 1000000)].into()
}
);
// Apply the additions and check the new graph counts.
graph.apply_additions(additions);
assert_eq!(graph.all_txouts().count(), 4);
assert_eq!(graph.full_txs().count(), 1);
assert_eq!(graph.floating_txouts().count(), 3);
// Check TxOuts are fetched correctly from the graph.
assert_eq!(
graph.tx_outputs(h!("tx1")).expect("should exists"),
[
(
1u32,
&TxOut {
value: 10_000,
script_pubkey: Script::new(),
}
),
(
2u32,
&TxOut {
value: 20_000,
script_pubkey: Script::new(),
}
)
]
.into()
);
assert_eq!(
graph.tx_outputs(update_txs.txid()).expect("should exist"),
[(
0u32,
&TxOut {
value: 30_000,
script_pubkey: Script::new()
}
)]
.into()
);
}
#[test]
fn insert_tx_graph_doesnt_count_coinbase_as_spent() {
let tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: OutPoint::null(),
..Default::default()
}],
output: vec![],
};
let mut graph = TxGraph::<()>::default();
let _ = graph.insert_tx(tx);
assert!(graph.outspends(OutPoint::null()).is_empty());
assert!(graph.tx_spends(Txid::all_zeros()).next().is_none());
}
#[test]
fn insert_tx_graph_keeps_track_of_spend() {
let tx1 = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut::default()],
};
let op = OutPoint {
txid: tx1.txid(),
vout: 0,
};
let tx2 = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: op,
..Default::default()
}],
output: vec![],
};
let mut graph1 = TxGraph::<()>::default();
let mut graph2 = TxGraph::<()>::default();
// insert in different order
let _ = graph1.insert_tx(tx1.clone());
let _ = graph1.insert_tx(tx2.clone());
let _ = graph2.insert_tx(tx2.clone());
let _ = graph2.insert_tx(tx1);
assert_eq!(
graph1.outspends(op),
&iter::once(tx2.txid()).collect::<HashSet<_>>()
);
assert_eq!(graph2.outspends(op), graph1.outspends(op));
}
#[test]
fn insert_tx_can_retrieve_full_tx_from_graph() {
let tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: OutPoint::null(),
..Default::default()
}],
output: vec![TxOut::default()],
};
let mut graph = TxGraph::<()>::default();
let _ = graph.insert_tx(tx.clone());
assert_eq!(graph.get_tx(tx.txid()), Some(&tx));
}
#[test]
fn insert_tx_displaces_txouts() {
let mut tx_graph = TxGraph::<()>::default();
let tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 42_000,
script_pubkey: Script::default(),
}],
};
let _ = tx_graph.insert_txout(
OutPoint {
txid: tx.txid(),
vout: 0,
},
TxOut {
value: 1_337_000,
script_pubkey: Script::default(),
},
);
let _ = tx_graph.insert_txout(
OutPoint {
txid: tx.txid(),
vout: 0,
},
TxOut {
value: 1_000_000_000,
script_pubkey: Script::default(),
},
);
let _additions = tx_graph.insert_tx(tx.clone());
assert_eq!(
tx_graph
.get_txout(OutPoint {
txid: tx.txid(),
vout: 0
})
.unwrap()
.value,
42_000
);
assert_eq!(
tx_graph.get_txout(OutPoint {
txid: tx.txid(),
vout: 1
}),
None
);
}
#[test]
fn insert_txout_does_not_displace_tx() {
let mut tx_graph = TxGraph::<()>::default();
let tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 42_000,
script_pubkey: Script::default(),
}],
};
let _additions = tx_graph.insert_tx(tx.clone());
let _ = tx_graph.insert_txout(
OutPoint {
txid: tx.txid(),
vout: 0,
},
TxOut {
value: 1_337_000,
script_pubkey: Script::default(),
},
);
let _ = tx_graph.insert_txout(
OutPoint {
txid: tx.txid(),
vout: 0,
},
TxOut {
value: 1_000_000_000,
script_pubkey: Script::default(),
},
);
assert_eq!(
tx_graph
.get_txout(OutPoint {
txid: tx.txid(),
vout: 0
})
.unwrap()
.value,
42_000
);
assert_eq!(
tx_graph.get_txout(OutPoint {
txid: tx.txid(),
vout: 1
}),
None
);
}
#[test]
fn test_calculate_fee() {
let mut graph = TxGraph::<()>::default();
let intx1 = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 100,
..Default::default()
}],
};
let intx2 = Transaction {
version: 0x02,
lock_time: PackedLockTime(0),
input: vec![],
output: vec![TxOut {
value: 200,
..Default::default()
}],
};
let intxout1 = (
OutPoint {
txid: h!("dangling output"),
vout: 0,
},
TxOut {
value: 300,
..Default::default()
},
);
let _ = graph.insert_tx(intx1.clone());
let _ = graph.insert_tx(intx2.clone());
let _ = graph.insert_txout(intxout1.0, intxout1.1);
let mut tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![
TxIn {
previous_output: OutPoint {
txid: intx1.txid(),
vout: 0,
},
..Default::default()
},
TxIn {
previous_output: OutPoint {
txid: intx2.txid(),
vout: 0,
},
..Default::default()
},
TxIn {
previous_output: intxout1.0,
..Default::default()
},
],
output: vec![TxOut {
value: 500,
..Default::default()
}],
};
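// The known inputs total 100 + 200 + 300 = 600 sats against a 500 sat output, so the fee is 100.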
assert_eq!(graph.calculate_fee(&tx), Some(100));
tx.input.remove(2);
// with the 300 sat input removed, the remaining inputs (300) are less than the output (500),
// so the fee is negative
assert_eq!(graph.calculate_fee(&tx), Some(-200));
// If we have an unknown outpoint, fee should return None.
tx.input.push(TxIn {
previous_output: OutPoint {
txid: h!("unknown_txid"),
vout: 0,
},
..Default::default()
});
assert_eq!(graph.calculate_fee(&tx), None);
}
#[test]
fn test_calculate_fee_on_coinbase() {
let tx = Transaction {
version: 0x01,
lock_time: PackedLockTime(0),
input: vec![TxIn {
previous_output: OutPoint::null(),
..Default::default()
}],
output: vec![TxOut::default()],
};
let graph = TxGraph::<()>::default();
assert_eq!(graph.calculate_fee(&tx), Some(0));
}
#[test]
fn test_conflicting_descendants() {
let previous_output = OutPoint::new(h!("op"), 2);
// tx_a spends previous_output
let tx_a = Transaction {
input: vec![TxIn {
previous_output,
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(0)
};
// tx_a2 spends previous_output and conflicts with tx_a
let tx_a2 = Transaction {
input: vec![TxIn {
previous_output,
..TxIn::default()
}],
output: vec![TxOut::default(), TxOut::default()],
..common::new_tx(1)
};
// tx_b spends tx_a
let tx_b = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_a.txid(), 0),
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(2)
};
let txid_a = tx_a.txid();
let txid_b = tx_b.txid();
let mut graph = TxGraph::<()>::default();
let _ = graph.insert_tx(tx_a);
let _ = graph.insert_tx(tx_b);
assert_eq!(
graph
.walk_conflicts(&tx_a2, |depth, txid| Some((depth, txid)))
.collect::<Vec<_>>(),
vec![(0_usize, txid_a), (1_usize, txid_b),],
);
}
#[test]
fn test_descendants_no_repeat() {
let tx_a = Transaction {
output: vec![TxOut::default(), TxOut::default(), TxOut::default()],
..common::new_tx(0)
};
let txs_b = (0..3)
.map(|vout| Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_a.txid(), vout),
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(1)
})
.collect::<Vec<_>>();
let txs_c = (0..2)
.map(|vout| Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(txs_b[vout as usize].txid(), vout),
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(2)
})
.collect::<Vec<_>>();
let tx_d = Transaction {
input: vec![
TxIn {
previous_output: OutPoint::new(txs_c[0].txid(), 0),
..TxIn::default()
},
TxIn {
previous_output: OutPoint::new(txs_c[1].txid(), 0),
..TxIn::default()
},
],
output: vec![TxOut::default()],
..common::new_tx(3)
};
let tx_e = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_d.txid(), 0),
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(4)
};
let txs_not_connected = (10..20)
.map(|v| Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(h!("tx_does_not_exist"), v),
..TxIn::default()
}],
output: vec![TxOut::default()],
..common::new_tx(v)
})
.collect::<Vec<_>>();
let mut graph = TxGraph::<()>::default();
let mut expected_txids = BTreeSet::new();
// these are NOT descendants of `tx_a`
for tx in txs_not_connected {
let _ = graph.insert_tx(tx.clone());
}
// these are the expected descendants of `tx_a`
for tx in txs_b
.iter()
.chain(&txs_c)
.chain(core::iter::once(&tx_d))
.chain(core::iter::once(&tx_e))
{
let _ = graph.insert_tx(tx.clone());
assert!(expected_txids.insert(tx.txid()));
}
let descendants = graph
.walk_descendants(tx_a.txid(), |_, txid| Some(txid))
.collect::<Vec<_>>();
assert_eq!(descendants.len(), expected_txids.len());
for txid in descendants {
assert!(expected_txids.remove(&txid));
}
assert!(expected_txids.is_empty());
}
#[test]
fn test_chain_spends() {
let local_chain: LocalChain = (0..=100)
.map(|ht| (ht, BlockHash::hash(format!("Block Hash {}", ht).as_bytes())))
.collect::<BTreeMap<u32, BlockHash>>()
.into();
let tip = local_chain.tip().expect("must have tip");
// The parent tx contains 2 outputs, which are spent by one confirmed and one unconfirmed tx.
// The parent tx is confirmed at block 95.
let tx_0 = Transaction {
input: vec![],
output: vec![
TxOut {
value: 10_000,
script_pubkey: Script::new(),
},
TxOut {
value: 20_000,
script_pubkey: Script::new(),
},
],
..common::new_tx(0)
};
// The first spending transaction spends vout 0 and is confirmed at block 98.
let tx_1 = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_0.txid(), 0),
..TxIn::default()
}],
output: vec![
TxOut {
value: 5_000,
script_pubkey: Script::new(),
},
TxOut {
value: 5_000,
script_pubkey: Script::new(),
},
],
..common::new_tx(0)
};
// The second spending transaction spends vout 1 and is unconfirmed.
let tx_2 = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_0.txid(), 1),
..TxIn::default()
}],
output: vec![
TxOut {
value: 10_000,
script_pubkey: Script::new(),
},
TxOut {
value: 10_000,
script_pubkey: Script::new(),
},
],
..common::new_tx(0)
};
let mut graph = TxGraph::<ConfirmationHeightAnchor>::default();
let _ = graph.insert_tx(tx_0.clone());
let _ = graph.insert_tx(tx_1.clone());
let _ = graph.insert_tx(tx_2.clone());
[95, 98]
.iter()
.zip([&tx_0, &tx_1].into_iter())
.for_each(|(ht, tx)| {
let _ = graph.insert_anchor(
tx.txid(),
ConfirmationHeightAnchor {
anchor_block: tip,
confirmation_height: *ht,
},
);
});
// Assert that confirmed spends are returned correctly.
assert_eq!(
graph.get_chain_spend(&local_chain, tip, OutPoint::new(tx_0.txid(), 0)),
Some((
ChainPosition::Confirmed(&ConfirmationHeightAnchor {
anchor_block: tip,
confirmation_height: 98
}),
tx_1.txid(),
)),
);
// Check if chain position is returned correctly.
assert_eq!(
graph.get_chain_position(&local_chain, tip, tx_0.txid()),
// Some(ObservedAs::Confirmed(&local_chain.get_block(95).expect("block expected"))),
Some(ChainPosition::Confirmed(&ConfirmationHeightAnchor {
anchor_block: tip,
confirmation_height: 95
}))
);
// Even if an unconfirmed tx has a last_seen of 0, it can still be part of a chain spend.
assert_eq!(
graph.get_chain_spend(&local_chain, tip, OutPoint::new(tx_0.txid(), 1)),
Some((ChainPosition::Unconfirmed(0), tx_2.txid())),
);
// Mark the unconfirmed tx as seen and check that the correct chain position is returned.
let _ = graph.insert_seen_at(tx_2.txid(), 1234567);
// Check chain spend returned correctly.
assert_eq!(
graph
.get_chain_spend(&local_chain, tip, OutPoint::new(tx_0.txid(), 1))
.unwrap(),
(ChainPosition::Unconfirmed(1234567), tx_2.txid())
);
// A conflicting transaction that conflicts with tx_1.
let tx_1_conflict = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_0.txid(), 0),
..Default::default()
}],
..common::new_tx(0)
};
let _ = graph.insert_tx(tx_1_conflict.clone());
// Because this tx conflicts with an already confirmed transaction, its chain position should be None.
assert!(graph
.get_chain_position(&local_chain, tip, tx_1_conflict.txid())
.is_none());
// Another conflicting tx that conflicts with tx_2.
let tx_2_conflict = Transaction {
input: vec![TxIn {
previous_output: OutPoint::new(tx_0.txid(), 1),
..Default::default()
}],
..common::new_tx(0)
};
// Insert in graph and mark it as seen.
let _ = graph.insert_tx(tx_2_conflict.clone());
let _ = graph.insert_seen_at(tx_2_conflict.txid(), 1234568);
// This should return a valid observation with correct last seen.
assert_eq!(
graph
.get_chain_position(&local_chain, tip, tx_2_conflict.txid())
.expect("position expected"),
ChainPosition::Unconfirmed(1234568)
);
// The chain spend now resolves to the new transaction.
assert_eq!(
graph
.get_chain_spend(&local_chain, tip, OutPoint::new(tx_0.txid(), 1))
.expect("expect observation"),
(ChainPosition::Unconfirmed(1234568), tx_2_conflict.txid())
);
// The chain position of `tx_2` is now None, as its last-seen is older than `tx_2_conflict`'s.
assert!(graph
.get_chain_position(&local_chain, tip, tx_2.txid())
.is_none());
}
/// Ensure that `last_seen` values only increase during [`Append::append`].
#[test]
fn test_additions_last_seen_append() {
let txid: Txid = h!("test txid");
let test_cases: &[(Option<u64>, Option<u64>)] = &[
(Some(5), Some(6)),
(Some(5), Some(5)),
(Some(6), Some(5)),
(None, Some(5)),
(Some(5), None),
];
for (original_ls, update_ls) in test_cases {
let mut original = Additions::<()> {
last_seen: original_ls.map(|ls| (txid, ls)).into_iter().collect(),
..Default::default()
};
let update = Additions::<()> {
last_seen: update_ls.map(|ls| (txid, ls)).into_iter().collect(),
..Default::default()
};
original.append(update);
assert_eq!(
&original.last_seen.get(&txid).cloned(),
Ord::max(original_ls, update_ls),
);
}
}


@@ -1,16 +0,0 @@
[package]
name = "bdk_electrum"
version = "0.3.0"
edition = "2021"
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk_electrum"
description = "Fetch data from electrum in the form BDK accepts"
license = "MIT OR Apache-2.0"
readme = "README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../chain", version = "0.5.0", features = ["serde", "miniscript"] }
electrum-client = { version = "0.12" }


@@ -1,3 +0,0 @@
# BDK Electrum
BDK Electrum client library for updating the keychain tracker.


@@ -1,486 +0,0 @@
use bdk_chain::{
bitcoin::{hashes::hex::FromHex, BlockHash, OutPoint, Script, Transaction, Txid},
keychain::LocalUpdate,
local_chain::LocalChain,
tx_graph::{self, TxGraph},
Anchor, BlockId, ConfirmationHeightAnchor, ConfirmationTimeAnchor,
};
use electrum_client::{Client, ElectrumApi, Error};
use std::{
collections::{BTreeMap, BTreeSet, HashMap, HashSet},
fmt::Debug,
};
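/// An update produced by scanning with electrum: transaction IDs mapped to their anchors, a
/// chain update, and the last active derivation index for each keychain.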
#[derive(Debug, Clone)]
pub struct ElectrumUpdate<K, A> {
pub graph_update: HashMap<Txid, BTreeSet<A>>,
pub chain_update: LocalChain,
pub keychain_update: BTreeMap<K, u32>,
}
impl<K, A> Default for ElectrumUpdate<K, A> {
fn default() -> Self {
Self {
graph_update: Default::default(),
chain_update: Default::default(),
keychain_update: Default::default(),
}
}
}
impl<K, A: Anchor> ElectrumUpdate<K, A> {
pub fn missing_full_txs<A2>(&self, graph: &TxGraph<A2>) -> Vec<Txid> {
self.graph_update
.keys()
.filter(move |&&txid| graph.as_ref().get_tx(txid).is_none())
.cloned()
.collect()
}
pub fn finalize(
self,
client: &Client,
seen_at: Option<u64>,
missing: Vec<Txid>,
) -> Result<LocalUpdate<K, A>, Error> {
let new_txs = client.batch_transaction_get(&missing)?;
let mut graph_update = TxGraph::<A>::new(new_txs);
for (txid, anchors) in self.graph_update {
if let Some(seen_at) = seen_at {
let _ = graph_update.insert_seen_at(txid, seen_at);
}
for anchor in anchors {
let _ = graph_update.insert_anchor(txid, anchor);
}
}
Ok(LocalUpdate {
keychain: self.keychain_update,
graph: graph_update,
chain: self.chain_update,
})
}
}
impl<K> ElectrumUpdate<K, ConfirmationHeightAnchor> {
/// Finalizes the [`ElectrumUpdate`] by fetching the `missing` transactions and producing anchors
/// of type [`ConfirmationTimeAnchor`].
///
/// **Note:** The confirmation time might not be precisely correct if there has been a reorg.
/// Electrum's API intends for us to use the merkle proof API; `bdk_electrum` should be changed
/// to use it.
pub fn finalize_as_confirmation_time(
self,
client: &Client,
seen_at: Option<u64>,
missing: Vec<Txid>,
) -> Result<LocalUpdate<K, ConfirmationTimeAnchor>, Error> {
let update = self.finalize(client, seen_at, missing)?;
let relevant_heights = {
let mut visited_heights = HashSet::new();
update
.graph
.all_anchors()
.iter()
.map(|(a, _)| a.confirmation_height_upper_bound())
.filter(move |&h| visited_heights.insert(h))
.collect::<Vec<_>>()
};
let height_to_time = relevant_heights
.clone()
.into_iter()
.zip(
client
.batch_block_header(relevant_heights)?
.into_iter()
.map(|bh| bh.time as u64),
)
.collect::<HashMap<u32, u64>>();
let graph_additions = {
let old_additions = TxGraph::default().determine_additions(&update.graph);
tx_graph::Additions {
txs: old_additions.txs,
txouts: old_additions.txouts,
last_seen: old_additions.last_seen,
anchors: old_additions
.anchors
.into_iter()
.map(|(height_anchor, txid)| {
let confirmation_height = height_anchor.confirmation_height;
let confirmation_time = height_to_time[&confirmation_height];
let time_anchor = ConfirmationTimeAnchor {
anchor_block: height_anchor.anchor_block,
confirmation_height,
confirmation_time,
};
(time_anchor, txid)
})
.collect(),
}
};
Ok(LocalUpdate {
keychain: update.keychain,
graph: {
let mut graph = TxGraph::default();
graph.apply_additions(graph_additions);
graph
},
chain: update.chain,
})
}
}
pub trait ElectrumExt<A> {
fn get_tip(&self) -> Result<(u32, BlockHash), Error>;
fn scan<K: Ord + Clone>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<K, impl IntoIterator<Item = (u32, Script)>>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
stop_gap: usize,
batch_size: usize,
) -> Result<ElectrumUpdate<K, A>, Error>;
fn scan_without_keychain(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
misc_spks: impl IntoIterator<Item = Script>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
batch_size: usize,
) -> Result<ElectrumUpdate<(), A>, Error> {
let spk_iter = misc_spks
.into_iter()
.enumerate()
.map(|(i, spk)| (i as u32, spk));
self.scan(
local_chain,
[((), spk_iter)].into(),
txids,
outpoints,
usize::MAX,
batch_size,
)
}
}
impl ElectrumExt<ConfirmationHeightAnchor> for Client {
fn get_tip(&self) -> Result<(u32, BlockHash), Error> {
// TODO: unsubscribe when added to the client, or is there a better call to use here?
self.block_headers_subscribe()
.map(|data| (data.height as u32, data.header.block_hash()))
}
fn scan<K: Ord + Clone>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<K, impl IntoIterator<Item = (u32, Script)>>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
stop_gap: usize,
batch_size: usize,
) -> Result<ElectrumUpdate<K, ConfirmationHeightAnchor>, Error> {
let mut request_spks = keychain_spks
.into_iter()
.map(|(k, s)| (k, s.into_iter()))
.collect::<BTreeMap<K, _>>();
let mut scanned_spks = BTreeMap::<(K, u32), (Script, bool)>::new();
let txids = txids.into_iter().collect::<Vec<_>>();
let outpoints = outpoints.into_iter().collect::<Vec<_>>();
let update = loop {
let mut update = ElectrumUpdate::<K, ConfirmationHeightAnchor> {
chain_update: prepare_chain_update(self, local_chain)?,
..Default::default()
};
let anchor_block = update
.chain_update
.tip()
.expect("must have atleast one block");
if !request_spks.is_empty() {
if !scanned_spks.is_empty() {
scanned_spks.append(&mut populate_with_spks(
self,
anchor_block,
&mut update,
&mut scanned_spks
.iter()
.map(|(i, (spk, _))| (i.clone(), spk.clone())),
stop_gap,
batch_size,
)?);
}
for (keychain, keychain_spks) in &mut request_spks {
scanned_spks.extend(
populate_with_spks(
self,
anchor_block,
&mut update,
keychain_spks,
stop_gap,
batch_size,
)?
.into_iter()
.map(|(spk_i, spk)| ((keychain.clone(), spk_i), spk)),
);
}
}
populate_with_txids(self, anchor_block, &mut update, &mut txids.iter().cloned())?;
let _txs = populate_with_outpoints(
self,
anchor_block,
&mut update,
&mut outpoints.iter().cloned(),
)?;
// check for reorgs during scan process
let server_blockhash = self
.block_header(anchor_block.height as usize)?
.block_hash();
if anchor_block.hash != server_blockhash {
continue; // reorg
}
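// For each keychain, record the highest-indexed script pubkey that has transaction history.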
update.keychain_update = request_spks
.into_keys()
.filter_map(|k| {
scanned_spks
.range((k.clone(), u32::MIN)..=(k.clone(), u32::MAX))
.rev()
.find(|(_, (_, active))| *active)
.map(|((_, i), _)| (k, *i))
})
.collect::<BTreeMap<_, _>>();
break update;
};
Ok(update)
}
}
/// Prepare an update "template" based on the checkpoints of the `local_chain`.
fn prepare_chain_update(
client: &Client,
local_chain: &BTreeMap<u32, BlockHash>,
) -> Result<LocalChain, Error> {
let mut update = LocalChain::default();
// Find the local chain block that is still there so our update can connect to the local chain.
for (&existing_height, &existing_hash) in local_chain.iter().rev() {
// TODO: a batch request may be safer, as a reorg that happens when we are obtaining
// `block_header`s will result in inconsistencies
let current_hash = client.block_header(existing_height as usize)?.block_hash();
let _ = update
.insert_block(BlockId {
height: existing_height,
hash: current_hash,
})
.expect("This never errors because we are working with a fresh chain");
if current_hash == existing_hash {
break;
}
}
// Insert the new tip so new transactions will be accepted into the update chain.
let tip = {
let (height, hash) = crate::get_tip(client)?;
BlockId { height, hash }
};
if update.insert_block(tip).is_err() {
// There has been a re-org before we even begin scanning addresses.
// Just recursively call (this should never happen).
return prepare_chain_update(client, local_chain);
}
Ok(update)
}
fn determine_tx_anchor(
anchor_block: BlockId,
raw_height: i32,
txid: Txid,
) -> Option<ConfirmationHeightAnchor> {
// The electrum API has a weird quirk where an unconfirmed transaction is presented with a
// height of 0. To avoid invalid representation in our data structures, we manually set
// transactions residing in the genesis block to have height 0, then interpret a height of 0 as
// unconfirmed for all other transactions.
if txid
== Txid::from_hex("4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b")
.expect("must deserialize genesis coinbase txid")
{
return Some(ConfirmationHeightAnchor {
anchor_block,
confirmation_height: 0,
});
}
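// Electrum reports height 0 for mempool transactions and -1 for mempool transactions with
// unconfirmed parents, so any non-positive height is treated as unconfirmed here.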
match raw_height {
h if h <= 0 => {
debug_assert!(h == 0 || h == -1, "unexpected height ({}) from electrum", h);
None
}
h => {
let h = h as u32;
if h > anchor_block.height {
None
} else {
Some(ConfirmationHeightAnchor {
anchor_block,
confirmation_height: h,
})
}
}
}
}
fn populate_with_outpoints<K>(
client: &Client,
anchor_block: BlockId,
update: &mut ElectrumUpdate<K, ConfirmationHeightAnchor>,
outpoints: &mut impl Iterator<Item = OutPoint>,
) -> Result<HashMap<Txid, Transaction>, Error> {
let mut full_txs = HashMap::new();
for outpoint in outpoints {
let txid = outpoint.txid;
let tx = client.transaction_get(&txid)?;
debug_assert_eq!(tx.txid(), txid);
let txout = match tx.output.get(outpoint.vout as usize) {
Some(txout) => txout,
None => continue,
};
// attempt to find the following transactions (alongside their chain positions), and
// add them to our `update`:
let mut has_residing = false; // tx in which the outpoint resides
let mut has_spending = false; // tx that spends the outpoint
for res in client.script_get_history(&txout.script_pubkey)? {
if has_residing && has_spending {
break;
}
if res.tx_hash == txid {
if has_residing {
continue;
}
has_residing = true;
full_txs.insert(res.tx_hash, tx.clone());
} else {
if has_spending {
continue;
}
let res_tx = match full_txs.get(&res.tx_hash) {
Some(tx) => tx,
None => {
let res_tx = client.transaction_get(&res.tx_hash)?;
full_txs.insert(res.tx_hash, res_tx);
full_txs.get(&res.tx_hash).expect("just inserted")
}
};
has_spending = res_tx
.input
.iter()
.any(|txin| txin.previous_output == outpoint);
if !has_spending {
continue;
}
};
let anchor = determine_tx_anchor(anchor_block, res.height, res.tx_hash);
let tx_entry = update.graph_update.entry(res.tx_hash).or_default();
if let Some(anchor) = anchor {
tx_entry.insert(anchor);
}
}
}
Ok(full_txs)
}
fn populate_with_txids<K>(
client: &Client,
anchor_block: BlockId,
update: &mut ElectrumUpdate<K, ConfirmationHeightAnchor>,
txids: &mut impl Iterator<Item = Txid>,
) -> Result<(), Error> {
for txid in txids {
let tx = match client.transaction_get(&txid) {
Ok(tx) => tx,
Err(electrum_client::Error::Protocol(_)) => continue,
Err(other_err) => return Err(other_err),
};
let spk = tx
.output
.get(0)
.map(|txo| &txo.script_pubkey)
.expect("tx must have an output");
let anchor = match client
.script_get_history(spk)?
.into_iter()
.find(|r| r.tx_hash == txid)
{
Some(r) => determine_tx_anchor(anchor_block, r.height, txid),
None => continue,
};
let tx_entry = update.graph_update.entry(txid).or_default();
if let Some(anchor) = anchor {
tx_entry.insert(anchor);
}
}
Ok(())
}
fn populate_with_spks<K, I: Ord + Clone>(
client: &Client,
anchor_block: BlockId,
update: &mut ElectrumUpdate<K, ConfirmationHeightAnchor>,
spks: &mut impl Iterator<Item = (I, Script)>,
stop_gap: usize,
batch_size: usize,
) -> Result<BTreeMap<I, (Script, bool)>, Error> {
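// Count consecutive script pubkeys with no history; once this exceeds `stop_gap`, scanning stops.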
let mut unused_spk_count = 0_usize;
let mut scanned_spks = BTreeMap::new();
loop {
let spks = (0..batch_size)
.map_while(|_| spks.next())
.collect::<Vec<_>>();
if spks.is_empty() {
return Ok(scanned_spks);
}
let spk_histories = client.batch_script_get_history(spks.iter().map(|(_, s)| s))?;
for ((spk_index, spk), spk_history) in spks.into_iter().zip(spk_histories) {
if spk_history.is_empty() {
scanned_spks.insert(spk_index, (spk, false));
unused_spk_count += 1;
if unused_spk_count > stop_gap {
return Ok(scanned_spks);
}
continue;
} else {
scanned_spks.insert(spk_index, (spk, true));
unused_spk_count = 0;
}
for tx in spk_history {
let tx_entry = update.graph_update.entry(tx.tx_hash).or_default();
if let Some(anchor) = determine_tx_anchor(anchor_block, tx.height, tx.tx_hash) {
tx_entry.insert(anchor);
}
}
}
}
}


@@ -1,35 +0,0 @@
//! This crate is used for updating structures of the [`bdk_chain`] crate with data from electrum.
//!
//! The star of the show is the [`ElectrumExt::scan`] method, which scans for relevant blockchain
//! data (via electrum) and outputs an [`ElectrumUpdate`].
//!
//! An [`ElectrumUpdate`] only includes `txid`s and no full transactions. The caller is responsible
//! for obtaining the full transactions before applying the update. This can be done with
//! the following steps:
//!
//! 1. Determine which full transactions are missing. The method [`missing_full_txs`] of
//! [`ElectrumUpdate`] can be used.
//!
//! 2. Obtain the full transactions. To do this via electrum, the method
//! [`batch_transaction_get`] can be used.
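//!
//! A rough sketch of these two steps, assuming an existing transaction `graph`, an electrum
//! `client`, and an `update` returned by [`ElectrumExt::scan`]:
//!
//! ```ignore
//! // 1. Find the txids whose full transactions the graph does not have yet.
//! let missing = update.missing_full_txs(&graph);
//! // 2. Fetch the missing transactions and finalize the update.
//! let local_update = update.finalize(&client, None, missing)?;
//! ```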
//!
//! Refer to [`bdk_electrum_example`] for a complete example.
//!
//! [`missing_full_txs`]: ElectrumUpdate::missing_full_txs
//! [`batch_transaction_get`]: ElectrumApi::batch_transaction_get
//! [`bdk_electrum_example`]: https://github.com/LLFourn/bdk_core_staging/tree/master/bdk_electrum_example
use bdk_chain::bitcoin::BlockHash;
use electrum_client::{Client, ElectrumApi, Error};
mod electrum_ext;
pub use bdk_chain;
pub use electrum_client;
pub use electrum_ext::*;
fn get_tip(client: &Client) -> Result<(u32, BlockHash), Error> {
// TODO: unsubscribe when added to the client, or is there a better call to use here?
client
.block_headers_subscribe()
.map(|data| (data.height as u32, data.header.block_hash()))
}


@@ -1,29 +0,0 @@
[package]
name = "bdk_esplora"
version = "0.3.0"
edition = "2021"
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk_esplora"
description = "Fetch data from esplora in the form that accepts"
license = "MIT OR Apache-2.0"
readme = "README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../chain", version = "0.5.0", default-features = false, features = ["serde", "miniscript"] }
esplora-client = { version = "0.5", default-features = false }
async-trait = { version = "0.1.66", optional = true }
futures = { version = "0.3.26", optional = true }
# use these dependencies if you need to enable their no-std features
bitcoin = { version = "0.29", optional = true, default-features = false }
miniscript = { version = "9.0.0", optional = true, default-features = false }
[features]
default = ["std", "async-https", "blocking"]
std = ["bdk_chain/std"]
async = ["async-trait", "futures", "esplora-client/async"]
async-https = ["async", "esplora-client/async-https"]
blocking = ["esplora-client/blocking"]


@@ -1,33 +0,0 @@
# BDK Esplora
BDK Esplora extends [`esplora_client`](crate::esplora_client) to update [`bdk_chain`] structures
from an Esplora server.
## Usage
There are two versions of the extension trait (blocking and async).
For blocking-only:
```toml
bdk_esplora = { version = "0.3", features = ["blocking"] }
```
For async-only:
```toml
bdk_esplora = { version = "0.3", features = ["async"] }
```
For async-only (with https):
```toml
bdk_esplora = { version = "0.3", features = ["async-https"] }
```
To use the extension traits:
```rust
// for blocking
use bdk_esplora::EsploraExt;
// for async
// use bdk_esplora::EsploraAsyncExt;
```
For full examples, refer to [`example-crates/wallet_esplora`](https://github.com/bitcoindevkit/bdk/tree/master/example-crates/wallet_esplora) (blocking) and [`example-crates/wallet_esplora_async`](https://github.com/bitcoindevkit/bdk/tree/master/example-crates/wallet_esplora_async).


@@ -1,269 +0,0 @@
use async_trait::async_trait;
use bdk_chain::{
bitcoin::{BlockHash, OutPoint, Script, Txid},
collections::BTreeMap,
keychain::LocalUpdate,
BlockId, ConfirmationTimeAnchor,
};
use esplora_client::{Error, OutputStatus, TxStatus};
use futures::{stream::FuturesOrdered, TryStreamExt};
use crate::map_confirmation_time_anchor;
/// Trait to extend [`esplora_client::AsyncClient`] functionality.
///
/// This is the async version of [`EsploraExt`]. Refer to
/// [crate-level documentation] for more.
///
/// [`EsploraExt`]: crate::EsploraExt
/// [crate-level documentation]: crate
#[cfg_attr(target_arch = "wasm32", async_trait(?Send))]
#[cfg_attr(not(target_arch = "wasm32"), async_trait)]
pub trait EsploraAsyncExt {
/// Scan the blockchain (via esplora) for the data specified and return a
/// [`LocalUpdate<K, ConfirmationTimeAnchor>`].
///
/// - `local_chain`: the most recent block hashes present locally
/// - `keychain_spks`: keychains that we want to scan transactions for
/// - `txids`: transactions for which we want updated [`ConfirmationTimeAnchor`]s
/// - `outpoints`: transactions associated with these outpoints (residing, spending) that we
/// want included in the update
///
/// The scan for each keychain stops after a gap of `stop_gap` script pubkeys with no associated
/// transactions. `parallel_requests` specifies the max number of HTTP requests to make in
/// parallel.
#[allow(clippy::result_large_err)] // FIXME
async fn scan<K: Ord + Clone + Send>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<
K,
impl IntoIterator<IntoIter = impl Iterator<Item = (u32, Script)> + Send> + Send,
>,
txids: impl IntoIterator<IntoIter = impl Iterator<Item = Txid> + Send> + Send,
outpoints: impl IntoIterator<IntoIter = impl Iterator<Item = OutPoint> + Send> + Send,
stop_gap: usize,
parallel_requests: usize,
) -> Result<LocalUpdate<K, ConfirmationTimeAnchor>, Error>;
/// Convenience method to call [`scan`] without requiring a keychain.
///
/// [`scan`]: EsploraAsyncExt::scan
#[allow(clippy::result_large_err)] // FIXME
async fn scan_without_keychain(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
misc_spks: impl IntoIterator<IntoIter = impl Iterator<Item = Script> + Send> + Send,
txids: impl IntoIterator<IntoIter = impl Iterator<Item = Txid> + Send> + Send,
outpoints: impl IntoIterator<IntoIter = impl Iterator<Item = OutPoint> + Send> + Send,
parallel_requests: usize,
) -> Result<LocalUpdate<(), ConfirmationTimeAnchor>, Error> {
self.scan(
local_chain,
[(
(),
misc_spks
.into_iter()
.enumerate()
.map(|(i, spk)| (i as u32, spk)),
)]
.into(),
txids,
outpoints,
usize::MAX,
parallel_requests,
)
.await
}
}
#[cfg_attr(target_arch = "wasm32", async_trait(?Send))]
#[cfg_attr(not(target_arch = "wasm32"), async_trait)]
impl EsploraAsyncExt for esplora_client::AsyncClient {
#[allow(clippy::result_large_err)] // FIXME
async fn scan<K: Ord + Clone + Send>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<
K,
impl IntoIterator<IntoIter = impl Iterator<Item = (u32, Script)> + Send> + Send,
>,
txids: impl IntoIterator<IntoIter = impl Iterator<Item = Txid> + Send> + Send,
outpoints: impl IntoIterator<IntoIter = impl Iterator<Item = OutPoint> + Send> + Send,
stop_gap: usize,
parallel_requests: usize,
) -> Result<LocalUpdate<K, ConfirmationTimeAnchor>, Error> {
let parallel_requests = Ord::max(parallel_requests, 1);
let (mut update, tip_at_start) = loop {
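// Build an update chain that connects to a block still present in `local_chain`; if the
// server tip conflicts with a block we just inserted, start the whole attempt over.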
let mut update = LocalUpdate::<K, ConfirmationTimeAnchor>::default();
for (&height, &original_hash) in local_chain.iter().rev() {
let update_block_id = BlockId {
height,
hash: self.get_block_hash(height).await?,
};
let _ = update
.chain
.insert_block(update_block_id)
.expect("cannot repeat height here");
if update_block_id.hash == original_hash {
break;
}
}
let tip_at_start = BlockId {
height: self.get_height().await?,
hash: self.get_tip_hash().await?,
};
if update.chain.insert_block(tip_at_start).is_ok() {
break (update, tip_at_start);
}
};
for (keychain, spks) in keychain_spks {
let mut spks = spks.into_iter();
let mut last_active_index = None;
let mut empty_scripts = 0;
type IndexWithTxs = (u32, Vec<esplora_client::Tx>);
loop {
let futures = (0..parallel_requests)
.filter_map(|_| {
let (index, script) = spks.next()?;
let client = self.clone();
Some(async move {
let mut related_txs = client.scripthash_txs(&script, None).await?;
let n_confirmed =
related_txs.iter().filter(|tx| tx.status.confirmed).count();
// Esplora pages results at 25 confirmed transactions, so if we see 25 or more we
// keep requesting until a page comes back with fewer.
if n_confirmed >= 25 {
loop {
let new_related_txs = client
.scripthash_txs(
&script,
Some(related_txs.last().unwrap().txid),
)
.await?;
let n = new_related_txs.len();
related_txs.extend(new_related_txs);
// we've reached the end
if n < 25 {
break;
}
}
}
Result::<_, esplora_client::Error>::Ok((index, related_txs))
})
})
.collect::<FuturesOrdered<_>>();
let n_futures = futures.len();
for (index, related_txs) in futures.try_collect::<Vec<IndexWithTxs>>().await? {
if related_txs.is_empty() {
empty_scripts += 1;
} else {
last_active_index = Some(index);
empty_scripts = 0;
}
for tx in related_txs {
let anchor = map_confirmation_time_anchor(&tx.status, tip_at_start);
let _ = update.graph.insert_tx(tx.to_tx());
if let Some(anchor) = anchor {
let _ = update.graph.insert_anchor(tx.txid, anchor);
}
}
}
if n_futures == 0 || empty_scripts >= stop_gap {
break;
}
}
if let Some(last_active_index) = last_active_index {
update.keychain.insert(keychain, last_active_index);
}
}
for txid in txids.into_iter() {
if update.graph.get_tx(txid).is_none() {
match self.get_tx(&txid).await? {
Some(tx) => {
let _ = update.graph.insert_tx(tx);
}
None => continue,
}
}
match self.get_tx_status(&txid).await? {
tx_status if tx_status.confirmed => {
if let Some(anchor) = map_confirmation_time_anchor(&tx_status, tip_at_start) {
let _ = update.graph.insert_anchor(txid, anchor);
}
}
_ => continue,
}
}
for op in outpoints.into_iter() {
let mut op_txs = Vec::with_capacity(2);
if let (
Some(tx),
tx_status @ TxStatus {
confirmed: true, ..
},
) = (
self.get_tx(&op.txid).await?,
self.get_tx_status(&op.txid).await?,
) {
op_txs.push((tx, tx_status));
if let Some(OutputStatus {
txid: Some(txid),
status: Some(spend_status),
..
}) = self.get_output_status(&op.txid, op.vout as _).await?
{
if let Some(spend_tx) = self.get_tx(&txid).await? {
op_txs.push((spend_tx, spend_status));
}
}
}
for (tx, status) in op_txs {
let txid = tx.txid();
let anchor = map_confirmation_time_anchor(&status, tip_at_start);
let _ = update.graph.insert_tx(tx);
if let Some(anchor) = anchor {
let _ = update.graph.insert_anchor(txid, anchor);
}
}
}
if tip_at_start.hash != self.get_block_hash(tip_at_start.height).await? {
// A reorg occurred, so let's find out where all the txids we found are now in the chain
let txids_found = update
.graph
.full_txs()
.map(|tx_node| tx_node.txid)
.collect::<Vec<_>>();
update.chain = EsploraAsyncExt::scan_without_keychain(
self,
local_chain,
[],
txids_found,
[],
parallel_requests,
)
.await?
.chain;
}
Ok(update)
}
}


@@ -1,251 +0,0 @@
use bdk_chain::bitcoin::{BlockHash, OutPoint, Script, Txid};
use bdk_chain::collections::BTreeMap;
use bdk_chain::BlockId;
use bdk_chain::{keychain::LocalUpdate, ConfirmationTimeAnchor};
use esplora_client::{Error, OutputStatus, TxStatus};
use crate::map_confirmation_time_anchor;
/// Trait to extend [`esplora_client::BlockingClient`] functionality.
///
/// Refer to [crate-level documentation] for more.
///
/// [crate-level documentation]: crate
pub trait EsploraExt {
/// Scan the blockchain (via esplora) for the data specified and return a
/// [`LocalUpdate<K, ConfirmationTimeAnchor>`].
///
/// - `local_chain`: the most recent block hashes present locally
/// - `keychain_spks`: keychains that we want to scan transactions for
/// - `txids`: transactions for which we want updated [`ConfirmationTimeAnchor`]s
/// - `outpoints`: transactions associated with these outpoints (residing, spending) that we
/// want included in the update
///
/// The scan for each keychain stops after a gap of `stop_gap` script pubkeys with no associated
/// transactions. `parallel_requests` specifies the max number of HTTP requests to make in
/// parallel.
#[allow(clippy::result_large_err)] // FIXME
fn scan<K: Ord + Clone>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<K, impl IntoIterator<Item = (u32, Script)>>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
stop_gap: usize,
parallel_requests: usize,
) -> Result<LocalUpdate<K, ConfirmationTimeAnchor>, Error>;
/// Convenience method to call [`scan`] without requiring a keychain.
///
/// [`scan`]: EsploraExt::scan
#[allow(clippy::result_large_err)] // FIXME
fn scan_without_keychain(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
misc_spks: impl IntoIterator<Item = Script>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
parallel_requests: usize,
) -> Result<LocalUpdate<(), ConfirmationTimeAnchor>, Error> {
self.scan(
local_chain,
[(
(),
misc_spks
.into_iter()
.enumerate()
.map(|(i, spk)| (i as u32, spk)),
)]
.into(),
txids,
outpoints,
usize::MAX,
parallel_requests,
)
}
}
impl EsploraExt for esplora_client::BlockingClient {
fn scan<K: Ord + Clone>(
&self,
local_chain: &BTreeMap<u32, BlockHash>,
keychain_spks: BTreeMap<K, impl IntoIterator<Item = (u32, Script)>>,
txids: impl IntoIterator<Item = Txid>,
outpoints: impl IntoIterator<Item = OutPoint>,
stop_gap: usize,
parallel_requests: usize,
) -> Result<LocalUpdate<K, ConfirmationTimeAnchor>, Error> {
let parallel_requests = Ord::max(parallel_requests, 1);
let (mut update, tip_at_start) = loop {
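// Build an update chain that connects to a block still present in `local_chain`; if the
// server tip conflicts with a block we just inserted, start the whole attempt over.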
let mut update = LocalUpdate::<K, ConfirmationTimeAnchor>::default();
for (&height, &original_hash) in local_chain.iter().rev() {
let update_block_id = BlockId {
height,
hash: self.get_block_hash(height)?,
};
let _ = update
.chain
.insert_block(update_block_id)
.expect("cannot repeat height here");
if update_block_id.hash == original_hash {
break;
}
}
let tip_at_start = BlockId {
height: self.get_height()?,
hash: self.get_tip_hash()?,
};
if update.chain.insert_block(tip_at_start).is_ok() {
break (update, tip_at_start);
}
};
for (keychain, spks) in keychain_spks {
let mut spks = spks.into_iter();
let mut last_active_index = None;
let mut empty_scripts = 0;
type IndexWithTxs = (u32, Vec<esplora_client::Tx>);
loop {
let handles = (0..parallel_requests)
.filter_map(
|_| -> Option<std::thread::JoinHandle<Result<IndexWithTxs, _>>> {
let (index, script) = spks.next()?;
let client = self.clone();
Some(std::thread::spawn(move || {
let mut related_txs = client.scripthash_txs(&script, None)?;
let n_confirmed =
related_txs.iter().filter(|tx| tx.status.confirmed).count();
// Esplora pages results at 25 confirmed transactions, so if we see 25 or more we
// keep requesting until a page comes back with fewer.
if n_confirmed >= 25 {
loop {
let new_related_txs = client.scripthash_txs(
&script,
Some(related_txs.last().unwrap().txid),
)?;
let n = new_related_txs.len();
related_txs.extend(new_related_txs);
// we've reached the end
if n < 25 {
break;
}
}
}
Result::<_, esplora_client::Error>::Ok((index, related_txs))
}))
},
)
.collect::<Vec<_>>();
let n_handles = handles.len();
for handle in handles {
let (index, related_txs) = handle.join().unwrap()?; // TODO: don't unwrap
if related_txs.is_empty() {
empty_scripts += 1;
} else {
last_active_index = Some(index);
empty_scripts = 0;
}
for tx in related_txs {
let anchor = map_confirmation_time_anchor(&tx.status, tip_at_start);
let _ = update.graph.insert_tx(tx.to_tx());
if let Some(anchor) = anchor {
let _ = update.graph.insert_anchor(tx.txid, anchor);
}
}
}
if n_handles == 0 || empty_scripts >= stop_gap {
break;
}
}
if let Some(last_active_index) = last_active_index {
update.keychain.insert(keychain, last_active_index);
}
}
for txid in txids.into_iter() {
if update.graph.get_tx(txid).is_none() {
match self.get_tx(&txid)? {
Some(tx) => {
let _ = update.graph.insert_tx(tx);
}
None => continue,
}
}
match self.get_tx_status(&txid)? {
tx_status @ TxStatus {
confirmed: true, ..
} => {
if let Some(anchor) = map_confirmation_time_anchor(&tx_status, tip_at_start) {
let _ = update.graph.insert_anchor(txid, anchor);
}
}
_ => continue,
}
}
for op in outpoints.into_iter() {
let mut op_txs = Vec::with_capacity(2);
if let (
Some(tx),
tx_status @ TxStatus {
confirmed: true, ..
},
) = (self.get_tx(&op.txid)?, self.get_tx_status(&op.txid)?)
{
op_txs.push((tx, tx_status));
if let Some(OutputStatus {
txid: Some(txid),
status: Some(spend_status),
..
}) = self.get_output_status(&op.txid, op.vout as _)?
{
if let Some(spend_tx) = self.get_tx(&txid)? {
op_txs.push((spend_tx, spend_status));
}
}
}
for (tx, status) in op_txs {
let txid = tx.txid();
let anchor = map_confirmation_time_anchor(&status, tip_at_start);
let _ = update.graph.insert_tx(tx);
if let Some(anchor) = anchor {
let _ = update.graph.insert_anchor(txid, anchor);
}
}
}
if tip_at_start.hash != self.get_block_hash(tip_at_start.height)? {
// A reorg occurred, so let's find out where all the txids we found are now in the chain
let txids_found = update
.graph
.full_txs()
.map(|tx_node| tx_node.txid)
.collect::<Vec<_>>();
update.chain = EsploraExt::scan_without_keychain(
self,
local_chain,
[],
txids_found,
[],
parallel_requests,
)?
.chain;
}
Ok(update)
}
}


@@ -1,29 +0,0 @@
#![doc = include_str!("../README.md")]
use bdk_chain::{BlockId, ConfirmationTimeAnchor};
use esplora_client::TxStatus;
pub use esplora_client;
#[cfg(feature = "blocking")]
mod blocking_ext;
#[cfg(feature = "blocking")]
pub use blocking_ext::*;
#[cfg(feature = "async")]
mod async_ext;
#[cfg(feature = "async")]
pub use async_ext::*;
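/// Maps a transaction's [`TxStatus`] to a [`ConfirmationTimeAnchor`] anchored at `tip_at_start`,
/// or `None` if the transaction is unconfirmed.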
pub(crate) fn map_confirmation_time_anchor(
tx_status: &TxStatus,
tip_at_start: BlockId,
) -> Option<ConfirmationTimeAnchor> {
match (tx_status.block_time, tx_status.block_height) {
(Some(confirmation_time), Some(confirmation_height)) => Some(ConfirmationTimeAnchor {
anchor_block: tip_at_start,
confirmation_height,
confirmation_time,
}),
_ => None,
}
}


@@ -1,19 +0,0 @@
[package]
name = "bdk_file_store"
version = "0.2.0"
edition = "2021"
license = "MIT OR Apache-2.0"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk_file_store"
description = "A simple append-only flat file implementation of Persist for Bitcoin Dev Kit."
keywords = ["bitcoin", "persist", "persistence", "bdk", "file"]
authors = ["Bitcoin Dev Kit Developers"]
readme = "README.md"
[dependencies]
bdk_chain = { path = "../chain", version = "0.5.0", features = [ "serde", "miniscript" ] }
bincode = { version = "1" }
serde = { version = "1", features = ["derive"] }
[dev-dependencies]
tempfile = "3"


@@ -1,10 +0,0 @@
# BDK File Store
This is a simple append-only flat file implementation of
[`Persist`](`bdk_chain::Persist`).
The main structure is [`Store`](`crate::Store`), which can be used with [`bdk`]'s
`Wallet` to persist wallet data into a flat file.
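A minimal sketch of opening (or creating) a store and loading its aggregate changeset (the magic
bytes and the `MyChangeSet` type below are illustrative placeholders):
```rust,ignore
use bdk_file_store::Store;

const MAGIC: &[u8] = b"my_app_v0"; // hypothetical application magic bytes

// `MyChangeSet` must implement `Default`, `Append`, `Serialize`, and `DeserializeOwned`.
let mut store = Store::<MyChangeSet>::new_from_path(MAGIC, "wallet.db")?;
let (changeset, result) = store.aggregate_changesets();
result?; // surface any entry that failed to decode before writing again
```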
[`bdk`]: https://docs.rs/bdk/latest
[`bdk_chain`]: https://docs.rs/bdk_chain/latest


@@ -1,100 +0,0 @@
use bincode::Options;
use std::{
fs::File,
io::{self, Seek},
marker::PhantomData,
};
use crate::bincode_options;
/// Iterator over entries in a file store.
///
/// Reads and returns an entry each time [`next`] is called. If an error occurs while reading, the
/// iterator will yield a `Result::Err(_)` instead, and then `None` for every subsequent call.
///
/// [`next`]: Self::next
pub struct EntryIter<'t, T> {
db_file: Option<&'t mut File>,
/// The file position for the first read of `db_file`.
start_pos: Option<u64>,
types: PhantomData<T>,
}
impl<'t, T> EntryIter<'t, T> {
pub fn new(start_pos: u64, db_file: &'t mut File) -> Self {
Self {
db_file: Some(db_file),
start_pos: Some(start_pos),
types: PhantomData,
}
}
}
impl<'t, T> Iterator for EntryIter<'t, T>
where
T: serde::de::DeserializeOwned,
{
type Item = Result<T, IterError>;
fn next(&mut self) -> Option<Self::Item> {
// closure that reads a single entry, starting from `start_pos` when set, else from the current position
let read_one = |f: &mut File, start_pos: Option<u64>| -> Result<Option<T>, IterError> {
let pos = match start_pos {
Some(pos) => f.seek(io::SeekFrom::Start(pos))?,
None => f.stream_position()?,
};
match bincode_options().deserialize_from(&*f) {
Ok(changeset) => {
f.stream_position()?;
Ok(Some(changeset))
}
Err(e) => {
if let bincode::ErrorKind::Io(inner) = &*e {
if inner.kind() == io::ErrorKind::UnexpectedEof {
let eof = f.seek(io::SeekFrom::End(0))?;
if pos == eof {
return Ok(None);
}
}
}
f.seek(io::SeekFrom::Start(pos))?;
Err(IterError::Bincode(*e))
}
}
};
let result = read_one(self.db_file.as_mut()?, self.start_pos.take());
if result.is_err() {
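// Drop the file handle so every subsequent call to `next` yields `None`.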
self.db_file = None;
}
result.transpose()
}
}
impl From<io::Error> for IterError {
fn from(value: io::Error) -> Self {
IterError::Io(value)
}
}
/// Error type for [`EntryIter`].
#[derive(Debug)]
pub enum IterError {
/// Failure to read from the file.
Io(io::Error),
/// Failure to decode data from the file.
Bincode(bincode::ErrorKind),
}
impl core::fmt::Display for IterError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
IterError::Io(e) => write!(f, "io error trying to read entry: {}", e),
IterError::Bincode(e) => write!(f, "bincode error while reading entry: {}", e),
}
}
}
impl std::error::Error for IterError {}


@@ -1,42 +0,0 @@
#![doc = include_str!("../README.md")]
mod entry_iter;
mod store;
use std::io;
use bincode::{DefaultOptions, Options};
pub use entry_iter::*;
pub use store::*;
pub(crate) fn bincode_options() -> impl bincode::Options {
DefaultOptions::new().with_varint_encoding()
}
/// Error that occurs due to problems encountered with the file.
#[derive(Debug)]
pub enum FileError<'a> {
/// IO error; this may mean that the file is too short.
Io(io::Error),
/// Magic bytes do not match what is expected.
InvalidMagicBytes { got: Vec<u8>, expected: &'a [u8] },
}
impl<'a> core::fmt::Display for FileError<'a> {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
Self::Io(e) => write!(f, "io error trying to read file: {}", e),
Self::InvalidMagicBytes { got, expected } => write!(
f,
"file has invalid magic bytes: expected={:?} got={:?}",
expected, got,
),
}
}
}
impl<'a> From<io::Error> for FileError<'a> {
fn from(value: io::Error) -> Self {
Self::Io(value)
}
}
impl<'a> std::error::Error for FileError<'a> {}


@@ -1,255 +0,0 @@
use std::{
fmt::Debug,
fs::{File, OpenOptions},
io::{self, Read, Seek, Write},
marker::PhantomData,
path::Path,
};
use bdk_chain::{Append, PersistBackend};
use bincode::Options;
use crate::{bincode_options, EntryIter, FileError, IterError};
/// Persists an append-only list of changesets (`C`) to a single file.
///
/// The changesets are the results of making changes to a tracker implementation.
#[derive(Debug)]
pub struct Store<'a, C> {
magic: &'a [u8],
db_file: File,
marker: PhantomData<C>,
}
impl<'a, C> PersistBackend<C> for Store<'a, C>
where
C: Default + Append + serde::Serialize + serde::de::DeserializeOwned,
{
type WriteError = std::io::Error;
type LoadError = IterError;
fn write_changes(&mut self, changeset: &C) -> Result<(), Self::WriteError> {
self.append_changeset(changeset)
}
fn load_from_persistence(&mut self) -> Result<C, Self::LoadError> {
let (changeset, result) = self.aggregate_changesets();
result.map(|_| changeset)
}
}
impl<'a, C> Store<'a, C>
where
C: Default + Append + serde::Serialize + serde::de::DeserializeOwned,
{
/// Creates a new store from a [`File`].
///
/// The file must have been opened with read and write permissions.
///
/// `magic` is the expected prefix bytes of the file. If these do not match, an error will be
/// returned.
///
/// [`File`]: std::fs::File
pub fn new(magic: &'a [u8], mut db_file: File) -> Result<Self, FileError> {
db_file.rewind()?;
let mut magic_buf = vec![0_u8; magic.len()];
db_file.read_exact(magic_buf.as_mut())?;
if magic_buf != magic {
return Err(FileError::InvalidMagicBytes {
got: magic_buf,
expected: magic,
});
}
Ok(Self {
magic,
db_file,
marker: Default::default(),
})
}
/// Creates or loads a store from `db_path`.
///
/// If no file exists there, it will be created.
///
/// Refer to [`new`] for documentation on the `magic` input.
///
/// [`new`]: Self::new
pub fn new_from_path<P>(magic: &'a [u8], db_path: P) -> Result<Self, FileError>
where
P: AsRef<Path>,
{
let already_exists = db_path.as_ref().exists();
let mut db_file = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(db_path)?;
if !already_exists {
db_file.write_all(magic)?;
}
Self::new(magic, db_file)
}
/// Iterates over the stored changesets from first to last, changing the seek position at each
/// iteration.
///
/// The iterator may fail to read an entry and therefore return an error. However, the first
/// error it returns will also be the last; after that, the iterator will always yield `None`.
///
/// **WARNING**: This method changes the write position in the underlying file. You should
/// always iterate over all entries until `None` is returned if you want your next write to go
/// at the end; otherwise, you will write over existing entries.
pub fn iter_changesets(&mut self) -> EntryIter<C> {
EntryIter::new(self.magic.len() as u64, &mut self.db_file)
}
/// Loads all the changesets that have been stored as one giant changeset.
///
/// This function returns a tuple of the aggregate changeset and a result that indicates
/// whether an error occurred while reading or deserializing one of the entries. If so, the
/// changeset will consist of all of those it was able to read.
///
/// You should usually check the error. In many applications, it may make sense to do a full
/// wallet scan with a stop-gap after getting an error, since it is likely that one of the
/// changesets it was unable to read changed the derivation indices of the tracker.
///
/// **WARNING**: This method changes the write position of the underlying file. The next
/// changeset will be written over the erroring entry (or the end of the file if none existed).
pub fn aggregate_changesets(&mut self) -> (C, Result<(), IterError>) {
let mut changeset = C::default();
let result = (|| {
for next_changeset in self.iter_changesets() {
changeset.append(next_changeset?);
}
Ok(())
})();
(changeset, result)
}
/// Append a new changeset to the file and truncate the file to the end of the appended
/// changeset.
///
/// The truncation is to avoid the possibility of having a valid but inconsistent changeset
/// directly after the appended changeset.
pub fn append_changeset(&mut self, changeset: &C) -> Result<(), io::Error> {
// no need to write anything if changeset is empty
if changeset.is_empty() {
return Ok(());
}
bincode_options()
.serialize_into(&mut self.db_file, changeset)
.map_err(|e| match *e {
bincode::ErrorKind::Io(inner) => inner,
unexpected_err => panic!("unexpected bincode error: {}", unexpected_err),
})?;
// truncate file after this changeset addition
// if this is not done, data after this changeset may represent valid changesets, however
// applying those changesets on top of this one may result in an inconsistent state
let pos = self.db_file.stream_position()?;
self.db_file.set_len(pos)?;
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
use bincode::DefaultOptions;
use std::{
io::{Read, Write},
vec::Vec,
};
use tempfile::NamedTempFile;
const TEST_MAGIC_BYTES_LEN: usize = 12;
const TEST_MAGIC_BYTES: [u8; TEST_MAGIC_BYTES_LEN] =
[98, 100, 107, 102, 115, 49, 49, 49, 49, 49, 49, 49];
type TestChangeSet = Vec<String>;
#[derive(Debug)]
struct TestTracker;
#[test]
fn new_fails_if_file_is_too_short() {
let mut file = NamedTempFile::new().unwrap();
file.write_all(&TEST_MAGIC_BYTES[..TEST_MAGIC_BYTES_LEN - 1])
.expect("should write");
match Store::<TestChangeSet>::new(&TEST_MAGIC_BYTES, file.reopen().unwrap()) {
Err(FileError::Io(e)) => assert_eq!(e.kind(), std::io::ErrorKind::UnexpectedEof),
unexpected => panic!("unexpected result: {:?}", unexpected),
};
}
#[test]
fn new_fails_if_magic_bytes_are_invalid() {
let invalid_magic_bytes = "ldkfs0000000";
let mut file = NamedTempFile::new().unwrap();
file.write_all(invalid_magic_bytes.as_bytes())
.expect("should write");
match Store::<TestChangeSet>::new(&TEST_MAGIC_BYTES, file.reopen().unwrap()) {
Err(FileError::InvalidMagicBytes { got, .. }) => {
assert_eq!(got, invalid_magic_bytes.as_bytes())
}
unexpected => panic!("unexpected result: {:?}", unexpected),
};
}
#[test]
fn append_changeset_truncates_invalid_bytes() {
// initial data to write to file (magic bytes + invalid data)
let mut data = [255_u8; 2000];
data[..TEST_MAGIC_BYTES_LEN].copy_from_slice(&TEST_MAGIC_BYTES);
let changeset = vec!["one".into(), "two".into(), "three!".into()];
let mut file = NamedTempFile::new().unwrap();
file.write_all(&data).expect("should write");
let mut store = Store::<TestChangeSet>::new(&TEST_MAGIC_BYTES, file.reopen().unwrap())
.expect("should open");
match store.iter_changesets().next() {
Some(Err(IterError::Bincode(_))) => {}
unexpected_res => panic!("unexpected result: {:?}", unexpected_res),
}
store.append_changeset(&changeset).expect("should append");
drop(store);
let got_bytes = {
let mut buf = Vec::new();
file.reopen()
.unwrap()
.read_to_end(&mut buf)
.expect("should read");
buf
};
let expected_bytes = {
let mut buf = TEST_MAGIC_BYTES.to_vec();
DefaultOptions::new()
.with_varint_encoding()
.serialize_into(&mut buf, &changeset)
.expect("should encode");
buf
};
assert_eq!(got_bytes, expected_bytes);
}
}
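Taken together, these methods imply a simple lifecycle for a persistence backend: open (or create) the store, recover the aggregate of everything previously persisted, then append new changesets as they are produced. A minimal sketch of that flow — the helper below is ours, not part of the crate, and its bounds mirror those `example_cli` (further below) requires of its changeset type:

fn open_recover_append<C>(magic: &[u8], path: &str, new: &C) -> anyhow::Result<C>
where
    C: Default + bdk_chain::Append + serde::Serialize + serde::de::DeserializeOwned,
{
    // open (or create) the store; map the error because `FileError` borrows `magic`
    let mut store = Store::<C>::new_from_path(magic, path)
        .map_err(|e| anyhow::anyhow!("failed to open store: {:?}", e))?;
    // recover everything written so far as one aggregate changeset
    let (aggregate, res) = store.aggregate_changesets();
    if let Err(e) = res {
        // an entry could not be read; per the docs above, consider a full rescan
        eprintln!("changesets only partially recovered: {:?}", e);
    }
    // persist the new changeset; the file is truncated to end right after it
    store.append_changeset(new)?;
    Ok(aggregate)
}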

View File

@@ -1 +0,0 @@

View File

@@ -1,17 +0,0 @@
[package]
name = "example_cli"
version = "0.2.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../../crates/chain", features = ["serde", "miniscript"]}
bdk_file_store = { path = "../../crates/file_store" }
bdk_tmp_plan = { path = "../../nursery/tmp_plan" }
bdk_coin_select = { path = "../../nursery/coin_select" }
clap = { version = "3.2.23", features = ["derive", "env"] }
anyhow = "1"
serde = { version = "1", features = ["derive"] }
serde_json = { version = "^1.0" }

View File

@@ -1,736 +0,0 @@
pub use anyhow;
use anyhow::Context;
use bdk_coin_select::{coin_select_bnb, CoinSelector, CoinSelectorOpt, WeightedValue};
use bdk_file_store::Store;
use serde::{de::DeserializeOwned, Serialize};
use std::{cmp::Reverse, collections::HashMap, path::PathBuf, sync::Mutex, time::Duration};
use bdk_chain::{
bitcoin::{
psbt::Prevouts, secp256k1::Secp256k1, util::sighash::SighashCache, Address, LockTime,
Network, Sequence, Transaction, TxIn, TxOut,
},
indexed_tx_graph::{IndexedAdditions, IndexedTxGraph},
keychain::{DerivationAdditions, KeychainTxOutIndex},
miniscript::{
descriptor::{DescriptorSecretKey, KeyMap},
Descriptor, DescriptorPublicKey,
},
Anchor, Append, ChainOracle, DescriptorExt, FullTxOut, Persist, PersistBackend,
};
pub use bdk_file_store;
pub use clap;
use clap::{Parser, Subcommand};
pub type KeychainTxGraph<A> = IndexedTxGraph<A, KeychainTxOutIndex<Keychain>>;
pub type KeychainAdditions<A> = IndexedAdditions<A, DerivationAdditions<Keychain>>;
pub type Database<'m, C> = Persist<Store<'m, C>, C>;
#[derive(Parser)]
#[clap(author, version, about, long_about = None)]
#[clap(propagate_version = true)]
pub struct Args<S: clap::Subcommand> {
#[clap(env = "DESCRIPTOR")]
pub descriptor: String,
#[clap(env = "CHANGE_DESCRIPTOR")]
pub change_descriptor: Option<String>,
#[clap(env = "BITCOIN_NETWORK", long, default_value = "signet")]
pub network: Network,
#[clap(env = "BDK_DB_PATH", long, default_value = ".bdk_example_db")]
pub db_path: PathBuf,
#[clap(env = "BDK_CP_LIMIT", long, default_value = "20")]
pub cp_limit: usize,
#[clap(subcommand)]
pub command: Commands<S>,
}
#[allow(clippy::almost_swapped)]
#[derive(Subcommand, Debug, Clone)]
pub enum Commands<S: clap::Subcommand> {
#[clap(flatten)]
ChainSpecific(S),
/// Address generation and inspection.
Address {
#[clap(subcommand)]
addr_cmd: AddressCmd,
},
/// Get the wallet balance.
Balance,
/// TxOut related commands.
#[clap(name = "txout")]
TxOut {
#[clap(subcommand)]
txout_cmd: TxOutCmd,
},
/// Send coins to an address.
Send {
value: u64,
address: Address,
#[clap(short, default_value = "bnb")]
coin_select: CoinSelectionAlgo,
},
}
#[derive(Clone, Debug)]
pub enum CoinSelectionAlgo {
LargestFirst,
SmallestFirst,
OldestFirst,
NewestFirst,
BranchAndBound,
}
impl Default for CoinSelectionAlgo {
fn default() -> Self {
Self::LargestFirst
}
}
impl core::str::FromStr for CoinSelectionAlgo {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
use CoinSelectionAlgo::*;
Ok(match s {
"largest-first" => LargestFirst,
"smallest-first" => SmallestFirst,
"oldest-first" => OldestFirst,
"newest-first" => NewestFirst,
"bnb" => BranchAndBound,
unknown => {
return Err(anyhow::anyhow!(
"unknown coin selection algorithm '{}'",
unknown
))
}
})
}
}
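Since clap 3's derive falls back to `FromStr` for typed fields, the `coin_select` argument on `Send` above parses through this impl; the same conversion can be exercised directly (a tiny illustrative check, not part of the crate):

fn _demo_parse_algo() {
    // parse exactly as clap would for `-c oldest-first`
    let algo: CoinSelectionAlgo = "oldest-first".parse().expect("known algorithm");
    assert!(matches!(algo, CoinSelectionAlgo::OldestFirst));
}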
impl core::fmt::Display for CoinSelectionAlgo {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
use CoinSelectionAlgo::*;
write!(
f,
"{}",
match self {
LargestFirst => "largest-first",
SmallestFirst => "smallest-first",
OldestFirst => "oldest-first",
NewestFirst => "newest-first",
BranchAndBound => "bnb",
}
)
}
}
#[allow(clippy::almost_swapped)]
#[derive(Subcommand, Debug, Clone)]
pub enum AddressCmd {
/// Get the next unused address.
Next,
/// Get a new address regardless of the existing unused addresses.
New,
/// List all addresses.
List {
#[clap(long)]
change: bool,
},
Index,
}
#[derive(Subcommand, Debug, Clone)]
pub enum TxOutCmd {
List {
/// Return only spent outputs.
#[clap(short, long)]
spent: bool,
/// Return only unspent outputs.
#[clap(short, long)]
unspent: bool,
/// Return only confirmed outputs.
#[clap(long)]
confirmed: bool,
/// Return only unconfirmed outputs.
#[clap(long)]
unconfirmed: bool,
},
}
#[derive(
Debug, Clone, Copy, PartialOrd, Ord, PartialEq, Eq, serde::Deserialize, serde::Serialize,
)]
pub enum Keychain {
External,
Internal,
}
impl core::fmt::Display for Keychain {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Keychain::External => write!(f, "external"),
Keychain::Internal => write!(f, "internal"),
}
}
}
pub fn run_address_cmd<A, C>(
graph: &mut KeychainTxGraph<A>,
db: &Mutex<Database<C>>,
network: Network,
cmd: AddressCmd,
) -> anyhow::Result<()>
where
C: Default + Append + DeserializeOwned + Serialize + From<KeychainAdditions<A>>,
{
let index = &mut graph.index;
match cmd {
AddressCmd::Next | AddressCmd::New => {
let spk_chooser = match cmd {
AddressCmd::Next => KeychainTxOutIndex::next_unused_spk,
AddressCmd::New => KeychainTxOutIndex::reveal_next_spk,
_ => unreachable!("only these two variants exist in match arm"),
};
let ((spk_i, spk), index_additions) = spk_chooser(index, &Keychain::External);
let db = &mut *db.lock().unwrap();
db.stage(C::from(KeychainAdditions::from(index_additions)));
db.commit()?;
let addr = Address::from_script(spk, network).context("failed to derive address")?;
println!("[address @ {}] {}", spk_i, addr);
Ok(())
}
AddressCmd::Index => {
for (keychain, derivation_index) in index.last_revealed_indices() {
println!("{:?}: {}", keychain, derivation_index);
}
Ok(())
}
AddressCmd::List { change } => {
let target_keychain = match change {
true => Keychain::Internal,
false => Keychain::External,
};
for (spk_i, spk) in index.revealed_spks_of_keychain(&target_keychain) {
let address = Address::from_script(spk, network)
.expect("should always be able to derive address");
println!(
"{:?} {} used:{}",
spk_i,
address,
index.is_used(&(target_keychain, spk_i))
);
}
Ok(())
}
}
}
pub fn run_balance_cmd<A: Anchor, O: ChainOracle>(
graph: &KeychainTxGraph<A>,
chain: &O,
) -> Result<(), O::Error> {
fn print_balances<'a>(title_str: &'a str, items: impl IntoIterator<Item = (&'a str, u64)>) {
println!("{}:", title_str);
for (name, amount) in items.into_iter() {
println!(" {:<10} {:>12} sats", name, amount)
}
}
let balance = graph.graph().try_balance(
chain,
chain.get_chain_tip()?.unwrap_or_default(),
graph.index.outpoints().iter().cloned(),
|(k, _), _| k == &Keychain::Internal,
)?;
let confirmed_total = balance.confirmed + balance.immature;
let unconfirmed_total = balance.untrusted_pending + balance.trusted_pending;
print_balances(
"confirmed",
[
("total", confirmed_total),
("spendable", balance.confirmed),
("immature", balance.immature),
],
);
print_balances(
"unconfirmed",
[
("total", unconfirmed_total),
("trusted", balance.trusted_pending),
("untrusted", balance.untrusted_pending),
],
);
Ok(())
}
pub fn run_txo_cmd<A: Anchor, O: ChainOracle>(
graph: &KeychainTxGraph<A>,
chain: &O,
network: Network,
cmd: TxOutCmd,
) -> anyhow::Result<()>
where
O::Error: std::error::Error + Send + Sync + 'static,
{
let chain_tip = chain.get_chain_tip()?.unwrap_or_default();
let outpoints = graph.index.outpoints().iter().cloned();
match cmd {
TxOutCmd::List {
spent,
unspent,
confirmed,
unconfirmed,
} => {
let txouts = graph
.graph()
.try_filter_chain_txouts(chain, chain_tip, outpoints)
.filter(|r| match r {
Ok((_, full_txo)) => match (spent, unspent) {
(true, false) => full_txo.spent_by.is_some(),
(false, true) => full_txo.spent_by.is_none(),
_ => true,
},
// always keep errored items
Err(_) => true,
})
.filter(|r| match r {
Ok((_, full_txo)) => match (confirmed, unconfirmed) {
(true, false) => full_txo.chain_position.is_confirmed(),
(false, true) => !full_txo.chain_position.is_confirmed(),
_ => true,
},
// always keep errored items
Err(_) => true,
})
.collect::<Result<Vec<_>, _>>()?;
for (spk_i, full_txo) in txouts {
let addr = Address::from_script(&full_txo.txout.script_pubkey, network)?;
println!(
"{:?} {} {} {} spent:{:?}",
spk_i, full_txo.txout.value, full_txo.outpoint, addr, full_txo.spent_by
)
}
Ok(())
}
}
}
#[allow(clippy::too_many_arguments)]
pub fn run_send_cmd<A: Anchor, O: ChainOracle, C>(
graph: &Mutex<KeychainTxGraph<A>>,
db: &Mutex<Database<'_, C>>,
chain: &O,
keymap: &HashMap<DescriptorPublicKey, DescriptorSecretKey>,
cs_algorithm: CoinSelectionAlgo,
address: Address,
value: u64,
broadcast: impl FnOnce(&Transaction) -> anyhow::Result<()>,
) -> anyhow::Result<()>
where
O::Error: std::error::Error + Send + Sync + 'static,
C: Default + Append + DeserializeOwned + Serialize + From<KeychainAdditions<A>>,
{
let (transaction, change_index) = {
let graph = &mut *graph.lock().unwrap();
// take a mutable ref to construct the tx -- the lock is only held for a short time while building it.
let (tx, change_info) = create_tx(graph, chain, keymap, cs_algorithm, address, value)?;
if let Some((index_additions, (change_keychain, index))) = change_info {
// We must first persist to disk the fact that we've got a new address from the
// change keychain so future scans will find the tx we're about to broadcast.
// If we're unable to persist this, then we don't want to broadcast.
{
let db = &mut *db.lock().unwrap();
db.stage(C::from(KeychainAdditions::from(index_additions)));
db.commit()?;
}
// We don't want other callers/threads to use this address while we're using it
// but we also don't want to scan the tx we just created because it's not
// technically in the blockchain yet.
graph.index.mark_used(&change_keychain, index);
(tx, Some((change_keychain, index)))
} else {
(tx, None)
}
};
match (broadcast)(&transaction) {
Ok(_) => {
println!("Broadcasted Tx : {}", transaction.txid());
let keychain_additions = graph.lock().unwrap().insert_tx(&transaction, None, None);
// We know the tx is at least unconfirmed now. Note that if persisting here fails,
// it's not a big deal since we can always find it again from the
// blockchain.
db.lock().unwrap().stage(C::from(keychain_additions));
Ok(())
}
Err(e) => {
if let Some((keychain, index)) = change_index {
// We failed to broadcast, so allow our change address to be used in the future
graph.lock().unwrap().index.unmark_used(&keychain, index);
}
Err(e)
}
}
}
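The ordering above is the part worth internalizing: the revealed change index is committed before broadcasting, and the change address is only un-marked if the broadcast fails. The same contract, stripped of the wallet plumbing (this helper and its names are illustrative, not part of `example_cli`):

fn broadcast_with_persisted_change<E>(
    persist_change: impl FnOnce() -> Result<(), E>,
    broadcast: impl FnOnce() -> Result<(), E>,
    unmark_change_used: impl FnOnce(),
) -> Result<(), E> {
    // 1. persist the newly revealed change index first; if this fails we must
    //    not broadcast, or a future scan could miss the transaction entirely
    persist_change()?;
    // 2. only now hand the tx to the network
    if let Err(e) = broadcast() {
        // 3. broadcast failed, so free the change address for future use
        unmark_change_used();
        return Err(e);
    }
    Ok(())
}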
#[allow(clippy::type_complexity)]
pub fn create_tx<A: Anchor, O: ChainOracle>(
graph: &mut KeychainTxGraph<A>,
chain: &O,
keymap: &HashMap<DescriptorPublicKey, DescriptorSecretKey>,
cs_algorithm: CoinSelectionAlgo,
address: Address,
value: u64,
) -> anyhow::Result<(
Transaction,
Option<(DerivationAdditions<Keychain>, (Keychain, u32))>,
)>
where
O::Error: std::error::Error + Send + Sync + 'static,
{
let mut additions = DerivationAdditions::default();
let assets = bdk_tmp_plan::Assets {
keys: keymap.iter().map(|(pk, _)| pk.clone()).collect(),
..Default::default()
};
// TODO use planning module
let mut candidates = planned_utxos(graph, chain, &assets)?;
// apply coin selection algorithm
match cs_algorithm {
CoinSelectionAlgo::LargestFirst => {
candidates.sort_by_key(|(_, utxo)| Reverse(utxo.txout.value))
}
CoinSelectionAlgo::SmallestFirst => candidates.sort_by_key(|(_, utxo)| utxo.txout.value),
CoinSelectionAlgo::OldestFirst => {
candidates.sort_by_key(|(_, utxo)| utxo.chain_position.clone())
}
CoinSelectionAlgo::NewestFirst => {
candidates.sort_by_key(|(_, utxo)| Reverse(utxo.chain_position.clone()))
}
CoinSelectionAlgo::BranchAndBound => {}
}
// turn the txos we chose into weight and value
let wv_candidates = candidates
.iter()
.map(|(plan, utxo)| {
WeightedValue::new(
utxo.txout.value,
plan.expected_weight() as _,
plan.witness_version().is_some(),
)
})
.collect();
let mut outputs = vec![TxOut {
value,
script_pubkey: address.script_pubkey(),
}];
let internal_keychain = if graph.index.keychains().get(&Keychain::Internal).is_some() {
Keychain::Internal
} else {
Keychain::External
};
let ((change_index, change_script), change_additions) =
graph.index.next_unused_spk(&internal_keychain);
additions.append(change_additions);
// Clone to drop the immutable reference.
let change_script = change_script.clone();
let change_plan = bdk_tmp_plan::plan_satisfaction(
&graph
.index
.keychains()
.get(&internal_keychain)
.expect("must exist")
.at_derivation_index(change_index),
&assets,
)
.expect("failed to obtain change plan");
let mut change_output = TxOut {
value: 0,
script_pubkey: change_script,
};
let cs_opts = CoinSelectorOpt {
target_feerate: 0.5,
min_drain_value: graph
.index
.keychains()
.get(&internal_keychain)
.expect("must exist")
.dust_value(),
..CoinSelectorOpt::fund_outputs(
&outputs,
&change_output,
change_plan.expected_weight() as u32,
)
};
// TODO: How can we make it easy to shuffle in order of inputs and outputs here?
// apply coin selection by saying we need to fund these outputs
let mut coin_selector = CoinSelector::new(&wv_candidates, &cs_opts);
// just select coins in the order provided until we have enough
// only use the first result (least waste)
let selection = match cs_algorithm {
CoinSelectionAlgo::BranchAndBound => {
coin_select_bnb(Duration::from_secs(10), coin_selector.clone())
.map_or_else(|| coin_selector.select_until_finished(), |cs| cs.finish())?
}
_ => coin_selector.select_until_finished()?,
};
let (_, selection_meta) = selection.best_strategy();
// get the selected utxos
let selected_txos = selection.apply_selection(&candidates).collect::<Vec<_>>();
if let Some(drain_value) = selection_meta.drain_value {
change_output.value = drain_value;
// if the selection tells us to use change and the change value is sufficient, we add it as an output
outputs.push(change_output)
}
let mut transaction = Transaction {
version: 0x02,
// because the temporary planning module does not support timelocks, we can use the chain
// tip as the `lock_time` for anti-fee-sniping purposes
lock_time: chain
.get_chain_tip()?
.and_then(|block_id| LockTime::from_height(block_id.height).ok())
.unwrap_or(LockTime::ZERO)
.into(),
input: selected_txos
.iter()
.map(|(_, utxo)| TxIn {
previous_output: utxo.outpoint,
sequence: Sequence::ENABLE_RBF_NO_LOCKTIME,
..Default::default()
})
.collect(),
output: outputs,
};
let prevouts = selected_txos
.iter()
.map(|(_, utxo)| utxo.txout.clone())
.collect::<Vec<_>>();
let sighash_prevouts = Prevouts::All(&prevouts);
// first, set tx values for the plan so that we don't change them while signing
for (i, (plan, _)) in selected_txos.iter().enumerate() {
if let Some(sequence) = plan.required_sequence() {
transaction.input[i].sequence = sequence
}
}
// create a short lived transaction
let _sighash_tx = transaction.clone();
let mut sighash_cache = SighashCache::new(&_sighash_tx);
for (i, (plan, _)) in selected_txos.iter().enumerate() {
let requirements = plan.requirements();
let mut auth_data = bdk_tmp_plan::SatisfactionMaterial::default();
assert!(
!requirements.requires_hash_preimages(),
"can't have hash pre-images since we didn't provide any."
);
assert!(
requirements.signatures.sign_with_keymap(
i,
keymap,
&sighash_prevouts,
None,
None,
&mut sighash_cache,
&mut auth_data,
&Secp256k1::default(),
)?,
"we should have signed with this input."
);
match plan.try_complete(&auth_data) {
bdk_tmp_plan::PlanState::Complete {
final_script_sig,
final_script_witness,
} => {
if let Some(witness) = final_script_witness {
transaction.input[i].witness = witness;
}
if let Some(script_sig) = final_script_sig {
transaction.input[i].script_sig = script_sig;
}
}
bdk_tmp_plan::PlanState::Incomplete(_) => {
return Err(anyhow::anyhow!(
"we weren't able to complete the plan with our keys."
));
}
}
}
let change_info = if selection_meta.drain_value.is_some() {
Some((additions, (internal_keychain, change_index)))
} else {
None
};
Ok((transaction, change_info))
}
#[allow(clippy::type_complexity)]
pub fn planned_utxos<A: Anchor, O: ChainOracle, K: Clone + bdk_tmp_plan::CanDerive>(
graph: &KeychainTxGraph<A>,
chain: &O,
assets: &bdk_tmp_plan::Assets<K>,
) -> Result<Vec<(bdk_tmp_plan::Plan<K>, FullTxOut<A>)>, O::Error> {
let chain_tip = chain.get_chain_tip()?.unwrap_or_default();
let outpoints = graph.index.outpoints().iter().cloned();
graph
.graph()
.try_filter_chain_unspents(chain, chain_tip, outpoints)
.filter_map(
#[allow(clippy::type_complexity)]
|r| -> Option<Result<(bdk_tmp_plan::Plan<K>, FullTxOut<A>), _>> {
let (k, i, full_txo) = match r {
Err(err) => return Some(Err(err)),
Ok(((k, i), full_txo)) => (k, i, full_txo),
};
let desc = graph
.index
.keychains()
.get(&k)
.expect("keychain must exist")
.at_derivation_index(i);
let plan = bdk_tmp_plan::plan_satisfaction(&desc, assets)?;
Some(Ok((plan, full_txo)))
},
)
.collect()
}
pub fn handle_commands<S: clap::Subcommand, A: Anchor, O: ChainOracle, C>(
graph: &Mutex<KeychainTxGraph<A>>,
db: &Mutex<Database<C>>,
chain: &Mutex<O>,
keymap: &HashMap<DescriptorPublicKey, DescriptorSecretKey>,
network: Network,
broadcast: impl FnOnce(&Transaction) -> anyhow::Result<()>,
cmd: Commands<S>,
) -> anyhow::Result<()>
where
O::Error: std::error::Error + Send + Sync + 'static,
C: Default + Append + DeserializeOwned + Serialize + From<KeychainAdditions<A>>,
{
match cmd {
Commands::ChainSpecific(_) => unreachable!("example code should handle this!"),
Commands::Address { addr_cmd } => {
let graph = &mut *graph.lock().unwrap();
run_address_cmd(graph, db, network, addr_cmd)
}
Commands::Balance => {
let graph = &*graph.lock().unwrap();
let chain = &*chain.lock().unwrap();
run_balance_cmd(graph, chain).map_err(anyhow::Error::from)
}
Commands::TxOut { txout_cmd } => {
let graph = &*graph.lock().unwrap();
let chain = &*chain.lock().unwrap();
run_txo_cmd(graph, chain, network, txout_cmd)
}
Commands::Send {
value,
address,
coin_select,
} => {
let chain = &*chain.lock().unwrap();
run_send_cmd(
graph,
db,
chain,
keymap,
coin_select,
address,
value,
broadcast,
)
}
}
}
#[allow(clippy::type_complexity)]
pub fn init<'m, S: clap::Subcommand, C>(
db_magic: &'m [u8],
db_default_path: &str,
) -> anyhow::Result<(
Args<S>,
KeyMap,
KeychainTxOutIndex<Keychain>,
Mutex<Database<'m, C>>,
C,
)>
where
C: Default + Append + Serialize + DeserializeOwned,
{
if std::env::var("BDK_DB_PATH").is_err() {
std::env::set_var("BDK_DB_PATH", db_default_path);
}
let args = Args::<S>::parse();
let secp = Secp256k1::default();
let mut index = KeychainTxOutIndex::<Keychain>::default();
let (descriptor, mut keymap) =
Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, &args.descriptor)?;
index.add_keychain(Keychain::External, descriptor);
if let Some((internal_descriptor, internal_keymap)) = args
.change_descriptor
.as_ref()
.map(|desc_str| Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, desc_str))
.transpose()?
{
keymap.extend(internal_keymap);
index.add_keychain(Keychain::Internal, internal_descriptor);
}
let mut db_backend = match Store::<'m, C>::new_from_path(db_magic, &args.db_path) {
Ok(db_backend) => db_backend,
// we cannot return `err` directly as it has lifetime `'m`
Err(err) => return Err(anyhow::anyhow!("failed to init db backend: {:?}", err)),
};
let init_changeset = db_backend.load_from_persistence()?;
Ok((
args,
keymap,
index,
Mutex::new(Database::new(db_backend)),
init_changeset,
))
}
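One easy-to-miss detail in `init`: the per-example database path is injected by seeding the `BDK_DB_PATH` environment variable before `Args::parse()`, so it takes precedence over the hard-coded `default_value` while still losing to an explicit `--db-path`. The same trick in isolation (the struct below is illustrative, not from this crate):

use clap::Parser;
use std::path::PathBuf;

#[derive(Parser)]
struct DemoArgs {
    // precedence: explicit flag > env var > default_value
    #[clap(env = "BDK_DB_PATH", long, default_value = ".bdk_example_db")]
    db_path: PathBuf,
}

fn parse_with_default(db_default_path: &str) -> DemoArgs {
    // seed the env var so each example can supply its own default path
    if std::env::var("BDK_DB_PATH").is_err() {
        std::env::set_var("BDK_DB_PATH", db_default_path);
    }
    DemoArgs::parse()
}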

View File

@@ -1,11 +0,0 @@
[package]
name = "example_electrum"
version = "0.2.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../../crates/chain", features = ["serde"] }
bdk_electrum = { path = "../../crates/electrum" }
example_cli = { path = "../example_cli" }

View File

@@ -1,318 +0,0 @@
use std::{
collections::BTreeMap,
io::{self, Write},
sync::Mutex,
};
use bdk_chain::{
bitcoin::{Address, BlockHash, Network, OutPoint, Txid},
indexed_tx_graph::{IndexedAdditions, IndexedTxGraph},
keychain::LocalChangeSet,
local_chain::LocalChain,
Append, ConfirmationHeightAnchor,
};
use bdk_electrum::{
electrum_client::{self, ElectrumApi},
ElectrumExt, ElectrumUpdate,
};
use example_cli::{
anyhow::{self, Context},
clap::{self, Parser, Subcommand},
Keychain,
};
const DB_MAGIC: &[u8] = b"bdk_example_electrum";
const DB_PATH: &str = ".bdk_electrum_example.db";
const ASSUME_FINAL_DEPTH: usize = 10;
#[derive(Subcommand, Debug, Clone)]
enum ElectrumCommands {
/// Scans the addresses in the wallet using the electrum API.
Scan {
/// Once a gap of this many unused addresses is found for a keychain, the scan for that keychain stops.
#[clap(long, default_value = "5")]
stop_gap: usize,
#[clap(flatten)]
scan_options: ScanOptions,
},
/// Scans particular addresses using the electrum API.
Sync {
/// Scan all the unused addresses.
#[clap(long)]
unused_spks: bool,
/// Scan every address that you have derived.
#[clap(long)]
all_spks: bool,
/// Scan unspent outpoints for spends, or for changes to the confirmation status of the transactions they reside in.
#[clap(long)]
utxos: bool,
/// Scan unconfirmed transactions for updates.
#[clap(long)]
unconfirmed: bool,
#[clap(flatten)]
scan_options: ScanOptions,
},
}
#[derive(Parser, Debug, Clone, PartialEq)]
pub struct ScanOptions {
/// Set the batch size for each script-history request made to the Electrum client.
#[clap(long, default_value = "25")]
pub batch_size: usize,
}
type ChangeSet = LocalChangeSet<Keychain, ConfirmationHeightAnchor>;
fn main() -> anyhow::Result<()> {
let (args, keymap, index, db, init_changeset) =
example_cli::init::<ElectrumCommands, ChangeSet>(DB_MAGIC, DB_PATH)?;
let graph = Mutex::new({
let mut graph = IndexedTxGraph::new(index);
graph.apply_additions(init_changeset.indexed_additions);
graph
});
let chain = Mutex::new({
let mut chain = LocalChain::default();
chain.apply_changeset(init_changeset.chain_changeset);
chain
});
let electrum_url = match args.network {
Network::Bitcoin => "ssl://electrum.blockstream.info:50002",
Network::Testnet => "ssl://electrum.blockstream.info:60002",
Network::Regtest => "tcp://localhost:60401",
Network::Signet => "tcp://signet-electrumx.wakiyamap.dev:50001",
};
let config = electrum_client::Config::builder()
.validate_domain(matches!(args.network, Network::Bitcoin))
.build();
let client = electrum_client::Client::from_config(electrum_url, config)?;
let electrum_cmd = match &args.command {
example_cli::Commands::ChainSpecific(electrum_cmd) => electrum_cmd,
general_cmd => {
let res = example_cli::handle_commands(
&graph,
&db,
&chain,
&keymap,
args.network,
|tx| {
client
.transaction_broadcast(tx)
.map(|_| ())
.map_err(anyhow::Error::from)
},
general_cmd.clone(),
);
db.lock().unwrap().commit()?;
return res;
}
};
let response = match electrum_cmd.clone() {
ElectrumCommands::Scan {
stop_gap,
scan_options,
} => {
let (keychain_spks, local_chain) = {
let graph = &*graph.lock().unwrap();
let chain = &*chain.lock().unwrap();
let keychain_spks = graph
.index
.spks_of_all_keychains()
.into_iter()
.map(|(keychain, iter)| {
let mut first = true;
let spk_iter = iter.inspect(move |(i, _)| {
if first {
eprint!("\nscanning {}: ", keychain);
first = false;
}
eprint!("{} ", i);
let _ = io::stderr().flush();
});
(keychain, spk_iter)
})
.collect::<BTreeMap<_, _>>();
let c = chain
.blocks()
.iter()
.rev()
.take(ASSUME_FINAL_DEPTH)
.map(|(k, v)| (*k, *v))
.collect::<BTreeMap<u32, BlockHash>>();
(keychain_spks, c)
};
client
.scan(
&local_chain,
keychain_spks,
core::iter::empty(),
core::iter::empty(),
stop_gap,
scan_options.batch_size,
)
.context("scanning the blockchain")?
}
ElectrumCommands::Sync {
mut unused_spks,
all_spks,
mut utxos,
mut unconfirmed,
scan_options,
} => {
// Get a short lock on the tracker to get the spks we're interested in
let graph = graph.lock().unwrap();
let chain = chain.lock().unwrap();
let chain_tip = chain.tip().unwrap_or_default();
if !(all_spks || unused_spks || utxos || unconfirmed) {
unused_spks = true;
unconfirmed = true;
utxos = true;
} else if all_spks {
unused_spks = false;
}
let mut spks: Box<dyn Iterator<Item = bdk_chain::bitcoin::Script>> =
Box::new(core::iter::empty());
if all_spks {
let all_spks = graph
.index
.all_spks()
.iter()
.map(|(k, v)| (*k, v.clone()))
.collect::<Vec<_>>();
spks = Box::new(spks.chain(all_spks.into_iter().map(|(index, script)| {
eprintln!("scanning {:?}", index);
script
})));
}
if unused_spks {
let unused_spks = graph
.index
.unused_spks(..)
.map(|(k, v)| (*k, v.clone()))
.collect::<Vec<_>>();
spks = Box::new(spks.chain(unused_spks.into_iter().map(|(index, script)| {
eprintln!(
"Checking if address {} {:?} has been used",
Address::from_script(&script, args.network).unwrap(),
index
);
script
})));
}
let mut outpoints: Box<dyn Iterator<Item = OutPoint>> = Box::new(core::iter::empty());
if utxos {
let init_outpoints = graph.index.outpoints().iter().cloned();
let utxos = graph
.graph()
.filter_chain_unspents(&*chain, chain_tip, init_outpoints)
.map(|(_, utxo)| utxo)
.collect::<Vec<_>>();
outpoints = Box::new(
utxos
.into_iter()
.inspect(|utxo| {
eprintln!(
"Checking if outpoint {} (value: {}) has been spent",
utxo.outpoint, utxo.txout.value
);
})
.map(|utxo| utxo.outpoint),
);
};
let mut txids: Box<dyn Iterator<Item = Txid>> = Box::new(core::iter::empty());
if unconfirmed {
let unconfirmed_txids = graph
.graph()
.list_chain_txs(&*chain, chain_tip)
.filter(|canonical_tx| !canonical_tx.observed_as.is_confirmed())
.map(|canonical_tx| canonical_tx.node.txid)
.collect::<Vec<Txid>>();
txids = Box::new(unconfirmed_txids.into_iter().inspect(|txid| {
eprintln!("Checking if {} is confirmed yet", txid);
}));
}
let c = chain
.blocks()
.iter()
.rev()
.take(ASSUME_FINAL_DEPTH)
.map(|(k, v)| (*k, *v))
.collect::<BTreeMap<u32, BlockHash>>();
// drop lock on graph and chain
drop((graph, chain));
let update = client
.scan_without_keychain(&c, spks, txids, outpoints, scan_options.batch_size)
.context("scanning the blockchain")?;
ElectrumUpdate {
graph_update: update.graph_update,
chain_update: update.chain_update,
keychain_update: BTreeMap::new(),
}
}
};
let missing_txids = {
let graph = &*graph.lock().unwrap();
response.missing_full_txs(graph.graph())
};
let now = std::time::UNIX_EPOCH
.elapsed()
.expect("must get time")
.as_secs();
let final_update = response.finalize(&client, Some(now), missing_txids)?;
let db_changeset = {
let mut chain = chain.lock().unwrap();
let mut graph = graph.lock().unwrap();
let chain_changeset = chain.apply_update(final_update.chain)?;
let indexed_additions = {
let mut additions = IndexedAdditions::<ConfirmationHeightAnchor, _>::default();
let (_, index_additions) = graph.index.reveal_to_target_multi(&final_update.keychain);
additions.append(IndexedAdditions {
index_additions,
..Default::default()
});
additions.append(graph.apply_update(final_update.graph));
additions
};
ChangeSet {
indexed_additions,
chain_changeset,
}
};
let mut db = db.lock().unwrap();
db.stage(db_changeset);
db.commit()?;
Ok(())
}

View File

@@ -1,9 +0,0 @@
[package]
name = "wallet_electrum_example"
version = "0.2.0"
edition = "2021"
[dependencies]
bdk = { path = "../../crates/bdk" }
bdk_electrum = { path = "../../crates/electrum" }
bdk_file_store = { path = "../../crates/file_store" }

View File

@@ -1,93 +0,0 @@
const DB_MAGIC: &str = "bdk_wallet_electrum_example";
const SEND_AMOUNT: u64 = 5000;
const STOP_GAP: usize = 50;
const BATCH_SIZE: usize = 5;
use std::io::Write;
use std::str::FromStr;
use bdk::bitcoin::Address;
use bdk::SignOptions;
use bdk::{bitcoin::Network, Wallet};
use bdk_electrum::electrum_client::{self, ElectrumApi};
use bdk_electrum::ElectrumExt;
use bdk_file_store::Store;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let db_path = std::env::temp_dir().join("bdk-electrum-example");
let db = Store::<bdk::wallet::ChangeSet>::new_from_path(DB_MAGIC.as_bytes(), db_path)?;
let external_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/0/*)";
let internal_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/1/*)";
let mut wallet = Wallet::new(
external_descriptor,
Some(internal_descriptor),
db,
Network::Testnet,
)?;
let address = wallet.get_address(bdk::wallet::AddressIndex::New);
println!("Generated Address: {}", address);
let balance = wallet.get_balance();
println!("Wallet balance before syncing: {} sats", balance.total());
print!("Syncing...");
let client = electrum_client::Client::new("ssl://electrum.blockstream.info:60002")?;
let local_chain = wallet.checkpoints();
let keychain_spks = wallet
.spks_of_all_keychains()
.into_iter()
.map(|(k, k_spks)| {
let mut once = Some(());
let mut stdout = std::io::stdout();
let k_spks = k_spks
.inspect(move |(spk_i, _)| match once.take() {
Some(_) => print!("\nScanning keychain [{:?}]", k),
None => print!(" {:<3}", spk_i),
})
.inspect(move |_| stdout.flush().expect("must flush"));
(k, k_spks)
})
.collect();
let electrum_update =
client.scan(local_chain, keychain_spks, None, None, STOP_GAP, BATCH_SIZE)?;
println!();
let missing = electrum_update.missing_full_txs(wallet.as_ref());
let update = electrum_update.finalize_as_confirmation_time(&client, None, missing)?;
wallet.apply_update(update)?;
wallet.commit()?;
let balance = wallet.get_balance();
println!("Wallet balance after syncing: {} sats", balance.total());
if balance.total() < SEND_AMOUNT {
println!(
"Please send at least {} sats to the receiving address",
SEND_AMOUNT
);
std::process::exit(0);
}
let faucet_address = Address::from_str("mkHS9ne12qx9pS9VojpwU5xtRd4T7X7ZUt")?;
let mut tx_builder = wallet.build_tx();
tx_builder
.add_recipient(faucet_address.script_pubkey(), SEND_AMOUNT)
.enable_rbf();
let (mut psbt, _) = tx_builder.finish()?;
let finalized = wallet.sign(&mut psbt, SignOptions::default())?;
assert!(finalized);
let tx = psbt.extract_tx();
client.transaction_broadcast(&tx)?;
println!("Tx broadcasted! Txid: {}", tx.txid());
Ok(())
}

View File

@@ -1,12 +0,0 @@
[package]
name = "wallet_esplora"
version = "0.2.0"
edition = "2021"
publish = false
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk = { path = "../../crates/bdk" }
bdk_esplora = { path = "../../crates/esplora", features = ["blocking"] }
bdk_file_store = { path = "../../crates/file_store" }

View File

@@ -1,94 +0,0 @@
const DB_MAGIC: &str = "bdk_wallet_esplora_example";
const SEND_AMOUNT: u64 = 5000;
const STOP_GAP: usize = 50;
const PARALLEL_REQUESTS: usize = 5;
use std::{io::Write, str::FromStr};
use bdk::{
bitcoin::{Address, Network},
wallet::AddressIndex,
SignOptions, Wallet,
};
use bdk_esplora::{esplora_client, EsploraExt};
use bdk_file_store::Store;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let db_path = std::env::temp_dir().join("bdk-esplora-example");
let db = Store::<bdk::wallet::ChangeSet>::new_from_path(DB_MAGIC.as_bytes(), db_path)?;
let external_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/0/*)";
let internal_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/1/*)";
let mut wallet = Wallet::new(
external_descriptor,
Some(internal_descriptor),
db,
Network::Testnet,
)?;
let address = wallet.get_address(AddressIndex::New);
println!("Generated Address: {}", address);
let balance = wallet.get_balance();
println!("Wallet balance before syncing: {} sats", balance.total());
print!("Syncing...");
let client =
esplora_client::Builder::new("https://blockstream.info/testnet/api").build_blocking()?;
let local_chain = wallet.checkpoints();
let keychain_spks = wallet
.spks_of_all_keychains()
.into_iter()
.map(|(k, k_spks)| {
let mut once = Some(());
let mut stdout = std::io::stdout();
let k_spks = k_spks
.inspect(move |(spk_i, _)| match once.take() {
Some(_) => print!("\nScanning keychain [{:?}]", k),
None => print!(" {:<3}", spk_i),
})
.inspect(move |_| stdout.flush().expect("must flush"));
(k, k_spks)
})
.collect();
let update = client.scan(
local_chain,
keychain_spks,
None,
None,
STOP_GAP,
PARALLEL_REQUESTS,
)?;
println!();
wallet.apply_update(update)?;
wallet.commit()?;
let balance = wallet.get_balance();
println!("Wallet balance after syncing: {} sats", balance.total());
if balance.total() < SEND_AMOUNT {
println!(
"Please send at least {} sats to the receiving address",
SEND_AMOUNT
);
std::process::exit(0);
}
let faucet_address = Address::from_str("mkHS9ne12qx9pS9VojpwU5xtRd4T7X7ZUt")?;
let mut tx_builder = wallet.build_tx();
tx_builder
.add_recipient(faucet_address.script_pubkey(), SEND_AMOUNT)
.enable_rbf();
let (mut psbt, _) = tx_builder.finish()?;
let finalized = wallet.sign(&mut psbt, SignOptions::default())?;
assert!(finalized);
let tx = psbt.extract_tx();
client.broadcast(&tx)?;
println!("Tx broadcasted! Txid: {}", tx.txid());
Ok(())
}

View File

@@ -1,12 +0,0 @@
[package]
name = "wallet_esplora_async"
version = "0.2.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk = { path = "../../crates/bdk" }
bdk_esplora = { path = "../../crates/esplora", features = ["async-https"] }
bdk_file_store = { path = "../../crates/file_store" }
tokio = { version = "1", features = ["rt", "rt-multi-thread", "macros"] }

View File

@@ -1,97 +0,0 @@
use std::{io::Write, str::FromStr};
use bdk::{
bitcoin::{Address, Network},
wallet::AddressIndex,
SignOptions, Wallet,
};
use bdk_esplora::{esplora_client, EsploraAsyncExt};
use bdk_file_store::Store;
const DB_MAGIC: &str = "bdk_wallet_esplora_async_example";
const SEND_AMOUNT: u64 = 5000;
const STOP_GAP: usize = 50;
const PARALLEL_REQUESTS: usize = 5;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let db_path = std::env::temp_dir().join("bdk-esplora-async-example");
let db = Store::<bdk::wallet::ChangeSet>::new_from_path(DB_MAGIC.as_bytes(), db_path)?;
let external_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/0/*)";
let internal_descriptor = "wpkh(tprv8ZgxMBicQKsPdy6LMhUtFHAgpocR8GC6QmwMSFpZs7h6Eziw3SpThFfczTDh5rW2krkqffa11UpX3XkeTTB2FvzZKWXqPY54Y6Rq4AQ5R8L/84'/0'/0'/1/*)";
let mut wallet = Wallet::new(
external_descriptor,
Some(internal_descriptor),
db,
Network::Testnet,
)?;
let address = wallet.get_address(AddressIndex::New);
println!("Generated Address: {}", address);
let balance = wallet.get_balance();
println!("Wallet balance before syncing: {} sats", balance.total());
print!("Syncing...");
let client =
esplora_client::Builder::new("https://blockstream.info/testnet/api").build_async()?;
let local_chain = wallet.checkpoints();
let keychain_spks = wallet
.spks_of_all_keychains()
.into_iter()
.map(|(k, k_spks)| {
let mut once = Some(());
let mut stdout = std::io::stdout();
let k_spks = k_spks
.inspect(move |(spk_i, _)| match once.take() {
Some(_) => print!("\nScanning keychain [{:?}]", k),
None => print!(" {:<3}", spk_i),
})
.inspect(move |_| stdout.flush().expect("must flush"));
(k, k_spks)
})
.collect();
let update = client
.scan(
local_chain,
keychain_spks,
[],
[],
STOP_GAP,
PARALLEL_REQUESTS,
)
.await?;
println!();
wallet.apply_update(update)?;
wallet.commit()?;
let balance = wallet.get_balance();
println!("Wallet balance after syncing: {} sats", balance.total());
if balance.total() < SEND_AMOUNT {
println!(
"Please send at least {} sats to the receiving address",
SEND_AMOUNT
);
std::process::exit(0);
}
let faucet_address = Address::from_str("mkHS9ne12qx9pS9VojpwU5xtRd4T7X7ZUt")?;
let mut tx_builder = wallet.build_tx();
tx_builder
.add_recipient(faucet_address.script_pubkey(), SEND_AMOUNT)
.enable_rbf();
let (mut psbt, _) = tx_builder.finish()?;
let finalized = wallet.sign(&mut psbt, SignOptions::default())?;
assert!(finalized);
let tx = psbt.extract_tx();
client.broadcast(&tx).await?;
println!("Tx broadcasted! Txid: {}", tx.txid());
Ok(())
}

View File

@@ -0,0 +1,41 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bdk::blockchain::compact_filters::*;
use bdk::database::MemoryDatabase;
use bdk::*;
use bitcoin::*;
use blockchain::compact_filters::CompactFiltersBlockchain;
use blockchain::compact_filters::CompactFiltersError;
use log::info;
use std::sync::Arc;
/// This will return the wallet balance using compact filters.
/// It requires a synced local Bitcoin Core node (0.21 or newer) running on testnet with
/// `blockfilterindex=1` and `peerblockfilters=1`.
fn main() -> Result<(), CompactFiltersError> {
env_logger::init();
info!("start");
let num_threads = 4;
let mempool = Arc::new(Mempool::default());
let peers = (0..num_threads)
.map(|_| Peer::connect("localhost:18333", Arc::clone(&mempool), Network::Testnet))
.collect::<Result<_, _>>()?;
let blockchain = CompactFiltersBlockchain::new(peers, "./wallet-filters", Some(500_000))?;
info!("done {:?}", blockchain);
let descriptor = "wpkh(tpubD6NzVbkrYhZ4X2yy78HWrr1M9NT8dKeWfzNiQqDdMqqa9UmmGztGGz6TaLFGsLfdft5iu32gxq1T4eMNxExNNWzVCpf9Y6JZi5TnqoC9wJq/*)";
let database = MemoryDatabase::default();
let wallet = Arc::new(Wallet::new(descriptor, None, Network::Testnet, database).unwrap());
wallet.sync(&blockchain, SyncOptions::default()).unwrap();
info!("balance: {}", wallet.get_balance()?);
Ok(())
}
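The node prerequisites in the doc comment translate to a `bitcoin.conf` along these lines (a sketch under the stated assumptions; only the two filter settings are strictly required by this example):

# bitcoin.conf for the compact-filters example (testnet)
testnet=1
# build the BIP158 compact block filter index
blockfilterindex=1
# serve BIP157 filters to peers; Peer::connect above relies on this
peerblockfilters=1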

View File

@@ -24,6 +24,7 @@ use bitcoin::Network;
use miniscript::policy::Concrete;
use miniscript::Descriptor;
use bdk::database::memory::MemoryDatabase;
use bdk::wallet::AddressIndex::New;
use bdk::{KeychainKind, Wallet};
@@ -53,12 +54,14 @@ fn main() -> Result<(), Box<dyn Error>> {
info!("Compiled into following Descriptor: \n{}", descriptor);
let database = MemoryDatabase::new();
// Create a new wallet from this descriptor
let mut wallet = Wallet::new_no_persist(&format!("{}", descriptor), None, Network::Regtest)?;
let wallet = Wallet::new(&format!("{}", descriptor), None, Network::Regtest, database)?;
info!(
"First derived address from the descriptor: \n{}",
wallet.get_address(New)
wallet.get_address(New)?
);
// BDK also has its own `Policy` structure to represent the spending condition in a more

View File

@@ -0,0 +1,87 @@
use std::str::FromStr;
use bdk::bitcoin::util::bip32::ExtendedPrivKey;
use bdk::bitcoin::Network;
use bdk::blockchain::{Blockchain, ElectrumBlockchain};
use bdk::database::MemoryDatabase;
use bdk::template::Bip84;
use bdk::wallet::export::FullyNodedExport;
use bdk::{KeychainKind, SyncOptions, Wallet};
use bdk::electrum_client::Client;
use bdk::wallet::AddressIndex;
use bitcoin::util::bip32;
pub mod utils;
use crate::utils::tx::build_signed_tx;
/// This will create a wallet from an xpriv and get the balance by connecting to an Electrum server.
/// If a sufficient amount is available, this will send a transaction to an address.
/// Otherwise, this will display a wallet address to receive funds.
///
/// This can be run with `cargo run --example electrum_backend` in the root folder.
fn main() {
let network = Network::Testnet;
let xpriv = "tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy";
let electrum_url = "ssl://electrum.blockstream.info:60002";
run(&network, electrum_url, xpriv);
}
fn create_wallet(network: &Network, xpriv: &ExtendedPrivKey) -> Wallet<MemoryDatabase> {
Wallet::new(
Bip84(*xpriv, KeychainKind::External),
Some(Bip84(*xpriv, KeychainKind::Internal)),
*network,
MemoryDatabase::default(),
)
.unwrap()
}
fn run(network: &Network, electrum_url: &str, xpriv: &str) {
let xpriv = bip32::ExtendedPrivKey::from_str(xpriv).unwrap();
// Apparently it works only with Electrs (not ElectrumX)
let blockchain = ElectrumBlockchain::from(Client::new(electrum_url).unwrap());
let wallet = create_wallet(network, &xpriv);
wallet.sync(&blockchain, SyncOptions::default()).unwrap();
let address = wallet.get_address(AddressIndex::New).unwrap().address;
println!("address: {}", address);
let balance = wallet.get_balance().unwrap();
println!("Available coins in BDK wallet : {} sats", balance);
if balance.confirmed > 6500 {
// the wallet sends the amount to itself.
let recipient_address = wallet
.get_address(AddressIndex::New)
.unwrap()
.address
.to_string();
let amount = 5359;
let tx = build_signed_tx(&wallet, &recipient_address, amount);
blockchain.broadcast(&tx).unwrap();
println!("tx id: {}", tx.txid());
} else {
println!("Insufficient Funds. Fund the wallet with the address above");
}
let export = FullyNodedExport::export_wallet(&wallet, "exported wallet", true)
.map_err(ToString::to_string)
.map_err(bdk::Error::Generic)
.unwrap();
println!("------\nWallet Backup: {}", export.to_string());
}
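This example (and the two esplora ones below) call into `utils::tx::build_signed_tx`, a module that is not part of this diff. Based on how the other examples in this tree build transactions, a plausible shape for it — explicitly an assumption, not the actual module — is:

use bdk::bitcoin::{Address, Transaction};

// Hypothetical sketch of utils::tx::build_signed_tx: build, sign, and
// extract a simple RBF-enabled send-to-address transaction.
pub fn build_signed_tx(
    wallet: &Wallet<MemoryDatabase>,
    recipient: &str,
    amount: u64,
) -> Transaction {
    let to = Address::from_str(recipient).expect("valid address");
    let mut builder = wallet.build_tx();
    builder.add_recipient(to.script_pubkey(), amount).enable_rbf();
    let (mut psbt, _details) = builder.finish().expect("tx built");
    let finalized = wallet
        .sign(&mut psbt, bdk::SignOptions::default())
        .expect("signing works");
    assert!(finalized);
    psbt.extract_tx()
}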

View File

@@ -0,0 +1,93 @@
use std::str::FromStr;
use bdk::blockchain::Blockchain;
use bdk::{
blockchain::esplora::EsploraBlockchain,
database::MemoryDatabase,
template::Bip84,
wallet::{export::FullyNodedExport, AddressIndex},
KeychainKind, SyncOptions, Wallet,
};
use bitcoin::{
util::bip32::{self, ExtendedPrivKey},
Network,
};
pub mod utils;
use crate::utils::tx::build_signed_tx;
/// This will create a wallet from an xpriv and get the balance by connecting to an Esplora server,
/// using non-blocking asynchronous calls with `reqwest`.
/// If a sufficient amount is available, this will send a transaction to an address.
/// Otherwise, this will display a wallet address to receive funds.
///
/// This can be run with `cargo run --no-default-features --features="use-esplora-reqwest, reqwest-default-tls, async-interface" --example esplora_backend_asynchronous`
/// in the root folder.
#[tokio::main(flavor = "current_thread")]
async fn main() {
let network = Network::Signet;
let xpriv = "tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy";
let esplora_url = "https://explorer.bc-2.jp/api";
run(&network, esplora_url, xpriv).await;
}
fn create_wallet(network: &Network, xpriv: &ExtendedPrivKey) -> Wallet<MemoryDatabase> {
Wallet::new(
Bip84(*xpriv, KeychainKind::External),
Some(Bip84(*xpriv, KeychainKind::Internal)),
*network,
MemoryDatabase::default(),
)
.unwrap()
}
async fn run(network: &Network, esplora_url: &str, xpriv: &str) {
let xpriv = bip32::ExtendedPrivKey::from_str(xpriv).unwrap();
let blockchain = EsploraBlockchain::new(esplora_url, 20);
let wallet = create_wallet(network, &xpriv);
wallet
.sync(&blockchain, SyncOptions::default())
.await
.unwrap();
let address = wallet.get_address(AddressIndex::New).unwrap().address;
println!("address: {}", address);
let balance = wallet.get_balance().unwrap();
println!("Available coins in BDK wallet : {} sats", balance);
if balance.confirmed > 10500 {
// the wallet sends the amount to itself.
let recipient_address = wallet
.get_address(AddressIndex::New)
.unwrap()
.address
.to_string();
let amount = 9359;
let tx = build_signed_tx(&wallet, &recipient_address, amount);
let _ = blockchain.broadcast(&tx);
println!("tx id: {}", tx.txid());
} else {
println!("Insufficient Funds. Fund the wallet with the address above");
}
let export = FullyNodedExport::export_wallet(&wallet, "exported wallet", true)
.map_err(ToString::to_string)
.map_err(bdk::Error::Generic)
.unwrap();
println!("------\nWallet Backup: {}", export.to_string());
}

View File

@@ -0,0 +1,89 @@
use std::str::FromStr;
use bdk::blockchain::Blockchain;
use bdk::{
blockchain::esplora::EsploraBlockchain,
database::MemoryDatabase,
template::Bip84,
wallet::{export::FullyNodedExport, AddressIndex},
KeychainKind, SyncOptions, Wallet,
};
use bitcoin::{
util::bip32::{self, ExtendedPrivKey},
Network,
};
pub mod utils;
use crate::utils::tx::build_signed_tx;
/// This will create a wallet from an xpriv and get the balance by connecting to an Esplora server,
/// using blocking calls with `ureq`.
/// If a sufficient amount is available, this will send a transaction to an address.
/// Otherwise, this will display a wallet address to receive funds.
///
/// This can be run with `cargo run --features=use-esplora-ureq --example esplora_backend_synchronous`
/// in the root folder.
fn main() {
let network = Network::Signet;
let xpriv = "tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy";
let esplora_url = "https://explorer.bc-2.jp/api";
run(&network, esplora_url, xpriv);
}
fn create_wallet(network: &Network, xpriv: &ExtendedPrivKey) -> Wallet<MemoryDatabase> {
Wallet::new(
Bip84(*xpriv, KeychainKind::External),
Some(Bip84(*xpriv, KeychainKind::Internal)),
*network,
MemoryDatabase::default(),
)
.unwrap()
}
fn run(network: &Network, esplora_url: &str, xpriv: &str) {
let xpriv = bip32::ExtendedPrivKey::from_str(xpriv).unwrap();
let blockchain = EsploraBlockchain::new(esplora_url, 20);
let wallet = create_wallet(network, &xpriv);
wallet.sync(&blockchain, SyncOptions::default()).unwrap();
let address = wallet.get_address(AddressIndex::New).unwrap().address;
println!("address: {}", address);
let balance = wallet.get_balance().unwrap();
println!("Available coins in BDK wallet : {} sats", balance);
if balance.confirmed > 10500 {
// the wallet sends the amount to itself.
let recipient_address = wallet
.get_address(AddressIndex::New)
.unwrap()
.address
.to_string();
let amount = 9359;
let tx = build_signed_tx(&wallet, &recipient_address, amount);
blockchain.broadcast(&tx).unwrap();
println!("tx id: {}", tx.txid());
} else {
println!("Insufficient Funds. Fund the wallet with the address above");
}
let export = FullyNodedExport::export_wallet(&wallet, "exported wallet", true)
.map_err(ToString::to_string)
.map_err(bdk::Error::Generic)
.unwrap();
println!("------\nWallet Backup: {}", export.to_string());
}

examples/hardware_signer.rs Normal file
View File

@@ -0,0 +1,105 @@
use bdk::bitcoin::{Address, Network};
use bdk::blockchain::{Blockchain, ElectrumBlockchain};
use bdk::database::MemoryDatabase;
use bdk::hwi::{types::HWIChain, HWIClient};
use bdk::miniscript::{Descriptor, DescriptorPublicKey};
use bdk::signer::SignerOrdering;
use bdk::wallet::{hardwaresigner::HWISigner, AddressIndex};
use bdk::{FeeRate, KeychainKind, SignOptions, SyncOptions, Wallet};
use electrum_client::Client;
use std::str::FromStr;
use std::sync::Arc;
// This example shows how to sync a wallet, create a transaction, sign it
// and broadcast it using an external hardware wallet.
// The hardware wallet must be connected to the computer and unlocked before
// running the example. Also, the `hwi` python package should be installed
// and available in the environment.
//
// To avoid loss of funds, consider using a hardware wallet simulator:
// * Coldcard: https://github.com/Coldcard/firmware
// * Ledger: https://github.com/LedgerHQ/speculos
// * Trezor: https://docs.trezor.io/trezor-firmware/core/emulator/index.html
fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("Hold tight, I'm connecting to your hardware wallet...");
// Listing all the available hardware wallet devices...
let mut devices = HWIClient::enumerate()?;
if devices.is_empty() {
panic!("No devices found. Either plug in a hardware wallet, or start a simulator.");
}
let first_device = devices.remove(0)?;
// ...and creating a client out of the first one
let client = HWIClient::get_client(&first_device, true, HWIChain::Test)?;
println!("Look what I found, a {}!", first_device.model);
// Getting the HW's public descriptors
let descriptors = client.get_descriptors::<Descriptor<DescriptorPublicKey>>(None)?;
println!(
"The hardware wallet's descriptor is: {}",
descriptors.receive[0]
);
// Creating a custom signer from the device
let custom_signer = HWISigner::from_device(&first_device, HWIChain::Test)?;
let mut wallet = Wallet::new(
descriptors.receive[0].clone(),
Some(descriptors.internal[0].clone()),
Network::Testnet,
MemoryDatabase::default(),
)?;
// Adding the hardware signer to the BDK wallet
wallet.add_signer(
KeychainKind::External,
SignerOrdering(200),
Arc::new(custom_signer),
);
// create client for Blockstream's testnet electrum server
let blockchain =
ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
println!("Syncing the wallet...");
wallet.sync(&blockchain, SyncOptions::default())?;
// get deposit address
let deposit_address = wallet.get_address(AddressIndex::New)?;
let balance = wallet.get_balance()?;
println!("Wallet balances in SATs: {}", balance);
if balance.get_total() < 10000 {
println!(
"Send some sats from the u01.net testnet faucet to address '{addr}'.\nFaucet URL: https://bitcoinfaucet.uo1.net/?to={addr}",
addr = deposit_address.address
);
return Ok(());
}
let return_address = Address::from_str("tb1ql7w62elx9ucw4pj5lgw4l028hmuw80sndtntxt")?;
let (mut psbt, _details) = {
let mut builder = wallet.build_tx();
builder
.drain_wallet()
.drain_to(return_address.script_pubkey())
.enable_rbf()
.fee_rate(FeeRate::from_sat_per_vb(5.0));
builder.finish()?
};
// `sign` will call the hardware wallet asking for a signature
assert!(
wallet.sign(&mut psbt, SignOptions::default())?,
"The hardware wallet couldn't finalize the transaction :("
);
println!("Let's broadcast your tx...");
let raw_transaction = psbt.extract_tx();
let txid = raw_transaction.txid();
blockchain.broadcast(&raw_transaction)?;
println!("Transaction broadcasted! TXID: {txid}.\nExplorer URL: https://mempool.space/testnet/tx/{txid}", txid = txid);
Ok(())
}

examples/psbt_signer.rs Normal file
View File

@@ -0,0 +1,120 @@
// Copyright (c) 2020-2022 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bdk::blockchain::{Blockchain, ElectrumBlockchain};
use bdk::database::MemoryDatabase;
use bdk::wallet::AddressIndex;
use bdk::{descriptor, SyncOptions};
use bdk::{FeeRate, SignOptions, Wallet};
use bitcoin::secp256k1::Secp256k1;
use bitcoin::{Address, Network};
use electrum_client::Client;
use miniscript::descriptor::DescriptorSecretKey;
use std::error::Error;
use std::str::FromStr;
/// This example shows how to sign and broadcast the transaction for a PSBT (Partially Signed
/// Bitcoin Transaction) for a single key, witness public key hash (WPKH) based descriptor wallet.
/// The electrum protocol is used to sync blockchain data from the testnet bitcoin network and
/// wallet data is stored in an ephemeral in-memory database. The process steps are:
/// 1. Create a "signing" wallet and a "watch-only" wallet based on the same private keys.
/// 2. Deposit testnet funds into the watch only wallet.
/// 3. Sync the watch only wallet and create a spending transaction to return all funds to the testnet faucet.
/// 4. Sync the signing wallet and sign and finalize the PSBT created by the watch only wallet.
/// 5. Broadcast the transactions from the finalized PSBT.
fn main() -> Result<(), Box<dyn Error>> {
// test key created with `bdk-cli key generate` and `bdk-cli key derive` commands
let external_secret_xkey = DescriptorSecretKey::from_str("[e9824965/84'/1'/0']tprv8fvem7qWxY3SGCQczQpRpqTKg455wf1zgixn6MZ4ze8gRfHjov5gXBQTadNfDgqs9ERbZZ3Bi1PNYrCCusFLucT39K525MWLpeURjHwUsfX/0/*").unwrap();
let internal_secret_xkey = DescriptorSecretKey::from_str("[e9824965/84'/1'/0']tprv8fvem7qWxY3SGCQczQpRpqTKg455wf1zgixn6MZ4ze8gRfHjov5gXBQTadNfDgqs9ERbZZ3Bi1PNYrCCusFLucT39K525MWLpeURjHwUsfX/1/*").unwrap();
let secp = Secp256k1::new();
let external_public_xkey = external_secret_xkey.to_public(&secp).unwrap();
let internal_public_xkey = internal_secret_xkey.to_public(&secp).unwrap();
let signing_external_descriptor = descriptor!(wpkh(external_secret_xkey)).unwrap();
let signing_internal_descriptor = descriptor!(wpkh(internal_secret_xkey)).unwrap();
let watch_only_external_descriptor = descriptor!(wpkh(external_public_xkey)).unwrap();
let watch_only_internal_descriptor = descriptor!(wpkh(internal_public_xkey)).unwrap();
// create client for Blockstream's testnet electrum server
let blockchain =
ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
// create watch only wallet
let watch_only_wallet: Wallet<MemoryDatabase> = Wallet::new(
watch_only_external_descriptor,
Some(watch_only_internal_descriptor),
Network::Testnet,
MemoryDatabase::default(),
)?;
// create signing wallet
let signing_wallet: Wallet<MemoryDatabase> = Wallet::new(
signing_external_descriptor,
Some(signing_internal_descriptor),
Network::Testnet,
MemoryDatabase::default(),
)?;
println!("Syncing watch only wallet.");
watch_only_wallet.sync(&blockchain, SyncOptions::default())?;
// get deposit address
let deposit_address = watch_only_wallet.get_address(AddressIndex::New)?;
let balance = watch_only_wallet.get_balance()?;
println!("Watch only wallet balances in SATs: {}", balance);
if balance.get_total() < 10000 {
println!(
"Send at least 10000 SATs (0.0001 BTC) from the u01.net testnet faucet to address '{addr}'.\nFaucet URL: https://bitcoinfaucet.uo1.net/?to={addr}",
addr = deposit_address.address
);
} else if balance.get_spendable() < 10000 {
println!(
"Wait for at least 10000 SATs of your wallet transactions to be confirmed...\nBe patient, this could take 10 mins or longer depending on how testnet is behaving."
);
for tx_details in watch_only_wallet
.list_transactions(false)?
.iter()
.filter(|txd| txd.received > 0 && txd.confirmation_time.is_none())
{
println!(
"See unconfirmed tx for {} SATs: https://mempool.space/testnet/tx/{}",
tx_details.received, tx_details.txid
);
}
} else {
println!("Creating a PSBT sending 9800 SATs plus fee to the u01.net testnet faucet return address 'tb1ql7w62elx9ucw4pj5lgw4l028hmuw80sndtntxt'.");
let return_address = Address::from_str("tb1ql7w62elx9ucw4pj5lgw4l028hmuw80sndtntxt")?;
let mut builder = watch_only_wallet.build_tx();
builder
.add_recipient(return_address.script_pubkey(), 9_800)
.enable_rbf()
.fee_rate(FeeRate::from_sat_per_vb(1.0));
let (mut psbt, details) = builder.finish()?;
println!("Transaction details: {:#?}", details);
println!("Unsigned PSBT: {}", psbt);
// Sign and finalize the PSBT with the signing wallet
let finalized = signing_wallet.sign(&mut psbt, SignOptions::default())?;
assert!(finalized, "The PSBT was not finalized!");
println!("The PSBT has been signed and finalized.");
// Broadcast the transaction
let raw_transaction = psbt.extract_tx();
let txid = raw_transaction.txid();
blockchain.broadcast(&raw_transaction)?;
println!("Transaction broadcast! TXID: {txid}.\nExplorer URL: https://mempool.space/testnet/tx/{txid}", txid = txid);
}
Ok(())
}

examples/rpcwallet.rs Normal file
View File

@@ -0,0 +1,229 @@
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bdk::bitcoin::secp256k1::Secp256k1;
use bdk::bitcoin::Amount;
use bdk::bitcoin::Network;
use bdk::bitcoincore_rpc::RpcApi;
use bdk::blockchain::rpc::{Auth, RpcBlockchain, RpcConfig};
use bdk::blockchain::ConfigurableBlockchain;
use bdk::keys::bip39::{Language, Mnemonic, WordCount};
use bdk::keys::{DerivableKey, GeneratableKey, GeneratedKey};
use bdk::miniscript::miniscript::Segwitv0;
use bdk::sled;
use bdk::template::Bip84;
use bdk::wallet::{signer::SignOptions, wallet_name_from_descriptor, AddressIndex, SyncOptions};
use bdk::KeychainKind;
use bdk::Wallet;
use bdk::blockchain::Blockchain;
use electrsd;
use std::error::Error;
use std::path::PathBuf;
use std::str::FromStr;
/// This example demonstrates a typical way to create a wallet and work with bdk.
///
/// This example bdk wallet is connected to a bitcoin core rpc regtest node,
/// and will attempt to receive, create and broadcast transactions.
///
/// To start a bitcoind regtest node programmatically, this example uses the
/// `electrsd` library, which is also a bdk dev-dependency.
///
/// But you can start your own bitcoind backend; the rest of the example should work fine.
fn main() -> Result<(), Box<dyn Error>> {
// -- Setting up background bitcoind process
println!(">> Setting up bitcoind");
// Start the bitcoind process
let bitcoind_conf = electrsd::bitcoind::Conf::default();
// electrsd will automatically download the bitcoin core binaries
let bitcoind_exe =
electrsd::bitcoind::downloaded_exe_path().expect("We should always have a downloaded path");
// Launch bitcoind and gather authentication access
let bitcoind = electrsd::bitcoind::BitcoinD::with_conf(bitcoind_exe, &bitcoind_conf).unwrap();
let bitcoind_auth = Auth::Cookie {
file: bitcoind.params.cookie_file.clone(),
};
// Get a new core address
let core_address = bitcoind.client.get_new_address(None, None)?;
// Generate 101 blocks and use the above address as coinbase
bitcoind.client.generate_to_address(101, &core_address)?;
println!(">> bitcoind setup complete");
println!(
"Available coins in Core wallet : {}",
bitcoind.client.get_balance(None, None)?
);
// -- Setting up the Wallet
println!("\n>> Setting up BDK wallet");
// Get a random private key
let xprv = generate_random_ext_privkey()?;
// Use the descriptors derived from the private key to
// create a unique wallet name.
// This is a special utility function exposed via `bdk::wallet_name_from_descriptor()`
let wallet_name = wallet_name_from_descriptor(
Bip84(xprv.clone(), KeychainKind::External),
Some(Bip84(xprv.clone(), KeychainKind::Internal)),
Network::Regtest,
&Secp256k1::new(),
)?;
// Create a database (using default sled type) to store wallet data
let mut datadir = PathBuf::from_str("/tmp/")?;
datadir.push(".bdk-example");
let database = sled::open(datadir)?;
let database = database.open_tree(wallet_name.clone())?;
// Create an RPC configuration for the running bitcoind backend we created in the last step
// Note: If you are using a custom regtest node, use the appropriate URL and auth
let rpc_config = RpcConfig {
url: bitcoind.params.rpc_socket.to_string(),
auth: bitcoind_auth,
network: Network::Regtest,
wallet_name,
sync_params: None,
};
// Use the above configuration to create an RPC blockchain backend
let blockchain = RpcBlockchain::from_config(&rpc_config)?;
// Combine Database + Descriptor to create the final wallet
let wallet = Wallet::new(
Bip84(xprv.clone(), KeychainKind::External),
Some(Bip84(xprv.clone(), KeychainKind::Internal)),
Network::Regtest,
database,
)?;
// The `wallet` and the `blockchain` are independent structs.
// The wallet will be used to do all wallet-level actions.
// The blockchain can be used to do all blockchain-level actions.
// For certain actions (like sync) the wallet will ask for a blockchain.
// Sync the wallet
// The first sync is important as this will instantiate the
// wallet files.
wallet.sync(&blockchain, SyncOptions::default())?;
println!(">> BDK wallet setup complete.");
println!(
"Available initial coins in BDK wallet : {} sats",
wallet.get_balance()?
);
// -- Wallet transaction demonstration
println!("\n>> Sending coins: Core --> BDK, 10 BTC");
// Get a new address to receive coins
let bdk_new_addr = wallet.get_address(AddressIndex::New)?.address;
// Send 10 BTC from core wallet to bdk wallet
bitcoind.client.send_to_address(
&bdk_new_addr,
Amount::from_btc(10.0)?,
None,
None,
None,
None,
None,
None,
)?;
// Confirm transaction by generating 1 block
bitcoind.client.generate_to_address(1, &core_address)?;
// Sync the BDK wallet
// This time the sync will fetch the new transaction and update it in
// the wallet database
wallet.sync(&blockchain, SyncOptions::default())?;
println!(">> Received coins in BDK wallet");
println!(
"Available balance in BDK wallet: {} sats",
wallet.get_balance()?
);
println!("\n>> Sending coins: BDK --> Core, 5 BTC");
// Attempt to send back 5.0 BTC to core address by creating a transaction
//
// Transactions are created using a `TxBuilder`.
// This helps us to systematically build a transaction with all
// required customization.
// A full list of APIs offered by `TxBuilder` can be found at
// https://docs.rs/bdk/latest/bdk/wallet/tx_builder/struct.TxBuilder.html
let mut tx_builder = wallet.build_tx();
// For a regular transaction, just set the recipient and amount
tx_builder.set_recipients(vec![(core_address.script_pubkey(), 500_000_000)]);
// Finalize the transaction and extract the PSBT
let (mut psbt, _) = tx_builder.finish()?;
// Set signing option
let signopt = SignOptions {
assume_height: None,
..Default::default()
};
// Sign the psbt
wallet.sign(&mut psbt, signopt)?;
// Extract the signed transaction
let tx = psbt.extract_tx();
// Broadcast the transaction
blockchain.broadcast(&tx)?;
// Confirm transaction by generating some blocks
bitcoind.client.generate_to_address(1, &core_address)?;
// Sync the BDK wallet
wallet.sync(&blockchain, SyncOptions::default())?;
println!(">> Coins sent to Core wallet");
println!(
"Remaining BDK wallet balance: {} sats",
wallet.get_balance()?
);
println!("\nCongrats!! you made your first test transaction with bdk and bitcoin core.");
Ok(())
}
// Helper function demonstrating private key extraction using a bip39 mnemonic.
// The mnemonic can be shown to the user for safekeeping, and the same wallet
// private descriptors can be recreated from it.
fn generate_random_ext_privkey() -> Result<impl DerivableKey<Segwitv0> + Clone, Box<dyn Error>> {
// A bip39 passphrase can be set optionally
let password = Some("random password".to_string());
// Generate a random mnemonic, and use that to create a "DerivableKey"
let mnemonic: GeneratedKey<_, _> = Mnemonic::generate((WordCount::Words12, Language::English))
.map_err(|e| e.expect("Unknown Error"))?;
// `Ok(mnemonic)` would also work if there's no passphrase and it would
// yield the same result as this construct with `password` = `None`.
Ok((mnemonic, password))
}
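// A hypothetical counterpart to `generate_random_ext_privkey` (a sketch, not
// part of this diff): recreate the same derivable key from a mnemonic sentence
// the user saved earlier. Assumes the same `bdk::keys::bip39` imports as the
// example above; the function name is illustrative.
fn recover_ext_privkey(
    words: &str,
    passphrase: Option<String>,
) -> Result<impl DerivableKey<Segwitv0> + Clone, Box<dyn Error>> {
    // Parsing validates the BIP39 checksum of the saved sentence.
    let mnemonic = Mnemonic::parse_in(Language::English, words)?;
    // A `(Mnemonic, Option<String>)` pair derives the same extended key as the
    // original `(mnemonic, password)` construct above, so the same wallet
    // descriptors (and therefore the same addresses) can be recreated.
    Ok((mnemonic, passphrase))
}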

examples/utils/mod.rs Normal file (30 lines)

@@ -0,0 +1,30 @@
pub(crate) mod tx {
use std::str::FromStr;
use bdk::{database::BatchDatabase, SignOptions, Wallet};
use bitcoin::{Address, Transaction};
pub fn build_signed_tx<D: BatchDatabase>(
wallet: &Wallet<D>,
recipient_address: &str,
amount: u64,
) -> Transaction {
// Create a transaction builder
let mut tx_builder = wallet.build_tx();
let to_address = Address::from_str(recipient_address).unwrap();
// Set recipient of the transaction
tx_builder.set_recipients(vec![(to_address.script_pubkey(), amount)]);
// Finalise the transaction and extract PSBT
let (mut psbt, _) = tx_builder.finish().unwrap();
// Sign the above psbt with signing option
wallet.sign(&mut psbt, SignOptions::default()).unwrap();
// Extract the final transaction
psbt.extract_tx()
}
}
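// Hypothetical call site for the helper above (a sketch, not part of this
// diff). It assumes an already-synced `Wallet` and any `Blockchain` backend;
// the function name is illustrative, and the recipient is the faucet return
// address used elsewhere in these examples.
pub(crate) fn send_back_to_faucet<B, D>(
    wallet: &bdk::Wallet<D>,
    blockchain: &B,
) -> Result<bdk::bitcoin::Txid, bdk::Error>
where
    B: bdk::blockchain::Blockchain,
    D: bdk::database::BatchDatabase,
{
    // Build and sign in one step using the helper defined above.
    let signed_tx = tx::build_signed_tx(wallet, "tb1ql7w62elx9ucw4pj5lgw4l028hmuw80sndtntxt", 9_800);
    // Hand the raw transaction to the blockchain backend for broadcast.
    blockchain.broadcast(&signed_tx)?;
    Ok(signed_tx.txid())
}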

macros/Cargo.toml Normal file (24 lines)

@@ -0,0 +1,24 @@
[package]
name = "bdk-macros"
version = "0.6.0"
authors = ["Alekos Filini <alekos.filini@gmail.com>"]
edition = "2018"
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk-macros"
description = "Supporting macros for `bdk`"
keywords = ["bdk"]
license = "MIT OR Apache-2.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
syn = { version = "1.0", features = ["parsing", "full"] }
proc-macro2 = "1.0"
quote = "1.0"
[features]
debug = ["syn/extra-traits"]
[lib]
proc-macro = true

macros/src/lib.rs Normal file (146 lines)

@@ -0,0 +1,146 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
#[macro_use]
extern crate quote;
use proc_macro::TokenStream;
use syn::spanned::Spanned;
use syn::{parse, ImplItemMethod, ItemImpl, ItemTrait, Token};
fn add_async_trait(mut parsed: ItemTrait) -> TokenStream {
let output = quote! {
#[cfg(not(feature = "async-interface"))]
#parsed
};
for mut item in &mut parsed.items {
if let syn::TraitItem::Method(m) = &mut item {
m.sig.asyncness = Some(Token![async](m.span()));
}
}
let output = quote! {
#output
#[cfg(feature = "async-interface")]
#[async_trait(?Send)]
#parsed
};
output.into()
}
fn add_async_method(mut parsed: ImplItemMethod) -> TokenStream {
let output = quote! {
#[cfg(not(feature = "async-interface"))]
#parsed
};
parsed.sig.asyncness = Some(Token![async](parsed.span()));
let output = quote! {
#output
#[cfg(feature = "async-interface")]
#parsed
};
output.into()
}
fn add_async_impl_trait(mut parsed: ItemImpl) -> TokenStream {
let output = quote! {
#[cfg(not(feature = "async-interface"))]
#parsed
};
for mut item in &mut parsed.items {
if let syn::ImplItem::Method(m) = &mut item {
m.sig.asyncness = Some(Token![async](m.span()));
}
}
let output = quote! {
#output
#[cfg(feature = "async-interface")]
#[async_trait(?Send)]
#parsed
};
output.into()
}
/// Makes a method or every method of a trait `async`, if the `async-interface` feature is enabled.
///
/// Requires the `async-trait` crate as a dependency whenever this attribute is used on a trait
/// definition or trait implementation.
#[proc_macro_attribute]
pub fn maybe_async(_attr: TokenStream, item: TokenStream) -> TokenStream {
if let Ok(parsed) = parse(item.clone()) {
add_async_trait(parsed)
} else if let Ok(parsed) = parse(item.clone()) {
add_async_method(parsed)
} else if let Ok(parsed) = parse(item) {
add_async_impl_trait(parsed)
} else {
(quote! {
compile_error!("#[maybe_async] can only be used on methods, trait or trait impl blocks")
})
.into()
}
}
/// Awaits, if the `async-interface` feature is enabled.
#[proc_macro]
pub fn maybe_await(expr: TokenStream) -> TokenStream {
let expr: proc_macro2::TokenStream = expr.into();
let quoted = quote! {
{
#[cfg(not(feature = "async-interface"))]
{
#expr
}
#[cfg(feature = "async-interface")]
{
#expr.await
}
}
};
quoted.into()
}
/// Awaits, if the `async-interface` feature is enabled, uses `tokio::Runtime::block_on()` otherwise
///
/// Requires the `tokio` crate as a dependency with `rt-core` or `rt-threaded` to build.
#[proc_macro]
pub fn await_or_block(expr: TokenStream) -> TokenStream {
let expr: proc_macro2::TokenStream = expr.into();
let quoted = quote! {
{
#[cfg(not(feature = "async-interface"))]
{
tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap().block_on(#expr)
}
#[cfg(feature = "async-interface")]
{
#expr.await
}
}
};
quoted.into()
}
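// Illustrative consumer of the macros above (a sketch, not part of this diff;
// trait and type names are hypothetical). With the `async-interface` feature
// disabled this expands to plain synchronous code; with it enabled, the trait
// methods become `async` (requiring the `async_trait` attribute in scope, as
// documented above) and `maybe_await!` appends `.await` at the call site.
use bdk_macros::{maybe_async, maybe_await};

#[maybe_async]
pub trait HeightSource {
    fn fetch_height(&self) -> Result<u32, std::io::Error>;
}

pub struct Tip<S: HeightSource>(pub S);

impl<S: HeightSource> Tip<S> {
    #[maybe_async]
    pub fn print_height(&self) -> Result<(), std::io::Error> {
        // Sync build: plain call; async build: `fetch_height().await`.
        let height = maybe_await!(self.0.fetch_height())?;
        println!("tip height: {}", height);
        Ok(())
    }
}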


@@ -1,5 +0,0 @@
# Bitcoin Dev Kit Nursery
This is a directory for crates that are experimental and have not been released yet.
Keep in mind that they may never be released.
Things in `/example-crates` may use them to demonstrate how things might look in the future.


@@ -1,11 +0,0 @@
[package]
name = "bdk_coin_select"
version = "0.0.1"
authors = [ "LLFourn <lloyd.fourn@gmail.com>" ]
[dependencies]
bdk_chain = { path = "../../crates/chain" }
[features]
default = ["std"]
std = []


@@ -1,645 +0,0 @@
use super::*;
/// Strategy in which we should branch.
pub enum BranchStrategy {
/// We continue exploring subtrees of this node, starting with the inclusion branch.
Continue,
/// We continue exploring ONLY the omission branch of this node, skipping the inclusion branch.
SkipInclusion,
/// We skip both the inclusion and omission branches of this node.
SkipBoth,
}
impl BranchStrategy {
pub fn will_continue(&self) -> bool {
matches!(self, Self::Continue | Self::SkipInclusion)
}
}
/// Closure to decide the branching strategy, alongside a score (if the current selection is a
/// candidate solution).
pub type DecideStrategy<'c, S> = dyn Fn(&Bnb<'c, S>) -> (BranchStrategy, Option<S>);
/// [`Bnb`] represents the current state of the BnB algorithm.
pub struct Bnb<'c, S> {
pub pool: Vec<(usize, &'c WeightedValue)>,
pub pool_pos: usize,
pub best_score: S,
pub selection: CoinSelector<'c>,
pub rem_abs: u64,
pub rem_eff: i64,
}
impl<'c, S: Ord> Bnb<'c, S> {
/// Creates a new [`Bnb`].
pub fn new(selector: CoinSelector<'c>, pool: Vec<(usize, &'c WeightedValue)>, max: S) -> Self {
let (rem_abs, rem_eff) = pool.iter().fold((0, 0), |(abs, eff), (_, c)| {
(
abs + c.value,
eff + c.effective_value(selector.opts.target_feerate),
)
});
Self {
pool,
pool_pos: 0,
best_score: max,
selection: selector,
rem_abs,
rem_eff,
}
}
/// Turns our [`Bnb`] state into an iterator.
///
/// `strategy` should assess our current selection/node and determine the branching strategy and
/// whether this selection is a candidate solution (if so, return the selection score).
pub fn into_iter<'f>(self, strategy: &'f DecideStrategy<'c, S>) -> BnbIter<'c, 'f, S> {
BnbIter {
state: self,
done: false,
strategy,
}
}
/// Attempts to backtrack to the previously selected node's omission branch, returning false
/// if there are no more solutions.
pub fn backtrack(&mut self) -> bool {
(0..self.pool_pos).rev().any(|pos| {
let (index, candidate) = self.pool[pos];
if self.selection.is_selected(index) {
// deselect the last `pos`, so the next round will check the omission branch
self.pool_pos = pos;
self.selection.deselect(index);
true
} else {
self.rem_abs += candidate.value;
self.rem_eff += candidate.effective_value(self.selection.opts.target_feerate);
false
}
})
}
/// Continue down this branch and skip the inclusion branch if specified.
pub fn forward(&mut self, skip: bool) {
let (index, candidate) = self.pool[self.pool_pos];
self.rem_abs -= candidate.value;
self.rem_eff -= candidate.effective_value(self.selection.opts.target_feerate);
if !skip {
self.selection.select(index);
}
}
/// Compares the advertised score with the current best; the smaller value wins. Returns true
/// if the best is replaced.
pub fn advertise_new_score(&mut self, score: S) -> bool {
if score <= self.best_score {
self.best_score = score;
return true;
}
false
}
}
pub struct BnbIter<'c, 'f, S> {
state: Bnb<'c, S>,
done: bool,
/// Checks our current selection (node) and returns the branching strategy alongside a score
/// (if the current selection is a candidate solution).
strategy: &'f DecideStrategy<'c, S>,
}
impl<'c, 'f, S: Ord + Copy + Display> Iterator for BnbIter<'c, 'f, S> {
type Item = Option<CoinSelector<'c>>;
fn next(&mut self) -> Option<Self::Item> {
if self.done {
return None;
}
let (strategy, score) = (self.strategy)(&self.state);
let mut found_best = Option::<CoinSelector>::None;
if let Some(score) = score {
if self.state.advertise_new_score(score) {
found_best = Some(self.state.selection.clone());
}
}
debug_assert!(
!strategy.will_continue() || self.state.pool_pos < self.state.pool.len(),
"Faulty strategy implementation! Strategy suggested that we continue traversing, however, we have already reached the end of the candidates pool! pool_len={}, pool_pos={}",
self.state.pool.len(), self.state.pool_pos,
);
match strategy {
BranchStrategy::Continue => {
self.state.forward(false);
}
BranchStrategy::SkipInclusion => {
self.state.forward(true);
}
BranchStrategy::SkipBoth => {
if !self.state.backtrack() {
self.done = true;
}
}
};
// increment selection pool position for next round
self.state.pool_pos += 1;
if found_best.is_some() || !self.done {
Some(found_best)
} else {
// we have traversed all branches
None
}
}
}
/// Determines how we should limit rounds of branch and bound.
pub enum BnbLimit {
Rounds(usize),
#[cfg(feature = "std")]
Duration(core::time::Duration),
}
impl From<usize> for BnbLimit {
fn from(v: usize) -> Self {
Self::Rounds(v)
}
}
#[cfg(feature = "std")]
impl From<core::time::Duration> for BnbLimit {
fn from(v: core::time::Duration) -> Self {
Self::Duration(v)
}
}
/// This is a variation of the Branch and Bound Coin Selection algorithm designed by Murch (as seen
/// in Bitcoin Core).
///
/// The differences are as follows:
/// * In addition to working with effective values, we also work with absolute values.
/// This way, we can use bounds of the absolute values to enforce `min_absolute_fee` (which is used by
/// RBF), and `max_extra_target` (which can be used to increase the possible solution set, given
/// that the sender is okay with sending extra to the receiver).
///
/// Murch's Master Thesis: <https://murch.one/wp-content/uploads/2016/11/erhardt2016coinselection.pdf>
/// Bitcoin Core Implementation: <https://github.com/bitcoin/bitcoin/blob/23.x/src/wallet/coinselection.cpp#L65>
///
/// TODO: Another optimization we could do is figure out candidates with the smallest waste, and
/// if we find a result with waste equal to this, we can just break.
pub fn coin_select_bnb<L>(limit: L, selector: CoinSelector) -> Option<CoinSelector>
where
L: Into<BnbLimit>,
{
let opts = selector.opts;
// prepare the pool of candidates to select from:
// * filter out candidates with negative/zero effective values
// * sort candidates by descending effective value
let pool = {
let mut pool = selector
.unselected()
.filter(|(_, c)| c.effective_value(opts.target_feerate) > 0)
.collect::<Vec<_>>();
pool.sort_unstable_by(|(_, a), (_, b)| {
let a = a.effective_value(opts.target_feerate);
let b = b.effective_value(opts.target_feerate);
b.cmp(&a)
});
pool
};
let feerate_decreases = opts.target_feerate > opts.long_term_feerate();
let target_abs = opts.target_value.unwrap_or(0) + opts.min_absolute_fee;
let target_eff = selector.effective_target();
let upper_bound_abs = target_abs + (opts.drain_weight as f32 * opts.target_feerate) as u64;
let upper_bound_eff = target_eff + opts.drain_waste();
let strategy = move |bnb: &Bnb<i64>| -> (BranchStrategy, Option<i64>) {
let selected_abs = bnb.selection.selected_absolute_value();
let selected_eff = bnb.selection.selected_effective_value();
// backtrack if the remaining value is not enough to reach the target
if selected_abs + bnb.rem_abs < target_abs || selected_eff + bnb.rem_eff < target_eff {
return (BranchStrategy::SkipBoth, None);
}
// backtrack if the selected value has already surpassed upper bounds
if selected_abs > upper_bound_abs && selected_eff > upper_bound_eff {
return (BranchStrategy::SkipBoth, None);
}
let selected_waste = bnb.selection.selected_waste();
// when feerate decreases, waste without excess is guaranteed to increase with each
// selection. So if we have already surpassed the best score, we can backtrack.
if feerate_decreases && selected_waste > bnb.best_score {
return (BranchStrategy::SkipBoth, None);
}
// solution?
if selected_abs >= target_abs && selected_eff >= target_eff {
let waste = selected_waste + bnb.selection.current_excess();
return (BranchStrategy::SkipBoth, Some(waste));
}
// early bailout optimization:
// If the candidate at the previous position is NOT selected and has the same weight and
// value as the current candidate, we can skip selecting the current candidate.
if bnb.pool_pos > 0 && !bnb.selection.is_empty() {
let (_, candidate) = bnb.pool[bnb.pool_pos];
let (prev_index, prev_candidate) = bnb.pool[bnb.pool_pos - 1];
if !bnb.selection.is_selected(prev_index)
&& candidate.value == prev_candidate.value
&& candidate.weight == prev_candidate.weight
{
return (BranchStrategy::SkipInclusion, None);
}
}
// check out the inclusion branch first
(BranchStrategy::Continue, None)
};
// determine the sum of absolute and effective values for the current selection
let (selected_abs, selected_eff) = selector.selected().fold((0, 0), |(abs, eff), (_, c)| {
(
abs + c.value,
eff + c.effective_value(selector.opts.target_feerate),
)
});
let bnb = Bnb::new(selector, pool, i64::MAX);
// not enough to select anyway
if selected_abs + bnb.rem_abs < target_abs || selected_eff + bnb.rem_eff < target_eff {
return None;
}
match limit.into() {
BnbLimit::Rounds(rounds) => {
bnb.into_iter(&strategy)
.take(rounds)
.reduce(|b, c| if c.is_some() { c } else { b })
}
#[cfg(feature = "std")]
BnbLimit::Duration(duration) => {
let start = std::time::SystemTime::now();
bnb.into_iter(&strategy)
.take_while(|_| start.elapsed().expect("failed to get system time") <= duration)
.reduce(|b, c| if c.is_some() { c } else { b })
}
}?
}
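// A direct-use sketch of `coin_select_bnb` (illustrative; not part of the
// original file). BnB looks for a changeless solution within the round limit;
// when it finds none, a caller can fall back to a simpler strategy such as
// `select_until_finished`, which adds unselected candidates one by one.
fn select_with_fallback<'a>(
    candidates: &'a Vec<WeightedValue>,
    opts: &'a CoinSelectorOpt,
) -> Result<Selection, SelectionError> {
    let mut selector = CoinSelector::new(candidates, opts);
    match coin_select_bnb(10_000_usize, selector.clone()) {
        // BnB found a selection within 10_000 rounds: finalize it.
        Some(bnb_selector) => bnb_selector.finish(),
        // No changeless solution: greedily select until constraints are met.
        None => selector.select_until_finished(),
    }
}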
#[cfg(all(test, feature = "miniscript"))]
mod test {
use bitcoin::secp256k1::Secp256k1;
use crate::coin_select::{evaluate_cs::evaluate, ExcessStrategyKind};
use super::{
coin_select_bnb,
evaluate_cs::{Evaluation, EvaluationError},
tester::Tester,
CoinSelector, CoinSelectorOpt, Vec, WeightedValue,
};
fn tester() -> Tester {
const DESC_STR: &str = "tr(xprv9uBuvtdjghkz8D1qzsSXS9Vs64mqrUnXqzNccj2xcvnCHPpXKYE1U2Gbh9CDHk8UPyF2VuXpVkDA7fk5ZP4Hd9KnhUmTscKmhee9Dp5sBMK)";
Tester::new(&Secp256k1::default(), DESC_STR)
}
fn evaluate_bnb(
initial_selector: CoinSelector,
max_tries: usize,
) -> Result<Evaluation, EvaluationError> {
evaluate(initial_selector, |cs| {
coin_select_bnb(max_tries, cs.clone()).map_or(false, |new_cs| {
*cs = new_cs;
true
})
})
}
#[test]
fn not_enough_coins() {
let t = tester();
let candidates: Vec<WeightedValue> = vec![
t.gen_candidate(0, 100_000).into(),
t.gen_candidate(1, 100_000).into(),
];
let opts = t.gen_opts(200_000);
let selector = CoinSelector::new(&candidates, &opts);
assert!(coin_select_bnb(10_000, selector).is_none());
}
#[test]
fn exactly_enough_coins_preselected() {
let t = tester();
let candidates: Vec<WeightedValue> = vec![
t.gen_candidate(0, 100_000).into(), // to preselect
t.gen_candidate(1, 100_000).into(), // to preselect
t.gen_candidate(2, 100_000).into(),
];
let opts = CoinSelectorOpt {
target_feerate: 0.0,
..t.gen_opts(200_000)
};
let selector = {
let mut selector = CoinSelector::new(&candidates, &opts);
selector.select(0); // preselect
selector.select(1); // preselect
selector
};
let evaluation = evaluate_bnb(selector, 10_000).expect("eval failed");
println!("{}", evaluation);
assert_eq!(evaluation.solution.selected, (0..=1).collect());
assert_eq!(evaluation.solution.excess_strategies.len(), 1);
assert_eq!(
evaluation.feerate_offset(ExcessStrategyKind::ToFee).floor(),
0.0
);
}
/// `cost_of_change` acts as the upper bound in BnB; we check whether these boundaries are
/// enforced in code
#[test]
fn cost_of_change() {
let t = tester();
let candidates: Vec<WeightedValue> = vec![
t.gen_candidate(0, 200_000).into(),
t.gen_candidate(1, 200_000).into(),
t.gen_candidate(2, 200_000).into(),
];
// lowest and highest possible `recipient_value` opts for derived `drain_waste`, assuming
// that we want 2 candidates selected
let (lowest_opts, highest_opts) = {
let opts = t.gen_opts(0);
let fee_from_inputs =
(candidates[0].weight as f32 * opts.target_feerate).ceil() as u64 * 2;
let fee_from_template =
((opts.base_weight + 2) as f32 * opts.target_feerate).ceil() as u64;
let lowest_opts = CoinSelectorOpt {
target_value: Some(
400_000 - fee_from_inputs - fee_from_template - opts.drain_waste() as u64,
),
..opts
};
let highest_opts = CoinSelectorOpt {
target_value: Some(400_000 - fee_from_inputs - fee_from_template),
..opts
};
(lowest_opts, highest_opts)
};
// test lowest possible target we can select
let lowest_eval = evaluate_bnb(CoinSelector::new(&candidates, &lowest_opts), 10_000);
assert!(lowest_eval.is_ok());
let lowest_eval = lowest_eval.unwrap();
println!("LB {}", lowest_eval);
assert_eq!(lowest_eval.solution.selected.len(), 2);
assert_eq!(lowest_eval.solution.excess_strategies.len(), 1);
assert_eq!(
lowest_eval
.feerate_offset(ExcessStrategyKind::ToFee)
.floor(),
0.0
);
// test the highest possible target we can select
let highest_eval = evaluate_bnb(CoinSelector::new(&candidates, &highest_opts), 10_000);
assert!(highest_eval.is_ok());
let highest_eval = highest_eval.unwrap();
println!("UB {}", highest_eval);
assert_eq!(highest_eval.solution.selected.len(), 2);
assert_eq!(highest_eval.solution.excess_strategies.len(), 1);
assert_eq!(
highest_eval
.feerate_offset(ExcessStrategyKind::ToFee)
.floor(),
0.0
);
// test lower out of bounds
let loob_opts = CoinSelectorOpt {
target_value: lowest_opts.target_value.map(|v| v - 1),
..lowest_opts
};
let loob_eval = evaluate_bnb(CoinSelector::new(&candidates, &loob_opts), 10_000);
assert!(loob_eval.is_err());
println!("Lower OOB: {}", loob_eval.unwrap_err());
// test upper out of bounds
let uoob_opts = CoinSelectorOpt {
target_value: highest_opts.target_value.map(|v| v + 1),
..highest_opts
};
let uoob_eval = evaluate_bnb(CoinSelector::new(&candidates, &uoob_opts), 10_000);
assert!(uoob_eval.is_err());
println!("Upper OOB: {}", uoob_eval.unwrap_err());
}
#[test]
fn try_select() {
let t = tester();
let candidates: Vec<WeightedValue> = vec![
t.gen_candidate(0, 300_000).into(),
t.gen_candidate(1, 300_000).into(),
t.gen_candidate(2, 300_000).into(),
t.gen_candidate(3, 200_000).into(),
t.gen_candidate(4, 200_000).into(),
];
let make_opts = |v: u64| -> CoinSelectorOpt {
CoinSelectorOpt {
target_feerate: 0.0,
..t.gen_opts(v)
}
};
let test_cases = vec![
(make_opts(100_000), false, 0),
(make_opts(200_000), true, 1),
(make_opts(300_000), true, 1),
(make_opts(500_000), true, 2),
(make_opts(1_000_000), true, 4),
(make_opts(1_200_000), false, 0),
(make_opts(1_300_000), true, 5),
(make_opts(1_400_000), false, 0),
];
for (opts, expect_solution, expect_selected) in test_cases {
let res = evaluate_bnb(CoinSelector::new(&candidates, &opts), 10_000);
assert_eq!(res.is_ok(), expect_solution);
match res {
Ok(eval) => {
println!("{}", eval);
assert_eq!(eval.feerate_offset(ExcessStrategyKind::ToFee), 0.0);
assert_eq!(eval.solution.selected.len(), expect_selected as _);
}
Err(err) => println!("expected failure: {}", err),
}
}
}
#[test]
fn early_bailout_optimization() {
let t = tester();
// target: 300_000
// candidates: 2x of 125_000, 1000x of 100_000, 1x of 50_000
// expected solution: 2x 125_000, 1x 50_000
// set bnb max tries: 1100, should succeed
let candidates = {
let mut candidates: Vec<WeightedValue> = vec![
t.gen_candidate(0, 125_000).into(),
t.gen_candidate(1, 125_000).into(),
t.gen_candidate(2, 50_000).into(),
];
(3..3 + 1000_u32)
.for_each(|index| candidates.push(t.gen_candidate(index, 100_000).into()));
candidates
};
let opts = CoinSelectorOpt {
target_feerate: 0.0,
..t.gen_opts(300_000)
};
let result = evaluate_bnb(CoinSelector::new(&candidates, &opts), 1100);
assert!(result.is_ok());
let eval = result.unwrap();
println!("{}", eval);
assert_eq!(eval.solution.selected, (0..=2).collect());
}
#[test]
fn should_exhaust_iteration() {
static MAX_TRIES: usize = 1000;
let t = tester();
let candidates = (0..MAX_TRIES + 1)
.map(|index| t.gen_candidate(index as _, 10_000).into())
.collect::<Vec<WeightedValue>>();
let opts = t.gen_opts(10_001 * MAX_TRIES as u64);
let result = evaluate_bnb(CoinSelector::new(&candidates, &opts), MAX_TRIES);
assert!(result.is_err());
println!("error as expected: {}", result.unwrap_err());
}
/// Solution should have fee >= min_absolute_fee (or no solution at all)
#[test]
fn min_absolute_fee() {
let t = tester();
let candidates = {
let mut candidates = Vec::new();
t.gen_weighted_values(&mut candidates, 5, 10_000);
t.gen_weighted_values(&mut candidates, 5, 20_000);
t.gen_weighted_values(&mut candidates, 5, 30_000);
t.gen_weighted_values(&mut candidates, 10, 10_300);
t.gen_weighted_values(&mut candidates, 10, 10_500);
t.gen_weighted_values(&mut candidates, 10, 10_700);
t.gen_weighted_values(&mut candidates, 10, 10_900);
t.gen_weighted_values(&mut candidates, 10, 11_000);
t.gen_weighted_values(&mut candidates, 10, 12_000);
t.gen_weighted_values(&mut candidates, 10, 13_000);
candidates
};
let mut opts = CoinSelectorOpt {
min_absolute_fee: 1,
..t.gen_opts(100_000)
};
(1..=120_u64).for_each(|fee_factor| {
opts.min_absolute_fee = fee_factor * 31;
let result = evaluate_bnb(CoinSelector::new(&candidates, &opts), 21_000);
match result {
Ok(result) => {
println!("Solution {}", result);
let fee = result.solution.excess_strategies[&ExcessStrategyKind::ToFee].fee;
assert!(fee >= opts.min_absolute_fee);
assert_eq!(result.solution.excess_strategies.len(), 1);
}
Err(err) => {
println!("No Solution: {}", err);
}
}
});
}
/// For a decreasing feerate (long-term feerate is lower than effective feerate), we should
/// select less. For increasing feerate (long-term feerate is higher than effective feerate), we
/// should select more.
#[test]
fn feerate_difference() {
let t = tester();
let candidates = {
let mut candidates = Vec::new();
t.gen_weighted_values(&mut candidates, 10, 2_000);
t.gen_weighted_values(&mut candidates, 10, 5_000);
t.gen_weighted_values(&mut candidates, 10, 20_000);
candidates
};
let decreasing_feerate_opts = CoinSelectorOpt {
target_feerate: 1.25,
long_term_feerate: Some(0.25),
..t.gen_opts(100_000)
};
let increasing_feerate_opts = CoinSelectorOpt {
target_feerate: 0.25,
long_term_feerate: Some(1.25),
..t.gen_opts(100_000)
};
let decreasing_res = evaluate_bnb(
CoinSelector::new(&candidates, &decreasing_feerate_opts),
21_000,
)
.expect("no result");
let decreasing_len = decreasing_res.solution.selected.len();
let increasing_res = evaluate_bnb(
CoinSelector::new(&candidates, &increasing_feerate_opts),
21_000,
)
.expect("no result");
let increasing_len = increasing_res.solution.selected.len();
println!("decreasing_len: {}", decreasing_len);
println!("increasing_len: {}", increasing_len);
assert!(decreasing_len < increasing_len);
}
/// TODO: UNIMPLEMENTED TESTS:
/// * Excess strategies:
/// * We should always have `ExcessStrategy::ToFee`.
/// * We should only have `ExcessStrategy::ToRecipient` when `max_extra_target > 0`.
/// * We should only have `ExcessStrategy::ToDrain` when `drain_value >= min_drain_value`.
/// * Fuzz
/// * Solution feerate should never be lower than target feerate
/// * Solution fee should never be lower than `min_absolute_fee`.
/// * Preselected should always remain selected
fn _todo() {}
}


@@ -1,615 +0,0 @@
use super::*;
/// A [`WeightedValue`] represents an input candidate for [`CoinSelector`]. This can either be a
/// single UTXO, or a group of UTXOs that should be spent together.
#[derive(Debug, Clone, Copy)]
pub struct WeightedValue {
/// Total value of the UTXO(s) that this [`WeightedValue`] represents.
pub value: u64,
/// Total weight of including this/these UTXO(s).
/// `txin` fields: `prevout`, `nSequence`, `scriptSigLen`, `scriptSig`, `scriptWitnessLen`,
/// `scriptWitness` should all be included.
pub weight: u32,
/// The total number of inputs, so we can calculate the extra `varint` weight due to `vin` length changes.
pub input_count: usize,
/// Whether this [`WeightedValue`] contains at least one segwit spend.
pub is_segwit: bool,
}
impl WeightedValue {
/// Create a new [`WeightedValue`] that represents a single input.
///
/// `satisfaction_weight` is the weight of `scriptSigLen + scriptSig + scriptWitnessLen +
/// scriptWitness`.
pub fn new(value: u64, satisfaction_weight: u32, is_segwit: bool) -> WeightedValue {
let weight = TXIN_BASE_WEIGHT + satisfaction_weight;
WeightedValue {
value,
weight,
input_count: 1,
is_segwit,
}
}
/// Effective value of this input candidate: `actual_value - input_weight * feerate (sats/wu)`.
pub fn effective_value(&self, effective_feerate: f32) -> i64 {
// We prefer undershooting the candidate's effective value (so we over-estimate the fee of a
// candidate). If we overshoot the candidate's effective value, it may be possible to find a
// solution which does not meet the target feerate.
self.value as i64 - (self.weight as f32 * effective_feerate).ceil() as i64
}
}
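// Worked example for `effective_value` (illustrative; not part of the original
// file): a segwit input with 112 wu of satisfaction data weighs
// TXIN_BASE_WEIGHT (160) + 112 = 272 wu. At 0.25 sats/wu (i.e. 1 sat/vb) it
// pays ceil(272 * 0.25) = 68 sats of fee, so a 100_000-sat value is worth
// 100_000 - 68 = 99_932 sats effectively.
#[test]
fn effective_value_worked_example() {
    let wv = WeightedValue::new(100_000, 112, true);
    assert_eq!(wv.weight, 272);
    assert_eq!(wv.effective_value(0.25), 99_932);
}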
#[derive(Debug, Clone, Copy)]
pub struct CoinSelectorOpt {
/// The value we need to select.
/// If the value is `None`, then the selection will be complete if it can pay for the drain
/// output and satisfy the other constraints (e.g., minimum fees).
pub target_value: Option<u64>,
/// Additional leeway for the target value.
pub max_extra_target: u64, // TODO: Maybe out of scope here?
/// The feerate we should try and achieve in sats per weight unit.
pub target_feerate: f32,
/// The long-term feerate, used for waste calculations; defaults to `target_feerate` when `None`.
pub long_term_feerate: Option<f32>, // TODO: Maybe out of scope? (waste)
/// The minimum absolute fee. I.e., needed for RBF.
pub min_absolute_fee: u64,
/// The weight of the template transaction, including fixed fields and outputs.
pub base_weight: u32,
/// Additional weight if we include the drain (change) output.
pub drain_weight: u32,
/// Weight of spending the drain (change) output in the future.
pub spend_drain_weight: u32, // TODO: Maybe out of scope? (waste)
/// Minimum value allowed for a drain (change) output.
pub min_drain_value: u64,
}
impl CoinSelectorOpt {
fn from_weights(base_weight: u32, drain_weight: u32, spend_drain_weight: u32) -> Self {
// 0.25 sats/wu == 1 sat/vb
let target_feerate = 0.25_f32;
// set `min_drain_value` to dust limit
let min_drain_value =
3 * ((drain_weight + spend_drain_weight) as f32 * target_feerate) as u64;
Self {
target_value: None,
max_extra_target: 0,
target_feerate,
long_term_feerate: None,
min_absolute_fee: 0,
base_weight,
drain_weight,
spend_drain_weight,
min_drain_value,
}
}
pub fn fund_outputs(
txouts: &[TxOut],
drain_output: &TxOut,
drain_satisfaction_weight: u32,
) -> Self {
let mut tx = Transaction {
input: vec![],
version: 1,
lock_time: LockTime::ZERO.into(),
output: txouts.to_vec(),
};
let base_weight = tx.weight();
// this awkward calculation is necessary since TxOut doesn't have a `.weight()` method
let drain_weight = {
tx.output.push(drain_output.clone());
tx.weight() - base_weight
};
Self {
target_value: if txouts.is_empty() {
None
} else {
Some(txouts.iter().map(|txout| txout.value).sum())
},
..Self::from_weights(
base_weight as u32,
drain_weight as u32,
TXIN_BASE_WEIGHT + drain_satisfaction_weight,
)
}
}
pub fn long_term_feerate(&self) -> f32 {
self.long_term_feerate.unwrap_or(self.target_feerate)
}
pub fn drain_waste(&self) -> i64 {
(self.drain_weight as f32 * self.target_feerate
+ self.spend_drain_weight as f32 * self.long_term_feerate()) as i64
}
}
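// Worked example for `long_term_feerate` and `drain_waste` (illustrative; not
// part of the original file), reusing the weights from the `drain_all` test
// below: creating a 100-wu drain output at 0.25 sats/wu costs 25 sats now, and
// spending the 66-wu input it becomes later at the (defaulted) long-term rate
// costs 16.5 sats, so the waste is (25.0 + 16.5) as i64 = 41 sats.
#[test]
fn drain_waste_worked_example() {
    let opts = CoinSelectorOpt {
        target_value: None,
        max_extra_target: 0,
        target_feerate: 0.25,
        long_term_feerate: None, // falls back to `target_feerate`
        min_absolute_fee: 0,
        base_weight: 10,
        drain_weight: 100,
        spend_drain_weight: 66,
        min_drain_value: 1000,
    };
    assert_eq!(opts.long_term_feerate(), 0.25);
    assert_eq!(opts.drain_waste(), 41);
}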
/// [`CoinSelector`] selects and deselects from a set of candidates.
#[derive(Debug, Clone)]
pub struct CoinSelector<'a> {
pub opts: &'a CoinSelectorOpt,
pub candidates: &'a Vec<WeightedValue>,
selected: BTreeSet<usize>,
}
impl<'a> CoinSelector<'a> {
pub fn candidate(&self, index: usize) -> &WeightedValue {
&self.candidates[index]
}
pub fn new(candidates: &'a Vec<WeightedValue>, opts: &'a CoinSelectorOpt) -> Self {
Self {
candidates,
selected: Default::default(),
opts,
}
}
pub fn select(&mut self, index: usize) -> bool {
assert!(index < self.candidates.len());
self.selected.insert(index)
}
pub fn deselect(&mut self, index: usize) -> bool {
self.selected.remove(&index)
}
pub fn is_selected(&self, index: usize) -> bool {
self.selected.contains(&index)
}
pub fn is_empty(&self) -> bool {
self.selected.is_empty()
}
/// Weight sum of all selected inputs.
pub fn selected_weight(&self) -> u32 {
self.selected
.iter()
.map(|&index| self.candidates[index].weight)
.sum()
}
/// Effective value sum of all selected inputs.
pub fn selected_effective_value(&self) -> i64 {
self.selected
.iter()
.map(|&index| self.candidates[index].effective_value(self.opts.target_feerate))
.sum()
}
/// Absolute value sum of all selected inputs.
pub fn selected_absolute_value(&self) -> u64 {
self.selected
.iter()
.map(|&index| self.candidates[index].value)
.sum()
}
/// Waste sum of all selected inputs.
pub fn selected_waste(&self) -> i64 {
(self.selected_weight() as f32 * (self.opts.target_feerate - self.opts.long_term_feerate()))
as i64
}
/// Current weight of template tx + selected inputs.
pub fn current_weight(&self) -> u32 {
let witness_header_extra_weight = self
.selected()
.find(|(_, wv)| wv.is_segwit)
.map(|_| 2)
.unwrap_or(0);
let vin_count_varint_extra_weight = {
let input_count = self.selected().map(|(_, wv)| wv.input_count).sum::<usize>();
(varint_size(input_count) - 1) * 4
};
self.opts.base_weight
+ self.selected_weight()
+ witness_header_extra_weight
+ vin_count_varint_extra_weight
}
/// Current excess.
pub fn current_excess(&self) -> i64 {
self.selected_effective_value() - self.effective_target()
}
/// The effective target value: the raw `target_value` plus the fee needed to cover the
/// effective base weight (template tx plus worst-case segwit and input-count overhead).
pub fn effective_target(&self) -> i64 {
let (has_segwit, max_input_count) = self
.candidates
.iter()
.fold((false, 0_usize), |(is_segwit, input_count), c| {
(is_segwit || c.is_segwit, input_count + c.input_count)
});
let effective_base_weight = self.opts.base_weight
+ if has_segwit { 2_u32 } else { 0_u32 }
+ (varint_size(max_input_count) - 1) * 4;
self.opts.target_value.unwrap_or(0) as i64
+ (effective_base_weight as f32 * self.opts.target_feerate).ceil() as i64
}
pub fn selected_count(&self) -> usize {
self.selected.len()
}
pub fn selected(&self) -> impl Iterator<Item = (usize, &'a WeightedValue)> + '_ {
self.selected
.iter()
.map(move |&index| (index, &self.candidates[index]))
}
pub fn unselected(&self) -> impl Iterator<Item = (usize, &'a WeightedValue)> + '_ {
self.candidates
.iter()
.enumerate()
.filter(move |(index, _)| !self.selected.contains(index))
}
pub fn selected_indexes(&self) -> impl Iterator<Item = usize> + '_ {
self.selected.iter().cloned()
}
pub fn unselected_indexes(&self) -> impl Iterator<Item = usize> + '_ {
(0..self.candidates.len()).filter(move |index| !self.selected.contains(index))
}
pub fn all_selected(&self) -> bool {
self.selected.len() == self.candidates.len()
}
pub fn select_all(&mut self) {
self.selected = (0..self.candidates.len()).collect();
}
pub fn select_until_finished(&mut self) -> Result<Selection, SelectionError> {
let mut selection = self.finish();
if selection.is_ok() {
return selection;
}
let unselected = self.unselected_indexes().collect::<Vec<_>>();
for index in unselected {
self.select(index);
selection = self.finish();
if selection.is_ok() {
break;
}
}
selection
}
pub fn finish(&self) -> Result<Selection, SelectionError> {
let weight_without_drain = self.current_weight();
let weight_with_drain = weight_without_drain + self.opts.drain_weight;
let fee_without_drain =
(weight_without_drain as f32 * self.opts.target_feerate).ceil() as u64;
let fee_with_drain = (weight_with_drain as f32 * self.opts.target_feerate).ceil() as u64;
let inputs_minus_outputs = {
let target_value = self.opts.target_value.unwrap_or(0);
let selected = self.selected_absolute_value();
// find the largest unsatisfied constraint (if any), and return the error of that constraint
// "selected" should always be greater than or equal to these selected values
[
(
SelectionConstraint::TargetValue,
target_value.saturating_sub(selected),
),
(
SelectionConstraint::TargetFee,
(target_value + fee_without_drain).saturating_sub(selected),
),
(
SelectionConstraint::MinAbsoluteFee,
(target_value + self.opts.min_absolute_fee).saturating_sub(selected),
),
(
SelectionConstraint::MinDrainValue,
// when we have no target value (hence no recipient txouts), we need to ensure
// the selected amount can satisfy requirements for a drain output (so we at least have one txout)
if self.opts.target_value.is_none() {
(fee_with_drain + self.opts.min_drain_value).saturating_sub(selected)
} else {
0
},
),
]
.iter()
.filter(|&(_, v)| v > &0)
.max_by_key(|&(_, v)| v)
.map_or(Ok(()), |(constraint, missing)| {
Err(SelectionError {
selected,
missing: *missing,
constraint: *constraint,
})
})?;
selected - target_value
};
let fee_without_drain = fee_without_drain.max(self.opts.min_absolute_fee);
let fee_with_drain = fee_with_drain.max(self.opts.min_absolute_fee);
let excess_without_drain = inputs_minus_outputs - fee_without_drain;
let input_waste = self.selected_waste();
// begin preparing excess strategies for final selection
let mut excess_strategies = HashMap::new();
// only allow `ToFee` and `ToRecipient` excess strategies when we have a `target_value`;
// otherwise we would end up with a result that has no txouts, or attempt to add value to an
// output that does not exist.
if self.opts.target_value.is_some() {
// no drain, excess to fee
excess_strategies.insert(
ExcessStrategyKind::ToFee,
ExcessStrategy {
recipient_value: self.opts.target_value,
drain_value: None,
fee: fee_without_drain + excess_without_drain,
weight: weight_without_drain,
waste: input_waste + excess_without_drain as i64,
},
);
// no drain, send the excess to the recipient
// if `excess == 0`, this result will be the same as the previous, so don't consider it
// if `max_extra_target == 0`, there is no leeway for this strategy
if excess_without_drain > 0 && self.opts.max_extra_target > 0 {
let extra_recipient_value =
core::cmp::min(self.opts.max_extra_target, excess_without_drain);
let extra_fee = excess_without_drain - extra_recipient_value;
excess_strategies.insert(
ExcessStrategyKind::ToRecipient,
ExcessStrategy {
recipient_value: self.opts.target_value.map(|v| v + extra_recipient_value),
drain_value: None,
fee: fee_without_drain + extra_fee,
weight: weight_without_drain,
waste: input_waste + extra_fee as i64,
},
);
}
}
// with drain
if fee_with_drain >= self.opts.min_absolute_fee
&& inputs_minus_outputs >= fee_with_drain + self.opts.min_drain_value
{
excess_strategies.insert(
ExcessStrategyKind::ToDrain,
ExcessStrategy {
recipient_value: self.opts.target_value,
drain_value: Some(inputs_minus_outputs.saturating_sub(fee_with_drain)),
fee: fee_with_drain,
weight: weight_with_drain,
waste: input_waste + self.opts.drain_waste(),
},
);
}
debug_assert!(
!excess_strategies.is_empty(),
"should have at least one excess strategy."
);
Ok(Selection {
selected: self.selected.clone(),
excess: excess_without_drain,
excess_strategies,
})
}
}
#[derive(Clone, Debug)]
pub struct SelectionError {
selected: u64,
missing: u64,
constraint: SelectionConstraint,
}
impl core::fmt::Display for SelectionError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
let SelectionError {
selected,
missing,
constraint,
} = self;
write!(
f,
"insufficient coins selected; selected={}, missing={}, unsatisfied_constraint={:?}",
selected, missing, constraint
)
}
}
#[cfg(feature = "std")]
impl std::error::Error for SelectionError {}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum SelectionConstraint {
/// The target is not met
TargetValue,
/// The target fee (given the feerate) is not met
TargetFee,
/// Min absolute fee is not met
MinAbsoluteFee,
/// Min drain value is not met
MinDrainValue,
}
impl core::fmt::Display for SelectionConstraint {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
SelectionConstraint::TargetValue => core::write!(f, "target_value"),
SelectionConstraint::TargetFee => core::write!(f, "target_fee"),
SelectionConstraint::MinAbsoluteFee => core::write!(f, "min_absolute_fee"),
SelectionConstraint::MinDrainValue => core::write!(f, "min_drain_value"),
}
}
}
#[derive(Clone, Debug)]
pub struct Selection {
pub selected: BTreeSet<usize>,
pub excess: u64,
pub excess_strategies: HashMap<ExcessStrategyKind, ExcessStrategy>,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, core::hash::Hash)]
pub enum ExcessStrategyKind {
ToFee,
ToRecipient,
ToDrain,
}
#[derive(Clone, Copy, Debug)]
pub struct ExcessStrategy {
pub recipient_value: Option<u64>,
pub drain_value: Option<u64>,
pub fee: u64,
pub weight: u32,
pub waste: i64,
}
impl Selection {
pub fn apply_selection<'a, T>(
&'a self,
candidates: &'a [T],
) -> impl Iterator<Item = &'a T> + 'a {
self.selected.iter().map(move |i| &candidates[*i])
}
/// Returns the [`ExcessStrategy`] that results in the least waste.
pub fn best_strategy(&self) -> (&ExcessStrategyKind, &ExcessStrategy) {
self.excess_strategies
.iter()
.min_by_key(|&(_, a)| a.waste)
.expect("selection has no excess strategy")
}
}
impl core::fmt::Display for ExcessStrategyKind {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
ExcessStrategyKind::ToFee => core::write!(f, "to_fee"),
ExcessStrategyKind::ToRecipient => core::write!(f, "to_recipient"),
ExcessStrategyKind::ToDrain => core::write!(f, "to_drain"),
}
}
}
impl ExcessStrategy {
/// Returns feerate in sats/wu.
pub fn feerate(&self) -> f32 {
self.fee as f32 / self.weight as f32
}
}
#[cfg(test)]
mod test {
use crate::{ExcessStrategyKind, SelectionConstraint};
use super::{CoinSelector, CoinSelectorOpt, WeightedValue};
/// Ensure `target_value` is respected. Can't have any disrespect.
#[test]
fn target_value_respected() {
let target_value = 1000_u64;
let candidates = (500..1500_u64)
.map(|value| WeightedValue {
value,
weight: 100,
input_count: 1,
is_segwit: false,
})
.collect::<super::Vec<_>>();
let opts = CoinSelectorOpt {
target_value: Some(target_value),
max_extra_target: 0,
target_feerate: 0.00,
long_term_feerate: None,
min_absolute_fee: 0,
base_weight: 10,
drain_weight: 10,
spend_drain_weight: 10,
min_drain_value: 10,
};
for (index, v) in candidates.iter().enumerate() {
let mut selector = CoinSelector::new(&candidates, &opts);
assert!(selector.select(index));
let res = selector.finish();
if v.value < opts.target_value.unwrap_or(0) {
let err = res.expect_err("should have failed");
assert_eq!(err.selected, v.value);
assert_eq!(err.missing, target_value - v.value);
assert_eq!(err.constraint, SelectionConstraint::MinAbsoluteFee);
} else {
let sel = res.expect("should have succeeded");
assert_eq!(sel.excess, v.value - opts.target_value.unwrap_or(0));
}
}
}
#[test]
fn drain_all() {
let candidates = (0..100)
.map(|_| WeightedValue {
value: 666,
weight: 166,
input_count: 1,
is_segwit: false,
})
.collect::<super::Vec<_>>();
let opts = CoinSelectorOpt {
target_value: None,
max_extra_target: 0,
target_feerate: 0.25,
long_term_feerate: None,
min_absolute_fee: 0,
base_weight: 10,
drain_weight: 100,
spend_drain_weight: 66,
min_drain_value: 1000,
};
let selection = CoinSelector::new(&candidates, &opts)
.select_until_finished()
.expect("should succeed");
assert!(selection.selected.len() > 1);
assert_eq!(selection.excess_strategies.len(), 1);
let (kind, strategy) = selection.best_strategy();
assert_eq!(*kind, ExcessStrategyKind::ToDrain);
assert!(strategy.recipient_value.is_none());
assert!(strategy.drain_value.is_some());
}
/// TODO: Tests to add:
/// * `finish` should ensure at least `target_value` is selected.
/// * actual feerate should be equal or higher than `target_feerate`.
/// * actual drain value should be equal to or higher than `min_drain_value` (or else no drain).
fn _todo() {}
}


@@ -1,33 +0,0 @@
#![no_std]
#[cfg(feature = "std")]
extern crate std;
#[macro_use]
extern crate alloc;
extern crate bdk_chain;
use alloc::vec::Vec;
use bdk_chain::{
bitcoin,
collections::{BTreeSet, HashMap},
};
use bitcoin::{LockTime, Transaction, TxOut};
use core::fmt::{Debug, Display};
mod coin_selector;
pub use coin_selector::*;
mod bnb;
pub use bnb::*;
/// Txin "base" fields include `outpoint` (32+4) and `nSequence` (4). This does not include
/// `scriptSigLen` or `scriptSig`.
pub const TXIN_BASE_WEIGHT: u32 = (32 + 4 + 4) * 4;
/// Helper to calculate varint size. `v` is the value the varint represents.
// Shamelessly copied from
// https://github.com/rust-bitcoin/rust-miniscript/blob/d5615acda1a7fdc4041a11c1736af139b8c7ebe8/src/util.rs#L8
pub(crate) fn varint_size(v: usize) -> u32 {
bitcoin::VarInt(v as u64).len() as u32
}
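// Quick check of the varint size boundaries (illustrative; not part of the
// original file): one byte covers counts up to 252, three bytes up to 0xFFFF,
// five bytes up to 0xFFFF_FFFF. This is what drives the extra `vin`-count
// weight term `(varint_size(input_count) - 1) * 4` in the coin selector.
#[test]
fn varint_size_boundaries() {
    assert_eq!(varint_size(252), 1);
    assert_eq!(varint_size(253), 3);
    assert_eq!(varint_size(0xFFFF), 3);
    assert_eq!(varint_size(0x1_0000), 5);
}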


@@ -1,13 +0,0 @@
[package]
name = "bdk_tmp_plan"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../../crates/chain", features = ["miniscript"] }
[features]
default = ["std"]
std = []


@@ -1,3 +0,0 @@
# Temporary planning module
A temporary place to hold the planning module until https://github.com/rust-bitcoin/rust-miniscript/pull/481 is merged and released


@@ -1,13 +0,0 @@
[package]
name = "bdk_tmp_plan"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bdk_chain = { path = "../../../crates/chain", version = "0.3.1", features = ["miniscript"] }
[features]
default = ["std"]
std = []


@@ -1,3 +0,0 @@
# Temporary planning module
A temporary place to hold the planning module until https://github.com/rust-bitcoin/rust-miniscript/pull/481 is merged and released


@@ -1,436 +0,0 @@
#![allow(unused)]
#![allow(missing_docs)]
//! A spending plan, or *plan* for short, is a representation of a particular spending path on a
//! descriptor. This allows us to analyze a choice of spending path without producing any
//! signatures or other witness data for it.
//!
//! To make a plan you provide the descriptor with "assets": which keys you are able to use, which
//! hash pre-images you have access to, the current block height, etc.
//!
//! Once you've got a plan it can tell you its expected satisfaction weight, which can be useful
//! for doing coin selection. Furthermore, it tells you which subset of those keys and hash
//! pre-images you will actually need, as well as what locktime or sequence number you need to set.
//!
//! Once you've obtained the signatures, hash pre-images etc. required by the plan, it can create a
//! witness/script_sig for the input.
use bdk_chain::{bitcoin, collections::*, miniscript};
use bitcoin::{
blockdata::{locktime::LockTime, transaction::Sequence},
hashes::{hash160, ripemd160, sha256},
secp256k1::Secp256k1,
util::{
address::WitnessVersion,
bip32::{DerivationPath, Fingerprint, KeySource},
taproot::{LeafVersion, TapBranchHash, TapLeafHash},
},
EcdsaSig, SchnorrSig, Script, TxIn, Witness,
};
use miniscript::{
descriptor::{InnerXKey, Tr},
hash256, DefiniteDescriptorKey, Descriptor, DescriptorPublicKey, ScriptContext, ToPublicKey,
};
pub(crate) fn varint_len(v: usize) -> usize {
bitcoin::VarInt(v as u64).len() as usize
}
mod plan_impls;
mod requirements;
mod template;
pub use requirements::*;
pub use template::PlanKey;
use template::TemplateItem;
#[derive(Clone, Debug)]
enum TrSpend {
KeySpend,
LeafSpend {
script: Script,
leaf_version: LeafVersion,
},
}
#[derive(Clone, Debug)]
enum Target {
Legacy,
Segwitv0 {
script_code: Script,
},
Segwitv1 {
tr: Tr<DefiniteDescriptorKey>,
tr_plan: TrSpend,
},
}
impl Target {}
#[derive(Clone, Debug)]
/// A plan represents a particular spending path for a descriptor.
///
/// See the module level documentation for more info.
pub struct Plan<AK> {
template: Vec<TemplateItem<AK>>,
target: Target,
set_locktime: Option<LockTime>,
set_sequence: Option<Sequence>,
}
impl Default for Target {
fn default() -> Self {
Target::Legacy
}
}
#[derive(Clone, Debug, Default)]
/// Signatures and hash pre-images that can be used to complete a plan.
pub struct SatisfactionMaterial {
/// Schnorr signatures under their keys
pub schnorr_sigs: BTreeMap<DefiniteDescriptorKey, SchnorrSig>,
/// ECDSA signatures under their keys
pub ecdsa_sigs: BTreeMap<DefiniteDescriptorKey, EcdsaSig>,
/// SHA256 pre-images under their images
pub sha256_preimages: BTreeMap<sha256::Hash, Vec<u8>>,
/// hash160 pre-images under their images
pub hash160_preimages: BTreeMap<hash160::Hash, Vec<u8>>,
/// hash256 pre-images under their images
pub hash256_preimages: BTreeMap<hash256::Hash, Vec<u8>>,
/// ripemd160 pre-images under their images
pub ripemd160_preimages: BTreeMap<ripemd160::Hash, Vec<u8>>,
}
impl<Ak> Plan<Ak>
where
Ak: Clone,
{
/// The expected satisfaction weight for the plan if it is completed.
pub fn expected_weight(&self) -> usize {
let script_sig_size = match self.target {
Target::Legacy => unimplemented!(), // self
// .template
// .iter()
// .map(|step| {
// let size = step.expected_size();
// size + push_opcode_size(size)
// })
// .sum()
Target::Segwitv0 { .. } | Target::Segwitv1 { .. } => 1,
};
let witness_elem_sizes: Option<Vec<usize>> = match &self.target {
Target::Legacy => None,
Target::Segwitv0 { .. } => Some(
self.template
.iter()
.map(|step| step.expected_size())
.collect(),
),
Target::Segwitv1 { tr, tr_plan } => {
let mut witness_elems = self
.template
.iter()
.map(|step| step.expected_size())
.collect::<Vec<_>>();
if let TrSpend::LeafSpend {
script,
leaf_version,
} = tr_plan
{
let control_block = tr
.spend_info()
.control_block(&(script.clone(), *leaf_version))
.expect("must exist");
witness_elems.push(script.len());
witness_elems.push(control_block.size());
}
Some(witness_elems)
}
};
let witness_size: usize = match witness_elem_sizes {
Some(elems) => {
varint_len(elems.len())
+ elems
.into_iter()
.map(|elem| varint_len(elem) + elem)
.sum::<usize>()
}
None => 0,
};
script_sig_size * 4 + witness_size
}
pub fn requirements(&self) -> Requirements<Ak> {
match self.try_complete(&SatisfactionMaterial::default()) {
PlanState::Complete { .. } => Requirements::default(),
PlanState::Incomplete(requirements) => requirements,
}
}
pub fn try_complete(&self, auth_data: &SatisfactionMaterial) -> PlanState<Ak> {
let unsatisfied_items = self
.template
.iter()
.filter(|step| match step {
TemplateItem::Sign(key) => {
!auth_data.schnorr_sigs.contains_key(&key.descriptor_key)
}
TemplateItem::Hash160(image) => !auth_data.hash160_preimages.contains_key(image),
TemplateItem::Hash256(image) => !auth_data.hash256_preimages.contains_key(image),
TemplateItem::Sha256(image) => !auth_data.sha256_preimages.contains_key(image),
TemplateItem::Ripemd160(image) => {
!auth_data.ripemd160_preimages.contains_key(image)
}
TemplateItem::Pk { .. } | TemplateItem::One | TemplateItem::Zero => false,
})
.collect::<Vec<_>>();
if unsatisfied_items.is_empty() {
let mut witness = self
.template
.iter()
.flat_map(|step| step.to_witness_stack(&auth_data))
.collect::<Vec<_>>();
match &self.target {
Target::Segwitv0 { .. } => todo!(),
Target::Legacy => todo!(),
Target::Segwitv1 {
tr_plan: TrSpend::KeySpend,
..
} => PlanState::Complete {
final_script_sig: None,
final_script_witness: Some(Witness::from_vec(witness)),
},
Target::Segwitv1 {
tr,
tr_plan:
TrSpend::LeafSpend {
script,
leaf_version,
},
} => {
let spend_info = tr.spend_info();
let control_block = spend_info
.control_block(&(script.clone(), *leaf_version))
.expect("must exist");
witness.push(script.clone().into_bytes());
witness.push(control_block.serialize());
PlanState::Complete {
final_script_sig: None,
final_script_witness: Some(Witness::from_vec(witness)),
}
}
}
} else {
let mut requirements = Requirements::default();
match &self.target {
Target::Legacy => {
todo!()
}
Target::Segwitv0 { .. } => {
todo!()
}
Target::Segwitv1 { tr, tr_plan } => {
let spend_info = tr.spend_info();
match tr_plan {
TrSpend::KeySpend => match &self.template[..] {
[TemplateItem::Sign(ref plan_key)] => {
requirements.signatures = RequiredSignatures::TapKey {
merkle_root: spend_info.merkle_root(),
plan_key: plan_key.clone(),
};
}
_ => unreachable!("tapkey spend will always have only one sign step"),
},
TrSpend::LeafSpend {
script,
leaf_version,
} => {
let leaf_hash = TapLeafHash::from_script(&script, *leaf_version);
requirements.signatures = RequiredSignatures::TapScript {
leaf_hash,
plan_keys: vec![],
}
}
}
}
}
let required_signatures = match requirements.signatures {
RequiredSignatures::Legacy { .. } => todo!(),
RequiredSignatures::Segwitv0 { .. } => todo!(),
RequiredSignatures::TapKey { .. } => return PlanState::Incomplete(requirements),
RequiredSignatures::TapScript {
plan_keys: ref mut keys,
..
} => keys,
};
for step in unsatisfied_items {
match step {
TemplateItem::Sign(plan_key) => {
required_signatures.push(plan_key.clone());
}
TemplateItem::Hash160(image) => {
requirements.hash160_images.insert(image.clone());
}
TemplateItem::Hash256(image) => {
requirements.hash256_images.insert(image.clone());
}
TemplateItem::Sha256(image) => {
requirements.sha256_images.insert(image.clone());
}
TemplateItem::Ripemd160(image) => {
requirements.ripemd160_images.insert(image.clone());
}
TemplateItem::Pk { .. } | TemplateItem::One | TemplateItem::Zero => { /* no requirements */
}
}
}
PlanState::Incomplete(requirements)
}
}
/// Witness version for the plan
pub fn witness_version(&self) -> Option<WitnessVersion> {
match self.target {
Target::Legacy => None,
Target::Segwitv0 { .. } => Some(WitnessVersion::V0),
Target::Segwitv1 { .. } => Some(WitnessVersion::V1),
}
}
/// The minimum required locktime height or time on the transaction using the plan.
pub fn required_locktime(&self) -> Option<LockTime> {
self.set_locktime.clone()
}
/// The minimum required sequence (height or time) on the input to satisfy the plan
pub fn required_sequence(&self) -> Option<Sequence> {
self.set_sequence.clone()
}
/// The minimum transaction version required on the transaction using the plan.
pub fn min_version(&self) -> Option<u32> {
if self.set_sequence.is_some() {
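// BIP68 relative lock-times are only consensus-enforced for transaction version >= 2.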
Some(2)
} else {
Some(1)
}
}
}
/// The returned value from [`Plan::try_complete`].
pub enum PlanState<Ak> {
/// The plan is complete
Complete {
/// The script sig that should be set on the input
final_script_sig: Option<Script>,
/// The witness that should be set on the input
final_script_witness: Option<Witness>,
},
Incomplete(Requirements<Ak>),
}
#[derive(Clone, Debug)]
pub struct Assets<K> {
pub keys: Vec<K>,
pub txo_age: Option<Sequence>,
pub max_locktime: Option<LockTime>,
pub sha256: Vec<sha256::Hash>,
pub hash256: Vec<hash256::Hash>,
pub ripemd160: Vec<ripemd160::Hash>,
pub hash160: Vec<hash160::Hash>,
}
impl<K> Default for Assets<K> {
fn default() -> Self {
Self {
keys: Default::default(),
txo_age: Default::default(),
max_locktime: Default::default(),
sha256: Default::default(),
hash256: Default::default(),
ripemd160: Default::default(),
hash160: Default::default(),
}
}
}
pub trait CanDerive {
fn can_derive(&self, key: &DefiniteDescriptorKey) -> Option<DerivationPath>;
}
impl CanDerive for KeySource {
fn can_derive(&self, key: &DefiniteDescriptorKey) -> Option<DerivationPath> {
match DescriptorPublicKey::from(key.clone()) {
DescriptorPublicKey::Single(single_pub) => {
path_to_child(self, single_pub.origin.as_ref()?, None)
}
DescriptorPublicKey::XPub(dxk) => {
let origin = dxk.origin.clone().unwrap_or_else(|| {
let secp = Secp256k1::signing_only();
(dxk.xkey.xkey_fingerprint(&secp), DerivationPath::master())
});
path_to_child(self, &origin, Some(&dxk.derivation_path))
}
}
}
}
impl CanDerive for DescriptorPublicKey {
fn can_derive(&self, key: &DefiniteDescriptorKey) -> Option<DerivationPath> {
match (self, DescriptorPublicKey::from(key.clone())) {
(parent, child) if parent == &child => Some(DerivationPath::master()),
(DescriptorPublicKey::XPub(parent), _) => {
let origin = parent.origin.clone().unwrap_or_else(|| {
let secp = Secp256k1::signing_only();
(
parent.xkey.xkey_fingerprint(&secp),
DerivationPath::master(),
)
});
KeySource::from(origin).can_derive(key)
}
_ => None,
}
}
}
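// Returns the derivation path that takes `parent` to the child key. For example, given
// matching fingerprints, a parent path of m/86'/0' and a child origin of m/86'/0'/0/5
// leave a remaining path of 0/5 (extended further by `child_derivation` when present).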
fn path_to_child(
parent: &KeySource,
child_origin: &(Fingerprint, DerivationPath),
child_derivation: Option<&DerivationPath>,
) -> Option<DerivationPath> {
if parent.0 == child_origin.0 {
let mut remaining_derivation =
DerivationPath::from(child_origin.1[..].strip_prefix(&parent.1[..])?);
remaining_derivation =
remaining_derivation.extend(child_derivation.unwrap_or(&DerivationPath::master()));
Some(remaining_derivation)
} else {
None
}
}
pub fn plan_satisfaction<Ak>(
desc: &Descriptor<DefiniteDescriptorKey>,
assets: &Assets<Ak>,
) -> Option<Plan<Ak>>
where
Ak: CanDerive + Clone,
{
match desc {
Descriptor::Bare(_) => todo!(),
Descriptor::Pkh(_) => todo!(),
Descriptor::Wpkh(_) => todo!(),
Descriptor::Sh(_) => todo!(),
Descriptor::Wsh(_) => todo!(),
Descriptor::Tr(tr) => crate::plan_impls::plan_satisfaction_tr(tr, assets),
}
}
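A minimal sketch of how this entry point is driven, assuming a `tr()` descriptor already parsed into `descriptor` and a wallet key available as a `DescriptorPublicKey` (`my_key`); both bindings are illustrative:

let assets = Assets {
    keys: vec![my_key], // the keys we are able to sign with
    ..Default::default()
};
let plan = plan_satisfaction(&descriptor, &assets).expect("taproot spend should be plannable");
assert_eq!(plan.witness_version(), Some(WitnessVersion::V1));
let weight = plan.expected_weight(); // usable for coin selection before any signing

The plan also records any locktime or sequence constraints the spending transaction must set, via `required_locktime` and `required_sequence`.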

View File

@@ -1,323 +0,0 @@
use bdk_chain::{bitcoin, miniscript};
use bitcoin::locktime::{Height, Time};
use miniscript::Terminal;
use super::*;
impl<Ak> TermPlan<Ak> {
fn combine(self, other: Self) -> Option<Self> {
let min_locktime = {
match (self.min_locktime, other.min_locktime) {
(Some(lhs), Some(rhs)) => {
if lhs.is_same_unit(rhs) {
Some(if lhs.to_consensus_u32() > rhs.to_consensus_u32() {
lhs
} else {
rhs
})
} else {
return None;
}
}
_ => self.min_locktime.or(other.min_locktime),
}
};
let min_sequence = {
match (self.min_sequence, other.min_sequence) {
(Some(lhs), Some(rhs)) => {
if lhs.is_height_locked() == rhs.is_height_locked() {
Some(if lhs.to_consensus_u32() > rhs.to_consensus_u32() {
lhs
} else {
rhs
})
} else {
return None;
}
}
_ => self.min_sequence.or(other.min_sequence),
}
};
let mut template = self.template;
template.extend(other.template);
Some(Self {
min_locktime,
min_sequence,
template,
})
}
pub(crate) fn expected_size(&self) -> usize {
self.template.iter().map(|step| step.expected_size()).sum()
}
}
// impl crate::descriptor::Pkh<DefiniteDescriptorKey> {
// pub(crate) fn plan_satisfaction<Ak>(&self, assets: &Assets<Ak>) -> Option<Plan<Ak>>
// where
// Ak: CanDerive + Clone,
// {
// let (asset_key, derivation_hint) = assets.keys.iter().find_map(|asset_key| {
// let derivation_hint = asset_key.can_derive(self.as_inner())?;
// Some((asset_key, derivation_hint))
// })?;
// Some(Plan {
// template: vec![TemplateItem::Sign(PlanKey {
// asset_key: asset_key.clone(),
// descriptor_key: self.as_inner().clone(),
// derivation_hint,
// })],
// target: Target::Legacy,
// set_locktime: None,
// set_sequence: None,
// })
// }
// }
// impl crate::descriptor::Wpkh<DefiniteDescriptorKey> {
// pub(crate) fn plan_satisfaction<Ak>(&self, assets: &Assets<Ak>) -> Option<Plan<Ak>>
// where
// Ak: CanDerive + Clone,
// {
// let (asset_key, derivation_hint) = assets.keys.iter().find_map(|asset_key| {
// let derivation_hint = asset_key.can_derive(self.as_inner())?;
// Some((asset_key, derivation_hint))
// })?;
// Some(Plan {
// template: vec![TemplateItem::Sign(PlanKey {
// asset_key: asset_key.clone(),
// descriptor_key: self.as_inner().clone(),
// derivation_hint,
// })],
// target: Target::Segwitv0,
// set_locktime: None,
// set_sequence: None,
// })
// }
// }
pub(crate) fn plan_satisfaction_tr<Ak>(
tr: &miniscript::descriptor::Tr<DefiniteDescriptorKey>,
assets: &Assets<Ak>,
) -> Option<Plan<Ak>>
where
Ak: CanDerive + Clone,
{
let key_path_spend = assets.keys.iter().find_map(|asset_key| {
let derivation_hint = asset_key.can_derive(tr.internal_key())?;
Some((asset_key, derivation_hint))
});
if let Some((asset_key, derivation_hint)) = key_path_spend {
return Some(Plan {
template: vec![TemplateItem::Sign(PlanKey {
asset_key: asset_key.clone(),
descriptor_key: tr.internal_key().clone(),
derivation_hint,
})],
target: Target::Segwitv1 {
tr: tr.clone(),
tr_plan: TrSpend::KeySpend,
},
set_locktime: None,
set_sequence: None,
});
}
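// No key-path spend is available: plan every tap leaf script below and keep the
// cheapest satisfiable one.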
let mut plans = tr
.iter_scripts()
.filter_map(|(_, ms)| Some((ms, (plan_steps(&ms.node, assets)?))))
.collect::<Vec<_>>();
plans.sort_by_cached_key(|(_, plan)| plan.expected_size());
let (script, best_plan) = plans.into_iter().next()?;
Some(Plan {
target: Target::Segwitv1 {
tr: tr.clone(),
tr_plan: TrSpend::LeafSpend {
script: script.encode(),
leaf_version: LeafVersion::TapScript,
},
},
set_locktime: best_plan.min_locktime.clone(),
set_sequence: best_plan.min_sequence.clone(),
template: best_plan.template,
})
}
#[derive(Debug)]
struct TermPlan<Ak> {
pub min_locktime: Option<LockTime>,
pub min_sequence: Option<Sequence>,
pub template: Vec<TemplateItem<Ak>>,
}
impl<Ak> TermPlan<Ak> {
fn new(template: Vec<TemplateItem<Ak>>) -> Self {
TermPlan {
template,
..Default::default()
}
}
}
impl<Ak> Default for TermPlan<Ak> {
fn default() -> Self {
Self {
min_locktime: Default::default(),
min_sequence: Default::default(),
template: Default::default(),
}
}
}
fn plan_steps<Ak: Clone + CanDerive, Ctx: ScriptContext>(
term: &Terminal<DefiniteDescriptorKey, Ctx>,
assets: &Assets<Ak>,
) -> Option<TermPlan<Ak>> {
match term {
Terminal::True => Some(TermPlan::new(vec![])),
Terminal::False => return None,
Terminal::PkH(key) => {
let (asset_key, derivation_hint) = assets
.keys
.iter()
.find_map(|asset_key| Some((asset_key, asset_key.can_derive(key)?)))?;
Some(TermPlan::new(vec![
TemplateItem::Sign(PlanKey {
asset_key: asset_key.clone(),
derivation_hint,
descriptor_key: key.clone(),
}),
TemplateItem::Pk { key: key.clone() },
]))
}
Terminal::PkK(key) => {
let (asset_key, derivation_hint) = assets
.keys
.iter()
.find_map(|asset_key| Some((asset_key, asset_key.can_derive(key)?)))?;
Some(TermPlan::new(vec![TemplateItem::Sign(PlanKey {
asset_key: asset_key.clone(),
derivation_hint,
descriptor_key: key.clone(),
})]))
}
Terminal::RawPkH(_pk_hash) => {
/* TODO */
None
}
Terminal::After(locktime) => {
let max_locktime = assets.max_locktime?;
let locktime = LockTime::from(locktime);
let (height, time) = match max_locktime {
LockTime::Blocks(height) => (height, Time::from_consensus(0).unwrap()),
LockTime::Seconds(seconds) => (Height::from_consensus(0).unwrap(), seconds),
};
if max_locktime.is_satisfied_by(height, time) {
Some(TermPlan {
min_locktime: Some(locktime),
..Default::default()
})
} else {
None
}
}
Terminal::Older(older) => {
// FIXME: older should be a height or time not a sequence.
let max_sequence = assets.txo_age?;
//TODO: this whole thing is probably wrong but upstream should provide a way of
// doing it properly.
if max_sequence.is_height_locked() == older.is_height_locked() {
if max_sequence.to_consensus_u32() >= older.to_consensus_u32() {
Some(TermPlan {
min_sequence: Some(*older),
..Default::default()
})
} else {
None
}
} else {
None
}
}
Terminal::Sha256(image) => {
if assets.sha256.contains(&image) {
Some(TermPlan::new(vec![TemplateItem::Sha256(image.clone())]))
} else {
None
}
}
Terminal::Hash256(image) => {
if assets.hash256.contains(image) {
Some(TermPlan::new(vec![TemplateItem::Hash256(image.clone())]))
} else {
None
}
}
Terminal::Ripemd160(image) => {
if assets.ripemd160.contains(&image) {
Some(TermPlan::new(vec![TemplateItem::Ripemd160(image.clone())]))
} else {
None
}
}
Terminal::Hash160(image) => {
if assets.hash160.contains(&image) {
Some(TermPlan::new(vec![TemplateItem::Hash160(image.clone())]))
} else {
None
}
}
Terminal::Alt(ms)
| Terminal::Swap(ms)
| Terminal::Check(ms)
| Terminal::Verify(ms)
| Terminal::NonZero(ms)
| Terminal::ZeroNotEqual(ms) => plan_steps(&ms.node, assets),
Terminal::DupIf(ms) => {
let mut plan = plan_steps(&ms.node, assets)?;
plan.template.push(TemplateItem::One);
Some(plan)
}
Terminal::AndV(l, r) | Terminal::AndB(l, r) => {
let lhs = plan_steps(&l.node, assets)?;
let rhs = plan_steps(&r.node, assets)?;
lhs.combine(rhs)
}
Terminal::AndOr(_, _, _) => todo!(),
Terminal::OrB(_, _) => todo!(),
Terminal::OrD(_, _) => todo!(),
Terminal::OrC(_, _) => todo!(),
Terminal::OrI(lhs, rhs) => {
let lplan = plan_steps(&lhs.node, assets).map(|mut plan| {
plan.template.push(TemplateItem::One);
plan
});
let rplan = plan_steps(&rhs.node, assets).map(|mut plan| {
plan.template.push(TemplateItem::Zero);
plan
});
match (lplan, rplan) {
(Some(lplan), Some(rplan)) => {
if lplan.expected_size() <= rplan.expected_size() {
Some(lplan)
} else {
Some(rplan)
}
}
(lplan, rplan) => lplan.or(rplan),
}
}
Terminal::Thresh(_, _) => todo!(),
Terminal::Multi(_, _) => todo!(),
Terminal::MultiA(_, _) => todo!(),
}
}
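As a concrete trace of the recursion (assuming the assets hold a key for `A` and a `txo_age` of at least 144 blocks): planning `and_v(v:pk(A),older(144))` descends through `AndV` into both children; the `pk` branch yields a single `Sign` template step, the `older` branch yields `min_sequence = Some(144)` with an empty template, and `combine` merges the two into one `TermPlan`.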

View File

@@ -1,218 +0,0 @@
use bdk_chain::{bitcoin, collections::*, miniscript};
use core::ops::Deref;
use bitcoin::{
hashes::{hash160, ripemd160, sha256},
psbt::Prevouts,
secp256k1::{KeyPair, Message, PublicKey, Signing, Verification},
util::{bip32, sighash, sighash::SighashCache, taproot},
EcdsaSighashType, SchnorrSighashType, Transaction, TxOut, XOnlyPublicKey,
};
use super::*;
use miniscript::{
descriptor::{DescriptorSecretKey, KeyMap},
hash256,
};
#[derive(Clone, Debug)]
/// Signatures and hash pre-images that must be provided to complete the plan.
pub struct Requirements<Ak> {
/// required signatures
pub signatures: RequiredSignatures<Ak>,
/// required sha256 pre-images
pub sha256_images: HashSet<sha256::Hash>,
/// required hash160 pre-images
pub hash160_images: HashSet<hash160::Hash>,
/// required hash256 pre-images
pub hash256_images: HashSet<hash256::Hash>,
/// required ripemd160 pre-images
pub ripemd160_images: HashSet<ripemd160::Hash>,
}
impl<Ak> Default for RequiredSignatures<Ak> {
fn default() -> Self {
RequiredSignatures::Legacy {
keys: Default::default(),
}
}
}
impl<Ak> Default for Requirements<Ak> {
fn default() -> Self {
Self {
signatures: Default::default(),
sha256_images: Default::default(),
hash160_images: Default::default(),
hash256_images: Default::default(),
ripemd160_images: Default::default(),
}
}
}
impl<Ak> Requirements<Ak> {
/// Whether any hash pre-images are required in the plan
pub fn requires_hash_preimages(&self) -> bool {
!(self.sha256_images.is_empty()
&& self.hash160_images.is_empty()
&& self.hash256_images.is_empty()
&& self.ripemd160_images.is_empty())
}
}
/// The signatures required to complete the plan
#[derive(Clone, Debug)]
pub enum RequiredSignatures<Ak> {
/// Legacy ECDSA signatures are required
Legacy { keys: Vec<PlanKey<Ak>> },
/// Segwitv0 ECDSA signatures are required
Segwitv0 { keys: Vec<PlanKey<Ak>> },
/// A Taproot key spend signature is required
TapKey {
/// The internal key
plan_key: PlanKey<Ak>,
/// The merkle root of the taproot output
merkle_root: Option<TapBranchHash>,
},
/// Taproot script path signatures are required
TapScript {
/// The leaf hash of the script being used
leaf_hash: TapLeafHash,
/// The keys in the script that require signatures
plan_keys: Vec<PlanKey<Ak>>,
},
}
#[derive(Clone, Debug)]
pub enum SigningError {
SigHashError(sighash::Error),
DerivationError(bip32::Error),
}
impl From<sighash::Error> for SigningError {
fn from(e: sighash::Error) -> Self {
Self::SigHashError(e)
}
}
impl core::fmt::Display for SigningError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
SigningError::SigHashError(e) => e.fmt(f),
SigningError::DerivationError(e) => e.fmt(f),
}
}
}
impl From<bip32::Error> for SigningError {
fn from(e: bip32::Error) -> Self {
Self::DerivationError(e)
}
}
#[cfg(feature = "std")]
impl std::error::Error for SigningError {}
impl RequiredSignatures<DescriptorPublicKey> {
pub fn sign_with_keymap<T: Deref<Target = Transaction>>(
&self,
input_index: usize,
keymap: &KeyMap,
prevouts: &Prevouts<'_, impl core::borrow::Borrow<TxOut>>,
schnorr_sighashty: Option<SchnorrSighashType>,
_ecdsa_sighashty: Option<EcdsaSighashType>,
sighash_cache: &mut SighashCache<T>,
auth_data: &mut SatisfactionMaterial,
secp: &Secp256k1<impl Signing + Verification>,
) -> Result<bool, SigningError> {
match self {
RequiredSignatures::Legacy { .. } | RequiredSignatures::Segwitv0 { .. } => todo!(),
RequiredSignatures::TapKey {
plan_key,
merkle_root,
} => {
let schnorr_sighashty = schnorr_sighashty.unwrap_or(SchnorrSighashType::Default);
let sighash = sighash_cache.taproot_key_spend_signature_hash(
input_index,
prevouts,
schnorr_sighashty,
)?;
let secret_key = match keymap.get(&plan_key.asset_key) {
Some(secret_key) => secret_key,
None => return Ok(false),
};
let secret_key = match secret_key {
DescriptorSecretKey::Single(single) => single.key.inner,
DescriptorSecretKey::XPrv(xprv) => {
xprv.xkey
.derive_priv(&secp, &plan_key.derivation_hint)?
.private_key
}
};
let pubkey = PublicKey::from_secret_key(&secp, &secret_key);
let x_only_pubkey = XOnlyPublicKey::from(pubkey);
let tweak =
taproot::TapTweakHash::from_key_and_tweak(x_only_pubkey, merkle_root.clone());
let keypair = KeyPair::from_secret_key(&secp, &secret_key.clone())
.add_xonly_tweak(&secp, &tweak.to_scalar())
.unwrap();
let msg = Message::from_slice(sighash.as_ref()).expect("Sighashes are 32 bytes");
let sig = secp.sign_schnorr_no_aux_rand(&msg, &keypair);
let bitcoin_sig = SchnorrSig {
sig,
hash_ty: schnorr_sighashty,
};
auth_data
.schnorr_sigs
.insert(plan_key.descriptor_key.clone(), bitcoin_sig);
Ok(true)
}
RequiredSignatures::TapScript {
leaf_hash,
plan_keys,
} => {
let sighash_type = schnorr_sighashty.unwrap_or(SchnorrSighashType::Default);
let sighash = sighash_cache.taproot_script_spend_signature_hash(
input_index,
prevouts,
*leaf_hash,
sighash_type,
)?;
let mut modified = false;
for plan_key in plan_keys {
if let Some(secret_key) = keymap.get(&plan_key.asset_key) {
let secret_key = match secret_key {
DescriptorSecretKey::Single(single) => single.key.inner,
DescriptorSecretKey::XPrv(xprv) => {
xprv.xkey
.derive_priv(&secp, &plan_key.derivation_hint)?
.private_key
}
};
let keypair = KeyPair::from_secret_key(&secp, &secret_key.clone());
let msg =
Message::from_slice(sighash.as_ref()).expect("Sighashes are 32 bytes");
let sig = secp.sign_schnorr_no_aux_rand(&msg, &keypair);
let bitcoin_sig = SchnorrSig {
sig,
hash_ty: sighash_type,
};
auth_data
.schnorr_sigs
.insert(plan_key.descriptor_key.clone(), bitcoin_sig);
modified = true;
}
}
Ok(modified)
}
}
}
}
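A minimal sketch of driving this signer, assuming a transaction `tx` spending a single tracked `prevout`, a `keymap` obtained from descriptor parsing, a `plan` for input 0, and a `secp` context are all in scope (every binding here is illustrative):

let mut sighash_cache = SighashCache::new(&tx);
let mut material = SatisfactionMaterial::default();
let requirements = plan.requirements();
let added = requirements.signatures.sign_with_keymap(
    0, // index of the input being satisfied
    &keymap,
    &Prevouts::All(&[prevout]),
    None, // defaults to SchnorrSighashType::Default
    None, // ECDSA sighash type, unused for taproot spends
    &mut sighash_cache,
    &mut material,
    &secp,
)?;
if added {
    // the collected signatures can now be fed back to complete the plan
    let state = plan.try_complete(&material);
}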

View File

@@ -1,76 +0,0 @@
use bdk_chain::{bitcoin, miniscript};
use bitcoin::{
hashes::{hash160, ripemd160, sha256},
util::bip32::DerivationPath,
};
use super::*;
use crate::{hash256, varint_len, DefiniteDescriptorKey};
#[derive(Clone, Debug)]
pub(crate) enum TemplateItem<Ak> {
Sign(PlanKey<Ak>),
Pk { key: DefiniteDescriptorKey },
One,
Zero,
Sha256(sha256::Hash),
Hash256(hash256::Hash),
Ripemd160(ripemd160::Hash),
Hash160(hash160::Hash),
}
/// A plan key contains the asset key originally provided, the key in the descriptor it
/// purports to be able to derive, and a "hint" on how to derive it.
#[derive(Clone, Debug)]
pub struct PlanKey<Ak> {
/// The key the planner will sign with
pub asset_key: Ak,
/// A hint for how to get from the asset key to the concrete key we need to sign with.
pub derivation_hint: DerivationPath,
/// The key that was in the descriptor that we are satisfying with the signature from the asset
/// key.
pub descriptor_key: DefiniteDescriptorKey,
}
impl<Ak> TemplateItem<Ak> {
pub fn expected_size(&self) -> usize {
match self {
TemplateItem::Sign { .. } => 64, /* size of sig; TODO: take the sighash flag into consideration */
TemplateItem::Pk { .. } => 32,
TemplateItem::One => varint_len(1),
TemplateItem::Zero => 0, /* zero means an empty witness element */
// I'm not sure if it should be 32 here (it's a 20-byte hash) but that's what other
// parts of the code were doing.
TemplateItem::Hash160(_) | TemplateItem::Ripemd160(_) => 32,
TemplateItem::Sha256(_) | TemplateItem::Hash256(_) => 32,
}
}
// this can only be called if we are sure that auth_data has what we need
pub(super) fn to_witness_stack(&self, auth_data: &SatisfactionMaterial) -> Vec<Vec<u8>> {
match self {
TemplateItem::Sign(plan_key) => {
vec![auth_data
.schnorr_sigs
.get(&plan_key.descriptor_key)
.unwrap()
.to_vec()]
}
TemplateItem::One => vec![vec![1]],
TemplateItem::Zero => vec![vec![]],
TemplateItem::Sha256(image) => {
vec![auth_data.sha256_preimages.get(image).unwrap().to_vec()]
}
TemplateItem::Hash160(image) => {
vec![auth_data.hash160_preimages.get(image).unwrap().to_vec()]
}
TemplateItem::Ripemd160(image) => {
vec![auth_data.ripemd160_preimages.get(image).unwrap().to_vec()]
}
TemplateItem::Hash256(image) => {
vec![auth_data.hash256_preimages.get(image).unwrap().to_vec()]
}
TemplateItem::Pk { key } => vec![key.to_public_key().to_bytes()],
}
}
}


248
src/blockchain/any.rs Normal file
View File

@@ -0,0 +1,248 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Runtime-checked blockchain types
//!
//! This module provides the implementation of [`AnyBlockchain`] which allows switching the
//! inner [`Blockchain`] type at runtime.
//!
//! ## Example
//!
//! When paired with [`ConfigurableBlockchain`], it allows creating any supported
//! blockchain type using a single line of code:
//!
//! ```no_run
//! # use bitcoin::Network;
//! # use bdk::blockchain::*;
//! # #[cfg(all(feature = "esplora", feature = "ureq"))]
//! # {
//! let config = serde_json::from_str("...")?;
//! let blockchain = AnyBlockchain::from_config(&config)?;
//! let height = blockchain.get_height();
//! # }
//! # Ok::<(), bdk::Error>(())
//! ```
use super::*;
macro_rules! impl_from {
( boxed $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
$( $cfg )*
impl From<$from> for $to {
fn from(inner: $from) -> Self {
<$to>::$variant(Box::new(inner))
}
}
};
( $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
$( $cfg )*
impl From<$from> for $to {
fn from(inner: $from) -> Self {
<$to>::$variant(inner)
}
}
};
}
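// For example, `impl_from!(boxed electrum::ElectrumBlockchain, AnyBlockchain, Electrum, #[cfg(feature = "electrum")])`
// expands to a cfg-gated `impl From<electrum::ElectrumBlockchain> for AnyBlockchain`
// returning `AnyBlockchain::Electrum(Box::new(inner))`.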
macro_rules! impl_inner_method {
( $self:expr, $name:ident $(, $args:expr)* ) => {
match $self {
#[cfg(feature = "electrum")]
AnyBlockchain::Electrum(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "esplora")]
AnyBlockchain::Esplora(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "compact_filters")]
AnyBlockchain::CompactFilters(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "rpc")]
AnyBlockchain::Rpc(inner) => inner.$name( $($args, )* ),
}
}
}
/// Type that can contain any of the [`Blockchain`] types defined by the library
///
/// It allows switching backend at runtime
///
/// See [this module](crate::blockchain::any)'s documentation for a usage example.
pub enum AnyBlockchain {
#[cfg(feature = "electrum")]
#[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
/// Electrum client
Electrum(Box<electrum::ElectrumBlockchain>),
#[cfg(feature = "esplora")]
#[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
/// Esplora client
Esplora(Box<esplora::EsploraBlockchain>),
#[cfg(feature = "compact_filters")]
#[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
/// Compact filters client
CompactFilters(Box<compact_filters::CompactFiltersBlockchain>),
#[cfg(feature = "rpc")]
#[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
/// RPC client
Rpc(Box<rpc::RpcBlockchain>),
}
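// `#[maybe_async]` and `maybe_await!` expand to either blocking or async code,
// depending on whether the crate's `async-interface` feature is enabled.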
#[maybe_async]
impl Blockchain for AnyBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
maybe_await!(impl_inner_method!(self, get_capabilities))
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
maybe_await!(impl_inner_method!(self, broadcast, tx))
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
maybe_await!(impl_inner_method!(self, estimate_fee, target))
}
}
#[maybe_async]
impl GetHeight for AnyBlockchain {
fn get_height(&self) -> Result<u32, Error> {
maybe_await!(impl_inner_method!(self, get_height))
}
}
#[maybe_async]
impl GetTx for AnyBlockchain {
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
maybe_await!(impl_inner_method!(self, get_tx, txid))
}
}
#[maybe_async]
impl GetBlockHash for AnyBlockchain {
fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
maybe_await!(impl_inner_method!(self, get_block_hash, height))
}
}
#[maybe_async]
impl WalletSync for AnyBlockchain {
fn wallet_sync<D: BatchDatabase>(
&self,
database: &RefCell<D>,
progress_update: Box<dyn Progress>,
) -> Result<(), Error> {
maybe_await!(impl_inner_method!(
self,
wallet_sync,
database,
progress_update
))
}
fn wallet_setup<D: BatchDatabase>(
&self,
database: &RefCell<D>,
progress_update: Box<dyn Progress>,
) -> Result<(), Error> {
maybe_await!(impl_inner_method!(
self,
wallet_setup,
database,
progress_update
))
}
}
impl_from!(boxed electrum::ElectrumBlockchain, AnyBlockchain, Electrum, #[cfg(feature = "electrum")]);
impl_from!(boxed esplora::EsploraBlockchain, AnyBlockchain, Esplora, #[cfg(feature = "esplora")]);
impl_from!(boxed compact_filters::CompactFiltersBlockchain, AnyBlockchain, CompactFilters, #[cfg(feature = "compact_filters")]);
impl_from!(boxed rpc::RpcBlockchain, AnyBlockchain, Rpc, #[cfg(feature = "rpc")]);
/// Type that can contain any of the blockchain configurations defined by the library
///
/// This allows storing a single configuration that can be loaded into an [`AnyBlockchain`]
/// instance. Wallets that plan to offer users the ability to switch blockchain backend at runtime
/// will find this particularly useful.
///
/// This type can be serialized from a JSON object like:
///
/// ```
/// # #[cfg(feature = "electrum")]
/// # {
/// use bdk::blockchain::{electrum::ElectrumBlockchainConfig, AnyBlockchainConfig};
/// let config: AnyBlockchainConfig = serde_json::from_str(
/// r#"{
/// "type" : "electrum",
/// "url" : "ssl://electrum.blockstream.info:50002",
/// "retry": 2,
/// "stop_gap": 20,
/// "validate_domain": true
/// }"#,
/// )
/// .unwrap();
/// assert_eq!(
/// config,
/// AnyBlockchainConfig::Electrum(ElectrumBlockchainConfig {
/// url: "ssl://electrum.blockstream.info:50002".into(),
/// retry: 2,
/// socks5: None,
/// timeout: None,
/// stop_gap: 20,
/// validate_domain: true,
/// })
/// );
/// # }
/// ```
#[derive(Debug, serde::Serialize, serde::Deserialize, Clone, PartialEq, Eq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum AnyBlockchainConfig {
#[cfg(feature = "electrum")]
#[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
/// Electrum client
Electrum(electrum::ElectrumBlockchainConfig),
#[cfg(feature = "esplora")]
#[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
/// Esplora client
Esplora(esplora::EsploraBlockchainConfig),
#[cfg(feature = "compact_filters")]
#[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
/// Compact filters client
CompactFilters(compact_filters::CompactFiltersBlockchainConfig),
#[cfg(feature = "rpc")]
#[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
/// RPC client configuration
Rpc(rpc::RpcConfig),
}
impl ConfigurableBlockchain for AnyBlockchain {
type Config = AnyBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
Ok(match config {
#[cfg(feature = "electrum")]
AnyBlockchainConfig::Electrum(inner) => {
AnyBlockchain::Electrum(Box::new(electrum::ElectrumBlockchain::from_config(inner)?))
}
#[cfg(feature = "esplora")]
AnyBlockchainConfig::Esplora(inner) => {
AnyBlockchain::Esplora(Box::new(esplora::EsploraBlockchain::from_config(inner)?))
}
#[cfg(feature = "compact_filters")]
AnyBlockchainConfig::CompactFilters(inner) => AnyBlockchain::CompactFilters(Box::new(
compact_filters::CompactFiltersBlockchain::from_config(inner)?,
)),
#[cfg(feature = "rpc")]
AnyBlockchainConfig::Rpc(inner) => {
AnyBlockchain::Rpc(Box::new(rpc::RpcBlockchain::from_config(inner)?))
}
})
}
}
impl_from!(electrum::ElectrumBlockchainConfig, AnyBlockchainConfig, Electrum, #[cfg(feature = "electrum")]);
impl_from!(esplora::EsploraBlockchainConfig, AnyBlockchainConfig, Esplora, #[cfg(feature = "esplora")]);
impl_from!(compact_filters::CompactFiltersBlockchainConfig, AnyBlockchainConfig, CompactFilters, #[cfg(feature = "compact_filters")]);
impl_from!(rpc::RpcConfig, AnyBlockchainConfig, Rpc, #[cfg(feature = "rpc")]);

View File

@@ -0,0 +1,618 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Compact Filters
//!
//! This module contains a multithreaded implementation of a [`Blockchain`] backend that
//! uses BIP157 (aka "Neutrino") to populate the wallet's [database](crate::database::Database)
//! by downloading compact filters from the P2P network.
//!
//! Since there are currently very few peers "in the wild" that advertise the required service
//! flag, this implementation requires that one or more known peers are provided by the user.
//! No DNS lookups or other kinds of peer discovery are done internally.
//!
//! Moreover, this module doesn't currently support detecting and resolving conflicts between
//! messages received by different peers. Thus, it's recommended to use this module by only
//! connecting to a single peer at a time, optionally by opening multiple connections if it's
//! desirable to use multiple threads at once to sync in parallel.
//!
//! This is an **EXPERIMENTAL** feature, API and other major changes are expected.
//!
//! ## Example
//!
//! ```no_run
//! # use std::sync::Arc;
//! # use bitcoin::*;
//! # use bdk::*;
//! # use bdk::blockchain::compact_filters::*;
//! let num_threads = 4;
//!
//! let mempool = Arc::new(Mempool::default());
//! let peers = (0..num_threads)
//! .map(|_| {
//! Peer::connect(
//! "btcd-mainnet.lightning.computer:8333",
//! Arc::clone(&mempool),
//! Network::Bitcoin,
//! )
//! })
//! .collect::<Result<_, _>>()?;
//! let blockchain = CompactFiltersBlockchain::new(peers, "./wallet-filters", Some(500_000))?;
//! # Ok::<(), CompactFiltersError>(())
//! ```
use std::collections::HashSet;
use std::fmt;
use std::ops::DerefMut;
use std::path::Path;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use bitcoin::network::message_blockdata::Inventory;
use bitcoin::{Network, OutPoint, Transaction, Txid};
use rocksdb::{Options, SliceTransform, DB};
mod peer;
mod store;
mod sync;
use crate::blockchain::*;
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{KeychainKind, LocalUtxo, TransactionDetails};
use crate::{BlockTime, FeeRate};
use peer::*;
use store::*;
use sync::*;
pub use peer::{Mempool, Peer};
const SYNC_HEADERS_COST: f32 = 1.0;
const SYNC_FILTERS_COST: f32 = 11.6 * 1_000.0;
const PROCESS_BLOCKS_COST: f32 = 20_000.0;
/// Structure implementing the required blockchain traits
///
/// ## Example
/// See the [`blockchain::compact_filters`](crate::blockchain::compact_filters) module for a usage example.
#[derive(Debug)]
pub struct CompactFiltersBlockchain {
peers: Vec<Arc<Peer>>,
headers: Arc<ChainStore<Full>>,
skip_blocks: Option<usize>,
}
impl CompactFiltersBlockchain {
/// Construct a new instance given a list of peers, a path to store headers and block
/// filters downloaded during the sync, and optionally a number of blocks to ignore
/// (starting from the genesis block) while scanning for the wallet's outputs.
///
/// For each [`Peer`] specified, a new thread will be spawned to download and verify the filters
/// in parallel. It's currently recommended to only connect to a single peer to avoid
/// inconsistencies in the data returned, optionally with multiple connections in parallel to
/// speed up the sync process.
pub fn new<P: AsRef<Path>>(
peers: Vec<Peer>,
storage_dir: P,
skip_blocks: Option<usize>,
) -> Result<Self, CompactFiltersError> {
if peers.is_empty() {
return Err(CompactFiltersError::NoPeers);
}
let mut opts = Options::default();
opts.create_if_missing(true);
opts.set_prefix_extractor(SliceTransform::create_fixed_prefix(16));
let network = peers[0].get_network();
let cfs = DB::list_cf(&opts, &storage_dir).unwrap_or_else(|_| vec!["default".to_string()]);
let db = DB::open_cf(&opts, &storage_dir, &cfs)?;
let headers = Arc::new(ChainStore::new(db, network)?);
// try to recover partial snapshots
for cf_name in &cfs {
if !cf_name.starts_with("_headers:") {
continue;
}
info!("Trying to recover: {:?}", cf_name);
headers.recover_snapshot(cf_name)?;
}
Ok(CompactFiltersBlockchain {
peers: peers.into_iter().map(Arc::new).collect(),
headers,
skip_blocks,
})
}
/// Process a transaction by looking for inputs that spend from a UTXO in the database or
/// outputs that send funds to a known script_pubkey.
fn process_tx<D: BatchDatabase>(
&self,
database: &mut D,
tx: &Transaction,
height: Option<u32>,
timestamp: Option<u64>,
internal_max_deriv: &mut Option<u32>,
external_max_deriv: &mut Option<u32>,
) -> Result<(), Error> {
let mut updates = database.begin_batch();
let mut incoming: u64 = 0;
let mut outgoing: u64 = 0;
let mut inputs_sum: u64 = 0;
let mut outputs_sum: u64 = 0;
// look for our own inputs
for (i, input) in tx.input.iter().enumerate() {
if let Some(previous_output) = database.get_previous_output(&input.previous_output)? {
inputs_sum += previous_output.value;
// this output is ours, we have a path to derive it
if let Some((keychain, _)) =
database.get_path_from_script_pubkey(&previous_output.script_pubkey)?
{
outgoing += previous_output.value;
debug!("{} input #{} is mine, setting utxo as spent", tx.txid(), i);
updates.set_utxo(&LocalUtxo {
outpoint: input.previous_output,
txout: previous_output.clone(),
keychain,
is_spent: true,
})?;
}
}
}
for (i, output) in tx.output.iter().enumerate() {
// to compute the fees later
outputs_sum += output.value;
// this output is ours, we have a path to derive it
if let Some((keychain, child)) =
database.get_path_from_script_pubkey(&output.script_pubkey)?
{
debug!("{} output #{} is mine, adding utxo", tx.txid(), i);
updates.set_utxo(&LocalUtxo {
outpoint: OutPoint::new(tx.txid(), i as u32),
txout: output.clone(),
keychain,
is_spent: false,
})?;
incoming += output.value;
if keychain == KeychainKind::Internal
&& (internal_max_deriv.is_none() || child > internal_max_deriv.unwrap_or(0))
{
*internal_max_deriv = Some(child);
} else if keychain == KeychainKind::External
&& (external_max_deriv.is_none() || child > external_max_deriv.unwrap_or(0))
{
*external_max_deriv = Some(child);
}
}
}
if incoming > 0 || outgoing > 0 {
let tx = TransactionDetails {
txid: tx.txid(),
transaction: Some(tx.clone()),
received: incoming,
sent: outgoing,
confirmation_time: BlockTime::new(height, timestamp),
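// `inputs_sum` only counts previous outputs known to the database, so this
// value is a lower bound on the real fee; `saturating_sub` avoids underflow
// when some inputs are unknown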
fee: Some(inputs_sum.saturating_sub(outputs_sum)),
};
info!("Saving tx {}", tx.txid);
updates.set_tx(&tx)?;
}
database.commit_batch(updates)?;
Ok(())
}
}
impl Blockchain for CompactFiltersBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
vec![Capability::FullHistory].into_iter().collect()
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
self.peers[0].broadcast_tx(tx.clone())?;
Ok(())
}
fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
// TODO: fee estimation is not implemented for this backend yet, so a default
// fee rate is returned regardless of the confirmation target
Ok(FeeRate::default())
}
}
impl GetHeight for CompactFiltersBlockchain {
fn get_height(&self) -> Result<u32, Error> {
Ok(self.headers.get_height()? as u32)
}
}
impl GetTx for CompactFiltersBlockchain {
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
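// only the first peer's shared mempool is consulted here; confirmed transactions
// are available from the wallet database after a sync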
Ok(self.peers[0]
.get_mempool()
.get_tx(&Inventory::Transaction(*txid)))
}
}
impl GetBlockHash for CompactFiltersBlockchain {
fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
self.headers
.get_block_hash(height as usize)?
.ok_or(Error::CompactFilters(
CompactFiltersError::BlockHashNotFound,
))
}
}
impl WalletSync for CompactFiltersBlockchain {
#[allow(clippy::mutex_atomic)] // Mutex is easier to understand than a CAS loop.
fn wallet_setup<D: BatchDatabase>(
&self,
database: &RefCell<D>,
progress_update: Box<dyn Progress>,
) -> Result<(), Error> {
let first_peer = &self.peers[0];
let skip_blocks = self.skip_blocks.unwrap_or(0);
let cf_sync = Arc::new(CfSync::new(Arc::clone(&self.headers), skip_blocks, 0x00)?);
let initial_height = self.headers.get_height()?;
let total_bundles = (first_peer.get_version().start_height as usize)
.checked_sub(skip_blocks)
.map(|x| x / 1000)
.unwrap_or(0)
+ 1;
let expected_bundles_to_sync = total_bundles.saturating_sub(cf_sync.pruned_bundles()?);
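// estimate the total cost of the sync (headers + filter bundles + final block
// processing) so that intermediate progress can be reported as a percentage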
let headers_cost = (first_peer.get_version().start_height as usize)
.saturating_sub(initial_height) as f32
* SYNC_HEADERS_COST;
let filters_cost = expected_bundles_to_sync as f32 * SYNC_FILTERS_COST;
let total_cost = headers_cost + filters_cost + PROCESS_BLOCKS_COST;
if let Some(snapshot) = sync::sync_headers(
Arc::clone(first_peer),
Arc::clone(&self.headers),
|new_height| {
let local_headers_cost =
new_height.saturating_sub(initial_height) as f32 * SYNC_HEADERS_COST;
progress_update.update(
local_headers_cost / total_cost * 100.0,
Some(format!("Synced headers to {}", new_height)),
)
},
)? {
if snapshot.work()? > self.headers.work()? {
info!("Applying snapshot with work: {}", snapshot.work()?);
self.headers.apply_snapshot(snapshot)?;
}
}
let synced_height = self.headers.get_height()?;
let buried_height = synced_height.saturating_sub(sync::BURIED_CONFIRMATIONS);
info!("Synced headers to height: {}", synced_height);
cf_sync.prepare_sync(Arc::clone(first_peer))?;
let mut database = database.borrow_mut();
let database = database.deref_mut();
let all_scripts = Arc::new(
database
.iter_script_pubkeys(None)?
.into_iter()
.map(|s| s.to_bytes())
.collect::<Vec<_>>(),
);
#[allow(clippy::mutex_atomic)]
let last_synced_block = Arc::new(Mutex::new(synced_height));
let synced_bundles = Arc::new(AtomicUsize::new(0));
let progress_update = Arc::new(Mutex::new(progress_update));
let mut threads = Vec::with_capacity(self.peers.len());
for peer in &self.peers {
let cf_sync = Arc::clone(&cf_sync);
let peer = Arc::clone(peer);
let headers = Arc::clone(&self.headers);
let all_scripts = Arc::clone(&all_scripts);
let last_synced_block = Arc::clone(&last_synced_block);
let progress_update = Arc::clone(&progress_update);
let synced_bundles = Arc::clone(&synced_bundles);
let thread = std::thread::spawn(move || {
cf_sync.capture_thread_for_sync(
peer,
|block_hash, filter| {
if !filter
.match_any(block_hash, &mut all_scripts.iter().map(AsRef::as_ref))?
{
return Ok(false);
}
let block_height = headers.get_height_for(block_hash)?.unwrap_or(0);
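// skip blocks that are already stored with the expected hash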
let saved_correct_block = matches!(
    headers.get_full_block(block_height)?,
    Some(block) if &block.block_hash() == block_hash
);
if saved_correct_block {
Ok(false)
} else {
let mut last_synced_block = last_synced_block.lock().unwrap();
// If we download a block older than `last_synced_block`, we update it so that
// we know to delete and re-process all txs starting from that height
if block_height < *last_synced_block {
*last_synced_block = block_height;
}
Ok(true)
}
},
|index| {
let synced_bundles = synced_bundles.fetch_add(1, Ordering::SeqCst);
let local_filters_cost = synced_bundles as f32 * SYNC_FILTERS_COST;
progress_update.lock().unwrap().update(
(headers_cost + local_filters_cost) / total_cost * 100.0,
Some(format!(
"Synced filters {} - {}",
index * 1000 + 1,
(index + 1) * 1000
)),
)
},
)
});
threads.push(thread);
}
for t in threads {
t.join().unwrap()?;
}
progress_update.lock().unwrap().update(
(headers_cost + filters_cost) / total_cost * 100.0,
Some("Processing downloaded blocks and mempool".into()),
)?;
// delete all txs newer than last_synced_block
let last_synced_block = *last_synced_block.lock().unwrap();
log::debug!(
"Dropping transactions newer than `last_synced_block` = {}",
last_synced_block
);
let mut updates = database.begin_batch();
for details in database.iter_txs(false)? {
match details.confirmation_time {
Some(c) if (c.height as usize) < last_synced_block => continue,
_ => updates.del_tx(&details.txid, false)?,
};
}
database.commit_batch(updates)?;
match first_peer.ask_for_mempool() {
Err(CompactFiltersError::PeerBloomDisabled) => {
log::warn!("Peer has BLOOM disabled, we can't ask for the mempool")
}
e => e?,
};
let mut internal_max_deriv = None;
let mut external_max_deriv = None;
for (height, block) in self.headers.iter_full_blocks()? {
for tx in &block.txdata {
self.process_tx(
database,
tx,
Some(height as u32),
None,
&mut internal_max_deriv,
&mut external_max_deriv,
)?;
}
}
for tx in first_peer.get_mempool().iter_txs().iter() {
self.process_tx(
database,
tx,
None,
None,
&mut internal_max_deriv,
&mut external_max_deriv,
)?;
}
let current_ext = database
.get_last_index(KeychainKind::External)?
.unwrap_or(0);
let first_ext_new = external_max_deriv.map(|x| x + 1).unwrap_or(0);
if first_ext_new > current_ext {
info!("Setting external index to {}", first_ext_new);
database.set_last_index(KeychainKind::External, first_ext_new)?;
}
let current_int = database
.get_last_index(KeychainKind::Internal)?
.unwrap_or(0);
let first_int_new = internal_max_deriv.map(|x| x + 1).unwrap_or(0);
if first_int_new > current_int {
info!("Setting internal index to {}", first_int_new);
database.set_last_index(KeychainKind::Internal, first_int_new)?;
}
info!("Dropping blocks until {}", buried_height);
self.headers.delete_blocks_until(buried_height)?;
progress_update
.lock()
.unwrap()
.update(100.0, Some("Done".into()))?;
Ok(())
}
}
/// Data to connect to a Bitcoin P2P peer
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq, Eq)]
pub struct BitcoinPeerConfig {
/// Peer address such as 127.0.0.1:18333
pub address: String,
/// Optional socks5 proxy
pub socks5: Option<String>,
/// Optional socks5 proxy credentials
pub socks5_credentials: Option<(String, String)>,
}
/// Configuration for a [`CompactFiltersBlockchain`]
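///
/// # Example
///
/// A minimal sketch of building a [`CompactFiltersBlockchain`] from this config; the peer
/// address, storage dir, and skipped-block count are placeholders:
///
/// ```no_run
/// # use bdk::blockchain::{ConfigurableBlockchain, compact_filters::*};
/// let config = CompactFiltersBlockchainConfig {
///     peers: vec![BitcoinPeerConfig {
///         address: "127.0.0.1:8333".to_string(),
///         socks5: None,
///         socks5_credentials: None,
///     }],
///     network: bitcoin::Network::Bitcoin,
///     storage_dir: "./wallet-filters".to_string(),
///     skip_blocks: Some(500_000),
/// };
/// let blockchain = CompactFiltersBlockchain::from_config(&config)?;
/// # Ok::<(), bdk::Error>(())
/// ```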
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq, Eq)]
pub struct CompactFiltersBlockchainConfig {
/// List of peers to try to connect to when requesting headers and filters
pub peers: Vec<BitcoinPeerConfig>,
/// Network used
pub network: Network,
/// Storage dir to save partially downloaded headers and full blocks. Should be a separate
/// directory per descriptor. Consider using [crate::wallet::wallet_name_from_descriptor]
/// for this.
pub storage_dir: String,
/// Optionally skip initial `skip_blocks` blocks (default: 0)
pub skip_blocks: Option<usize>,
}
impl ConfigurableBlockchain for CompactFiltersBlockchain {
type Config = CompactFiltersBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let mempool = Arc::new(Mempool::default());
let peers = config
.peers
.iter()
.map(|peer_conf| match &peer_conf.socks5 {
None => Peer::connect(&peer_conf.address, Arc::clone(&mempool), config.network),
Some(proxy) => Peer::connect_proxy(
peer_conf.address.as_str(),
proxy,
peer_conf
.socks5_credentials
.as_ref()
.map(|(a, b)| (a.as_str(), b.as_str())),
Arc::clone(&mempool),
config.network,
),
})
.collect::<Result<_, _>>()?;
Ok(CompactFiltersBlockchain::new(
peers,
&config.storage_dir,
config.skip_blocks,
)?)
}
}
/// An error that can occur during sync with a [`CompactFiltersBlockchain`]
#[derive(Debug)]
pub enum CompactFiltersError {
/// A peer sent an invalid or unexpected response
InvalidResponse,
/// The headers returned are invalid
InvalidHeaders,
/// The compact filter headers returned are invalid
InvalidFilterHeader,
/// The compact filter returned is invalid
InvalidFilter,
/// The peer is missing a block in the valid chain
MissingBlock,
/// Block hash at specified height not found
BlockHashNotFound,
/// The data stored in the block filters storage are corrupted
DataCorruption,
/// A peer is not connected
NotConnected,
/// A peer took too long to reply to one of our messages
Timeout,
/// The peer doesn't advertise the [`BLOOM`](bitcoin::network::constants::ServiceFlags::BLOOM) service flag
PeerBloomDisabled,
/// No peers have been specified
NoPeers,
/// Internal database error
Db(rocksdb::Error),
/// Internal I/O error
Io(std::io::Error),
/// Invalid BIP158 filter
Bip158(bitcoin::util::bip158::Error),
/// Internal system time error
Time(std::time::SystemTimeError),
/// Wrapper for [`crate::error::Error`]
Global(Box<crate::error::Error>),
}
impl fmt::Display for CompactFiltersError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::InvalidResponse => write!(f, "A peer sent an invalid or unexpected response"),
Self::InvalidHeaders => write!(f, "Invalid headers"),
Self::InvalidFilterHeader => write!(f, "Invalid filter header"),
Self::InvalidFilter => write!(f, "Invalid filters"),
Self::MissingBlock => write!(f, "The peer is missing a block in the valid chain"),
Self::BlockHashNotFound => write!(f, "Block hash not found"),
Self::DataCorruption => write!(
f,
"The data stored in the block filters storage are corrupted"
),
Self::NotConnected => write!(f, "A peer is not connected"),
Self::Timeout => write!(f, "A peer took too long to reply to one of our messages"),
Self::PeerBloomDisabled => write!(f, "Peer doesn't advertise the BLOOM service flag"),
Self::NoPeers => write!(f, "No peers have been specified"),
Self::Db(err) => write!(f, "Internal database error: {}", err),
Self::Io(err) => write!(f, "Internal I/O error: {}", err),
Self::Bip158(err) => write!(f, "Invalid BIP158 filter: {}", err),
Self::Time(err) => write!(f, "Invalid system time: {}", err),
Self::Global(err) => write!(f, "Generic error: {}", err),
}
}
}
impl std::error::Error for CompactFiltersError {}
impl_error!(rocksdb::Error, Db, CompactFiltersError);
impl_error!(std::io::Error, Io, CompactFiltersError);
impl_error!(bitcoin::util::bip158::Error, Bip158, CompactFiltersError);
impl_error!(std::time::SystemTimeError, Time, CompactFiltersError);
impl From<crate::error::Error> for CompactFiltersError {
fn from(err: crate::error::Error) -> Self {
CompactFiltersError::Global(Box::new(err))
}
}


@@ -0,0 +1,576 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::collections::HashMap;
use std::io::BufReader;
use std::net::{TcpStream, ToSocketAddrs};
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use socks::{Socks5Stream, ToTargetAddr};
use rand::{thread_rng, Rng};
use bitcoin::consensus::{Decodable, Encodable};
use bitcoin::hash_types::BlockHash;
use bitcoin::network::constants::ServiceFlags;
use bitcoin::network::message::{NetworkMessage, RawNetworkMessage};
use bitcoin::network::message_blockdata::*;
use bitcoin::network::message_filter::*;
use bitcoin::network::message_network::VersionMessage;
use bitcoin::network::Address;
use bitcoin::{Block, Network, Transaction, Txid, Wtxid};
use super::CompactFiltersError;
type ResponsesMap = HashMap<&'static str, Arc<(Mutex<Vec<NetworkMessage>>, Condvar)>>;
pub(crate) const TIMEOUT_SECS: u64 = 30;
/// Container for unconfirmed but valid Bitcoin transactions
///
/// It is normally shared between [`Peer`]s with the use of [`Arc`], so that transactions are not
/// duplicated in memory.
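///
/// # Example
///
/// A minimal sketch of sharing a single mempool between two peer connections; the
/// addresses are placeholders:
///
/// ```no_run
/// # use std::sync::Arc;
/// # use bitcoin::Network;
/// # use bdk::blockchain::compact_filters::{CompactFiltersError, Mempool, Peer};
/// let mempool = Arc::new(Mempool::default());
/// let peer_a = Peer::connect("127.0.0.1:8333", Arc::clone(&mempool), Network::Bitcoin)?;
/// let peer_b = Peer::connect("127.0.0.1:8334", Arc::clone(&mempool), Network::Bitcoin)?;
/// # Ok::<(), CompactFiltersError>(())
/// ```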
#[derive(Debug, Default)]
pub struct Mempool(RwLock<InnerMempool>);
#[derive(Debug, Default)]
struct InnerMempool {
txs: HashMap<Txid, Transaction>,
wtxids: HashMap<Wtxid, Txid>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
enum TxIdentifier {
Wtxid(Wtxid),
Txid(Txid),
}
impl Mempool {
/// Create a new empty mempool
pub fn new() -> Self {
Self::default()
}
/// Add a transaction to the mempool
///
/// Note that this doesn't propagate the transaction to other
/// peers. To do that, [`broadcast`](crate::blockchain::Blockchain::broadcast) should be used.
pub fn add_tx(&self, tx: Transaction) {
let mut guard = self.0.write().unwrap();
guard.wtxids.insert(tx.wtxid(), tx.txid());
guard.txs.insert(tx.txid(), tx);
}
/// Look up a transaction in the mempool given an [`Inventory`] request
pub fn get_tx(&self, inventory: &Inventory) -> Option<Transaction> {
let identifier = match inventory {
Inventory::Error
| Inventory::Block(_)
| Inventory::WitnessBlock(_)
| Inventory::CompactBlock(_) => return None,
Inventory::Transaction(txid) => TxIdentifier::Txid(*txid),
Inventory::WitnessTransaction(txid) => TxIdentifier::Txid(*txid),
Inventory::WTx(wtxid) => TxIdentifier::Wtxid(*wtxid),
Inventory::Unknown { inv_type, hash } => {
log::warn!(
"Unknown inventory request type `{}`, hash `{:?}`",
inv_type,
hash
);
return None;
}
};
let txid = match identifier {
TxIdentifier::Txid(txid) => Some(txid),
TxIdentifier::Wtxid(wtxid) => self.0.read().unwrap().wtxids.get(&wtxid).cloned(),
};
txid.and_then(|txid| self.0.read().unwrap().txs.get(&txid).cloned())
}
/// Return whether or not the mempool contains a transaction with a given txid
pub fn has_tx(&self, txid: &Txid) -> bool {
self.0.read().unwrap().txs.contains_key(txid)
}
/// Return the list of transactions contained in the mempool
pub fn iter_txs(&self) -> Vec<Transaction> {
self.0.read().unwrap().txs.values().cloned().collect()
}
}
/// A Bitcoin peer
#[derive(Debug)]
#[allow(dead_code)]
pub struct Peer {
writer: Arc<Mutex<TcpStream>>,
responses: Arc<RwLock<ResponsesMap>>,
reader_thread: thread::JoinHandle<()>,
connected: Arc<RwLock<bool>>,
mempool: Arc<Mempool>,
version: VersionMessage,
network: Network,
}
impl Peer {
/// Connect to a peer over a plaintext TCP connection
///
/// This function internally spawns a new thread that will monitor incoming messages from the
/// peer, and optionally reply to some of them transparently, like [pings](bitcoin::network::message::NetworkMessage::Ping)
pub fn connect<A: ToSocketAddrs>(
address: A,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let stream = TcpStream::connect(address)?;
Peer::from_stream(stream, mempool, network)
}
/// Connect to a peer through a SOCKS5 proxy, optionally by using some credentials, specified
/// as a tuple of `(username, password)`
///
/// This function internally spawns a new thread that will monitor incoming messages from the
/// peer, and optionally reply to some of them transparently, like [pings](NetworkMessage::Ping)
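///
/// A minimal sketch, assuming a local SOCKS5 proxy (e.g. Tor) at `127.0.0.1:9050`; the
/// target address and credentials are placeholders:
///
/// ```no_run
/// # use std::sync::Arc;
/// # use bitcoin::Network;
/// # use bdk::blockchain::compact_filters::{CompactFiltersError, Mempool, Peer};
/// let mempool = Arc::new(Mempool::default());
/// let peer = Peer::connect_proxy(
///     "btcd-mainnet.lightning.computer:8333",
///     "127.0.0.1:9050",
///     None, // or Some(("username", "password"))
///     Arc::clone(&mempool),
///     Network::Bitcoin,
/// )?;
/// # Ok::<(), CompactFiltersError>(())
/// ```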
pub fn connect_proxy<T: ToTargetAddr, P: ToSocketAddrs>(
target: T,
proxy: P,
credentials: Option<(&str, &str)>,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let socks_stream = if let Some((username, password)) = credentials {
Socks5Stream::connect_with_password(proxy, target, username, password)?
} else {
Socks5Stream::connect(proxy, target)?
};
Peer::from_stream(socks_stream.into_inner(), mempool, network)
}
/// Create a [`Peer`] from an already connected TcpStream
fn from_stream(
stream: TcpStream,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let writer = Arc::new(Mutex::new(stream.try_clone()?));
let responses: Arc<RwLock<ResponsesMap>> = Arc::new(RwLock::new(HashMap::new()));
let connected = Arc::new(RwLock::new(true));
let mut locked_writer = writer.lock().unwrap();
let reader_thread_responses = Arc::clone(&responses);
let reader_thread_writer = Arc::clone(&writer);
let reader_thread_mempool = Arc::clone(&mempool);
let reader_thread_connected = Arc::clone(&connected);
let reader_thread = thread::spawn(move || {
Self::reader_thread(
network,
stream,
reader_thread_responses,
reader_thread_writer,
reader_thread_mempool,
reader_thread_connected,
)
});
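// version/verack handshake: send our version message, wait for the peer's version
// and verack, then reply with our own verack before considering the connection
// established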
let timestamp = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs() as i64;
let nonce = thread_rng().gen();
let receiver = Address::new(&locked_writer.peer_addr()?, ServiceFlags::NONE);
let sender = Address {
services: ServiceFlags::NONE,
address: [0u16; 8],
port: 0,
};
Self::_send(
&mut locked_writer,
network.magic(),
NetworkMessage::Version(VersionMessage::new(
ServiceFlags::WITNESS,
timestamp,
receiver,
sender,
nonce,
"MagicalBitcoinWallet".into(),
0,
)),
)?;
let version = if let NetworkMessage::Version(version) =
Self::_recv(&responses, "version", None).unwrap()
{
version
} else {
return Err(CompactFiltersError::InvalidResponse);
};
if let NetworkMessage::Verack = Self::_recv(&responses, "verack", None).unwrap() {
Self::_send(&mut locked_writer, network.magic(), NetworkMessage::Verack)?;
} else {
return Err(CompactFiltersError::InvalidResponse);
}
std::mem::drop(locked_writer);
Ok(Peer {
writer,
responses,
reader_thread,
connected,
mempool,
version,
network,
})
}
/// Send a Bitcoin network message
fn _send(
writer: &mut TcpStream,
magic: u32,
payload: NetworkMessage,
) -> Result<(), CompactFiltersError> {
log::trace!("==> {:?}", payload);
let raw_message = RawNetworkMessage { magic, payload };
raw_message
.consensus_encode(writer)
.map_err(|_| CompactFiltersError::DataCorruption)?;
Ok(())
}
/// Wait for a specific incoming Bitcoin message, optionally with a timeout
fn _recv(
responses: &Arc<RwLock<ResponsesMap>>,
wait_for: &'static str,
timeout: Option<Duration>,
) -> Option<NetworkMessage> {
let message_resp = {
let mut lock = responses.write().unwrap();
let message_resp = lock.entry(wait_for).or_default();
Arc::clone(message_resp)
};
let (lock, cvar) = &*message_resp;
let mut messages = lock.lock().unwrap();
while messages.is_empty() {
match timeout {
None => messages = cvar.wait(messages).unwrap(),
Some(t) => {
let result = cvar.wait_timeout(messages, t).unwrap();
if result.1.timed_out() {
return None;
}
messages = result.0;
}
}
}
messages.pop()
}
/// Return the [`VersionMessage`] sent by the peer
pub fn get_version(&self) -> &VersionMessage {
&self.version
}
/// Return the Bitcoin [`Network`] in use
pub fn get_network(&self) -> Network {
self.network
}
/// Return the mempool used by this peer
pub fn get_mempool(&self) -> Arc<Mempool> {
Arc::clone(&self.mempool)
}
/// Return whether or not the peer is still connected
pub fn is_connected(&self) -> bool {
*self.connected.read().unwrap()
}
/// Internal function called once the `reader_thread` is spawned
fn reader_thread(
network: Network,
connection: TcpStream,
reader_thread_responses: Arc<RwLock<ResponsesMap>>,
reader_thread_writer: Arc<Mutex<TcpStream>>,
reader_thread_mempool: Arc<Mempool>,
reader_thread_connected: Arc<RwLock<bool>>,
) {
macro_rules! check_disconnect {
($call:expr) => {
match $call {
Ok(good) => good,
Err(e) => {
log::debug!("Error {:?}", e);
*reader_thread_connected.write().unwrap() = false;
break;
}
}
};
}
let mut reader = BufReader::new(connection);
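// Main read loop: decode raw network messages, answer pings and mempool `getdata`
// requests inline, and route everything else into the per-command responses map so
// that waiting `_recv` calls are woken up.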
loop {
let raw_message: RawNetworkMessage =
check_disconnect!(Decodable::consensus_decode(&mut reader));
let in_message = if raw_message.magic != network.magic() {
continue;
} else {
raw_message.payload
};
log::trace!("<== {:?}", in_message);
match in_message {
NetworkMessage::Ping(nonce) => {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::Pong(nonce),
));
continue;
}
NetworkMessage::Alert(_) => continue,
NetworkMessage::GetData(ref inv) => {
let (found, not_found): (Vec<_>, Vec<_>) = inv
.iter()
.map(|item| (*item, reader_thread_mempool.get_tx(item)))
.partition(|(_, d)| d.is_some());
for (_, found_tx) in found {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::Tx(found_tx.unwrap()),
));
}
if !not_found.is_empty() {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::NotFound(
not_found.into_iter().map(|(i, _)| i).collect(),
),
));
}
}
_ => {}
}
let message_resp = {
let mut lock = reader_thread_responses.write().unwrap();
let message_resp = lock.entry(in_message.cmd()).or_default();
Arc::clone(message_resp)
};
let (lock, cvar) = &*message_resp;
let mut messages = lock.lock().unwrap();
messages.push(in_message);
cvar.notify_all();
}
}
/// Send a raw Bitcoin message to the peer
pub fn send(&self, payload: NetworkMessage) -> Result<(), CompactFiltersError> {
let mut writer = self.writer.lock().unwrap();
Self::_send(&mut writer, self.network.magic(), payload)
}
/// Wait for a specific incoming Bitcoin message, optionally with a timeout
pub fn recv(
&self,
wait_for: &'static str,
timeout: Option<Duration>,
) -> Result<Option<NetworkMessage>, CompactFiltersError> {
Ok(Self::_recv(&self.responses, wait_for, timeout))
}
}
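/// Trait implemented by peers that can serve BIP 157 compact filter messages
/// (`getcfcheckpt`, `getcfheaders`, `getcfilters`)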
pub trait CompactFiltersPeer {
fn get_cf_checkpt(
&self,
filter_type: u8,
stop_hash: BlockHash,
) -> Result<CFCheckpt, CompactFiltersError>;
fn get_cf_headers(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<CFHeaders, CompactFiltersError>;
fn get_cf_filters(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<(), CompactFiltersError>;
fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError>;
}
impl CompactFiltersPeer for Peer {
fn get_cf_checkpt(
&self,
filter_type: u8,
stop_hash: BlockHash,
) -> Result<CFCheckpt, CompactFiltersError> {
self.send(NetworkMessage::GetCFCheckpt(GetCFCheckpt {
filter_type,
stop_hash,
}))?;
let response = self
.recv("cfcheckpt", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFCheckpt(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
if response.filter_type != filter_type {
return Err(CompactFiltersError::InvalidResponse);
}
Ok(response)
}
fn get_cf_headers(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<CFHeaders, CompactFiltersError> {
self.send(NetworkMessage::GetCFHeaders(GetCFHeaders {
filter_type,
start_height,
stop_hash,
}))?;
let response = self
.recv("cfheaders", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFHeaders(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
if response.filter_type != filter_type {
return Err(CompactFiltersError::InvalidResponse);
}
Ok(response)
}
fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError> {
let response = self
.recv("cfilter", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFilter(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
Ok(response)
}
fn get_cf_filters(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<(), CompactFiltersError> {
self.send(NetworkMessage::GetCFilters(GetCFilters {
filter_type,
start_height,
stop_hash,
}))?;
Ok(())
}
}
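/// Trait implemented by peers that can serve full blocks and mempool transactions in
/// response to `inv`/`getdata` messages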
pub trait InvPeer {
fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError>;
fn ask_for_mempool(&self) -> Result<(), CompactFiltersError>;
fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError>;
}
impl InvPeer for Peer {
fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError> {
self.send(NetworkMessage::GetData(vec![Inventory::WitnessBlock(
block_hash,
)]))?;
match self.recv("block", Some(Duration::from_secs(TIMEOUT_SECS)))? {
None => Ok(None),
Some(NetworkMessage::Block(response)) => Ok(Some(response)),
_ => Err(CompactFiltersError::InvalidResponse),
}
}
fn ask_for_mempool(&self) -> Result<(), CompactFiltersError> {
if !self.version.services.has(ServiceFlags::BLOOM) {
return Err(CompactFiltersError::PeerBloomDisabled);
}
self.send(NetworkMessage::MemPool)?;
let inv = match self.recv("inv", Some(Duration::from_secs(5)))? {
None => return Ok(()), // empty mempool
Some(NetworkMessage::Inv(inv)) => inv,
_ => return Err(CompactFiltersError::InvalidResponse),
};
let getdata = inv
.iter()
.cloned()
.filter(
|item| matches!(item, Inventory::Transaction(txid) if !self.mempool.has_tx(txid)),
)
.collect::<Vec<_>>();
let num_txs = getdata.len();
self.send(NetworkMessage::GetData(getdata))?;
for _ in 0..num_txs {
let tx = self
.recv("tx", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let tx = match tx {
NetworkMessage::Tx(tx) => tx,
_ => return Err(CompactFiltersError::InvalidResponse),
};
self.mempool.add_tx(tx);
}
Ok(())
}
fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError> {
self.mempool.add_tx(tx.clone());
self.send(NetworkMessage::Tx(tx))?;
Ok(())
}
}

Some files were not shown because too many files have changed in this diff.