Compare commits: v0.20.0...compact_fi (7 commits)

| SHA1 |
|---|
| d2da3755f4 |
| 6acb4d9796 |
| 377e5cdd49 |
| 70d2a0ee6b |
| de1fc2a677 |
| 671d90e57c |
| 9480faa5d3 |
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 26 changes)
@@ -1,26 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''

---

**Describe the bug**
<!-- A clear and concise description of what the bug is. -->

**To Reproduce**
<!-- Steps or code to reproduce the behavior. -->

**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->

**Build environment**
- BDK tag/commit: <!-- e.g. v0.13.0, 3a07614 -->
- OS+version: <!-- e.g. ubuntu 20.04.01, macOS 12.0.1, windows -->
- Rust/Cargo version: <!-- e.g. 1.56.0 -->
- Rust/Cargo target: <!-- e.g. x86_64-apple-darwin, x86_64-unknown-linux-gnu, etc. -->

**Additional context**
<!-- Add any other context about the problem here. -->
.github/ISSUE_TEMPLATE/summer_project.md (vendored, 77 changes)
@@ -1,77 +0,0 @@
---
name: Summer of Bitcoin Project
about: Template to suggest a new https://www.summerofbitcoin.org/ project.
title: ''
labels: 'summer-of-bitcoin'
assignees: ''

---

<!--
## Overview

Project ideas are scoped for a university-level student with a basic background in CS and bitcoin
fundamentals - achievable over 12-weeks. Below are just a few types of ideas:

- Low-hanging fruit: Relatively short projects with clear goals; requires basic technical knowledge
  and minimal familiarity with the codebase.
- Core development: These projects derive from the ongoing work from the core of your development
  team. The list of features and bugs is never-ending, and help is always welcome.
- Risky/Exploratory: These projects push the scope boundaries of your development effort. They
  might require expertise in an area not covered by your current development team. They might take
  advantage of a new technology. There is a reasonable chance that the project might be less
  successful, but the potential rewards make it worth the attempt.
- Infrastructure/Automation: These projects are the code that your organization uses to get its
  development work done; for example, projects that improve the automation of releases, regression
  tests and automated builds. This is a category where a Summer of Bitcoin student can be really
  helpful, doing work that the development team has been putting off while they focus on core
  development.
- Quality Assurance/Testing: Projects that work on and test your project's software development
  process. Additionally, projects that involve a thorough test and review of individual PRs.
- Fun/Peripheral: These projects might not be related to the current core development focus, but
  create new innovations and new perspectives for your project.
-->

**Description**
<!-- Description: 3-7 sentences describing the project background and tasks to be done. -->

**Expected Outcomes**
<!-- Short bullet list describing what is to be accomplished -->

**Resources**
<!-- 2-3 reading materials for candidate to learn about the repo, project, scope etc -->
<!-- Recommended reading such as a developer/contributor guide -->
<!-- [Another example a paper citation](https://arxiv.org/pdf/1802.08091.pdf) -->
<!-- [Another example an existing issue](https://github.com/opencv/opencv/issues/11013) -->
<!-- [An existing related module](https://github.com/opencv/opencv_contrib/tree/master/modules/optflow) -->

**Skills Required**
<!-- 3-4 technical skills that the candidate should know -->
<!-- hands on experience with git -->
<!-- mastery plus experience coding in C++ -->
<!-- basic knowledge in matrix and tensor computations, college course work in cryptography -->
<!-- strong mathematical background -->
<!-- Bonus - has experience with React Native. Best if you have also worked with OSSFuzz -->

**Mentor(s)**
<!-- names of mentor(s) for this project go here -->

**Difficulty**
<!-- Easy, Medium, Hard -->

**Competency Test (optional)**
<!-- 2-3 technical tasks related to the project idea or repository you'd like a candidate to
perform in order to demonstrate competency, good first bugs, warm-up exercises -->
<!-- ex. Read the instructions here to get Bitcoin core running on your machine -->
<!-- ex. pick an issue labeled as "newcomer" in the repository, and send a merge request to the
repository. You can also suggest some other improvement that we did not think of yet, or
something that you find interesting or useful -->
<!-- ex. fixes for coding style are usually easy to do, and are good issues for first time
contributions for those learning how to interact with the project. After you are done with the
coding style issue, try making a different contribution. -->
<!-- ex. setup a full Debian packaging development environment and learn the basics of Debian
packaging. Then identify and package the missing dependencies to package Specter Desktop -->
<!-- ex. write a pull parser for CSV files. You'll be judged by the decisions to store the parser
state and how flexible it is to wrap this parser in other scenarios. -->
<!-- ex. Stretch Goal: Implement some basic metaprogram/app to prove you're very familiar with BDK.
Be prepared to make adjustments as we judge your solution. -->
.github/workflows/code_coverage.yml (vendored, 4 changes)
@@ -24,14 +24,14 @@ jobs:
      - name: Update toolchain
        run: rustup update
      - name: Test
        run: cargo test --features all-keys,compiler,esplora,ureq,compact_filters --no-default-features
        run: cargo test --features all-keys,compiler,esplora,compact_filters --no-default-features

      - id: coverage
        name: Generate coverage
        uses: actions-rs/grcov@v0.1.5

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v2
        uses: codecov/codecov-action@v1
        with:
          file: ${{ steps.coverage.outputs.report }}
          directory: ./coverage/reports/
.github/workflows/cont_integration.yml (vendored, 45 changes)
@@ -10,30 +10,25 @@ jobs:
    strategy:
      matrix:
        rust:
          - version: 1.60.0 # STABLE
            clippy: true
          - version: 1.56.1 # MSRV
          - 1.53.0 # STABLE
          - 1.46.0 # MSRV
        features:
          - default
          - minimal
          - all-keys
          - minimal,use-esplora-ureq
          - minimal,esplora
          - key-value-db
          - electrum
          - compact_filters
          - esplora,ureq,key-value-db,electrum
          - esplora,key-value-db,electrum
          - compiler
          - rpc
          - verify
          - async-interface
          - use-esplora-reqwest
          - sqlite
          - sqlite-bundled
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Generate cache key
        run: echo "${{ matrix.rust.version }} ${{ matrix.features }}" | tee .cache_key
        run: echo "${{ matrix.rust }} ${{ matrix.features }}" | tee .cache_key
      - name: cache
        uses: actions/cache@v2
        with:

@@ -43,18 +38,16 @@ jobs:
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('.cache_key') }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
      - name: Set default toolchain
        run: rustup default ${{ matrix.rust.version }}
        run: rustup default ${{ matrix.rust }}
      - name: Set profile
        run: rustup set profile minimal
      - name: Add clippy
        if: ${{ matrix.rust.clippy }}
        run: rustup component add clippy
      - name: Update toolchain
        run: rustup update
      - name: Build
        run: cargo build --features ${{ matrix.features }} --no-default-features
      - name: Clippy
        if: ${{ matrix.rust.clippy }}
        run: cargo clippy --all-targets --features ${{ matrix.features }} --no-default-features -- -D warnings
      - name: Test
        run: cargo test --features ${{ matrix.features }} --no-default-features

@@ -83,27 +76,15 @@ jobs:
        run: cargo test --features test-md-docs --no-default-features -- doctest::ReadmeDoctests

  test-blockchains:
    name: Blockchain ${{ matrix.blockchain.features }}
    name: Test ${{ matrix.blockchain.name }}
    runs-on: ubuntu-20.04
    strategy:
      fail-fast: false
      matrix:
        blockchain:
          - name: electrum
            testprefix: blockchain::electrum::test
            features: test-electrum,verify
          - name: rpc
            testprefix: blockchain::rpc::test
            features: test-rpc
          - name: rpc-legacy
            testprefix: blockchain::rpc::test
            features: test-rpc-legacy
          - name: esplora
            testprefix: esplora
            features: test-esplora,use-esplora-reqwest,verify
          - name: esplora
            testprefix: esplora
            features: test-esplora,use-esplora-ureq,verify
    steps:
      - name: Checkout
        uses: actions/checkout@v2

@@ -112,6 +93,8 @@ jobs:
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/bitcoin
            ~/.cargo/electrs
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}

@@ -121,11 +104,11 @@ jobs:
          toolchain: stable
          override: true
      - name: Test
        run: cargo test --no-default-features --features ${{ matrix.blockchain.features }} ${{ matrix.blockchain.testprefix }}::bdk_blockchain_tests
        run: cargo test --features test-${{ matrix.blockchain.name }} ${{ matrix.blockchain.name }}::bdk_blockchain_tests

  check-wasm:
    name: Check WASM
    runs-on: ubuntu-20.04
    runs-on: ubuntu-16.04
    env:
      CC: clang-10
      CFLAGS: -I/usr/include

@@ -142,11 +125,11 @@ jobs:
          key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
      # Install a recent version of clang that supports wasm32
      - run: wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add - || exit 1
      - run: sudo apt-add-repository "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-10 main" || exit 1
      - run: sudo apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-10 main" || exit 1
      - run: sudo apt-get update || exit 1
      - run: sudo apt-get install -y libclang-common-10-dev clang-10 libc6-dev-i386 || exit 1
      - name: Set default toolchain
        run: rustup default 1.56.1 # STABLE
        run: rustup default 1.53.0 # STABLE
      - name: Set profile
        run: rustup set profile minimal
      - name: Add target wasm32

@@ -154,7 +137,7 @@ jobs:
      - name: Update toolchain
        run: rustup update
      - name: Check
        run: cargo check --target wasm32-unknown-unknown --features use-esplora-reqwest --no-default-features
        run: cargo check --target wasm32-unknown-unknown --features esplora --no-default-features

  fmt:
    name: Rust fmt
.github/workflows/nightly_docs.yml (vendored, 12 changes)
@@ -18,13 +18,13 @@ jobs:
            target
          key: nightly-docs-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
      - name: Set default toolchain
        run: rustup default nightly-2022-01-25
        run: rustup default nightly
      - name: Set profile
        run: rustup set profile minimal
      - name: Update toolchain
        run: rustup update
      - name: Build docs
        run: cargo rustdoc --verbose --features=compiler,electrum,esplora,ureq,compact_filters,key-value-db,all-keys,sqlite -- --cfg docsrs -Dwarnings
        run: cargo rustdoc --verbose --features=compiler,electrum,esplora,compact_filters,key-value-db,all-keys -- --cfg docsrs -Dwarnings
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:

@@ -44,18 +44,18 @@ jobs:
          repository: bitcoindevkit/bitcoindevkit.org
          ref: master
      - name: Create directories
        run: mkdir -p ./docs/.vuepress/public/docs-rs/bdk/nightly
        run: mkdir -p ./static/docs-rs/bdk/nightly
      - name: Remove old latest
        run: rm -rf ./docs/.vuepress/public/docs-rs/bdk/nightly/latest
        run: rm -rf ./static/docs-rs/bdk/nightly/latest
      - name: Download built docs
        uses: actions/download-artifact@v1
        with:
          name: built-docs
          path: ./docs/.vuepress/public/docs-rs/bdk/nightly/latest
          path: ./static/docs-rs/bdk/nightly/latest
      - name: Configure git
        run: git config user.email "github-actions@github.com" && git config user.name "github-actions"
      - name: Commit
        continue-on-error: true # If there's nothing to commit this step fails, but it's fine
        run: git add ./docs/.vuepress/public/docs-rs && git commit -m "Publish autogenerated nightly docs"
        run: git add ./static && git commit -m "Publish autogenerated nightly docs"
      - name: Push
        run: git push origin master
CHANGELOG.md (119 changes)
@@ -6,116 +6,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]


## [v0.20.0] - [v0.19.0]

- New MSRV set to `1.56.1`
- Discourage fee sniping through nLockTime: if the user specifies a `current_height`, we use that as the nLockTime; otherwise we use the last sync height (or 0 if we never synced)
- Fix hang when `ElectrumBlockchainConfig::stop_gap` is zero.
- Set coin type in BIP44, BIP49, and BIP84 templates
- Get block hash given a block height: a `get_block_hash` method is now defined on the `GetBlockHash` trait and implemented on every blockchain backend. This method expects a block height and returns the corresponding block hash.
- Add `remove_partial_sigs` and `try_finalize` to `SignOptions`
- Deprecate `AddressValidator`
- Fix Electrum wallet sync potentially causing address index decrement: compare the proposed index and the current index before applying batch operations during sync.
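The nLockTime entry above describes a simple precedence rule for choosing the locktime. A minimal sketch of that rule (a hypothetical helper written for illustration, not BDK's actual function) could look like:

```rust
/// Pick an nLockTime that discourages fee sniping, per the rule described
/// above: prefer the user-supplied `current_height`, fall back to the last
/// sync height, and use 0 if the wallet has never synced.
/// (Hypothetical helper for illustration; not part of BDK's API.)
fn anti_fee_sniping_nlocktime(current_height: Option<u32>, last_sync_height: Option<u32>) -> u32 {
    current_height.or(last_sync_height).unwrap_or(0)
}

fn main() {
    // User-supplied height wins over the last sync height.
    assert_eq!(anti_fee_sniping_nlocktime(Some(700_000), Some(699_990)), 700_000);
    // No explicit height: fall back to the last sync height.
    assert_eq!(anti_fee_sniping_nlocktime(None, Some(699_990)), 699_990);
    // Never synced: nLockTime 0.
    assert_eq!(anti_fee_sniping_nlocktime(None, None), 0);
}
```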
## [v0.19.0] - [v0.18.0]

- Added `OldestFirstCoinSelection` impl to `CoinSelectionAlgorithm`
- New MSRV set to `1.56`
- Unpinned tokio to `1`
- Add traits to reuse `Blockchain`s across multiple wallets (`BlockchainFactory` and `StatelessBlockchain`).
- Upgrade to rust-bitcoin `0.28`
- If using the `sqlite-db` feature, all cached wallet data is deleted due to a possible UTXO inconsistency; a `wallet.sync` will recreate it
- Update `PkOrF` in the policy module to become an enum
- Add experimental support for Taproot, including:
  - Support for `tr()` descriptors with complex tapscript trees
  - Creation of Taproot PSBTs (BIP-371)
  - Signing Taproot PSBTs (key spend and script spend)
  - Support for `tr()` descriptors in the `descriptor!()` macro
- Add support for Bitcoin Core 23.0 when using the `rpc` blockchain
## [v0.18.0] - [v0.17.0]

- Add `sqlite-bundled` feature for deployments that need a bundled version of sqlite, i.e. for mobile platforms.
- Added `Wallet::get_signers()`, `Wallet::descriptor_checksum()` and `Wallet::get_address_validators()`, exposed the `AsDerived` trait.
- Deprecate `database::Database::flush()`; the function is only needed for the sled database on mobile, so on mobile use the sqlite database instead.
- Add `keychain: KeychainKind` to `wallet::AddressInfo`.
- Improve key generation traits
- Rename `WalletExport` to `FullyNodedExport`, deprecate the former.
- Bump `miniscript` dependency version to `^6.1`.
## [v0.17.0] - [v0.16.1]

- Removed default verification from `wallet::sync`. Sync-time verification is added in `script_sync` and is activated by the `verify` feature flag.
- `verify` flag removed from `TransactionDetails`.
- Add `get_internal_address` to allow you to get internal addresses just as you get external addresses.
- Added `ensure_addresses_cached` to `Wallet` to let offline wallets load and cache addresses in their database
- Add `is_spent` field to `LocalUtxo`; when we notice that a utxo has been spent we set the `is_spent` field to true instead of deleting it from the db.

### Sync API change

To decouple the `Wallet` from the `Blockchain` we've made major changes:

- Removed `Blockchain` from `Wallet`.
- Removed `Wallet::broadcast` (just use `Blockchain::broadcast`)
- Deprecated `Wallet::new_offline` (all wallets are offline now)
- Changed `Wallet::sync` to take a `Blockchain`.
- Stop making a request for the block height when calling `Wallet::new`.
- Added `SyncOptions` to capture extra (future) arguments to `Wallet::sync`.
- Removed the `max_addresses` sync parameter, which determined how many addresses to cache before syncing, since this can now be done with `ensure_addresses_cached`.
- Removed the `flush` method from the `Database` trait.
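The decoupling above can be illustrated with a stripped-down sketch. Every type below is a simplified stand-in invented for illustration (not BDK's real signatures): the wallet no longer owns a blockchain, `sync` borrows one, and broadcasting happens on the blockchain directly.

```rust
// Placeholder for the extensible options bag mentioned above.
#[derive(Default)]
struct SyncOptions {
    _progress: Option<String>, // stand-in for future arguments
}

// The blockchain is a capability passed in, not stored in the wallet.
trait Blockchain {
    fn broadcast(&self, raw_tx: &[u8]) -> Result<(), String>;
}

struct Wallet {
    synced_txs: usize,
}

impl Wallet {
    fn new() -> Self {
        Wallet { synced_txs: 0 }
    }
    // `sync` now takes the blockchain as a parameter instead of owning it.
    fn sync<B: Blockchain>(&mut self, _blockchain: &B, _opts: SyncOptions) {
        self.synced_txs += 1; // pretend we discovered one transaction
    }
}

struct DummyChain;
impl Blockchain for DummyChain {
    fn broadcast(&self, _raw_tx: &[u8]) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    let mut wallet = Wallet::new();
    let chain = DummyChain;
    wallet.sync(&chain, SyncOptions::default());
    assert_eq!(wallet.synced_txs, 1);
    // Broadcasting goes through the blockchain, not the wallet.
    assert!(chain.broadcast(&[0u8; 10]).is_ok());
}
```

The point of the shape is testability: any number of wallets can share one blockchain, and an offline wallet simply never calls `sync`.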
## [v0.16.1] - [v0.16.0]

- Pin tokio dependency version to ~1.14 to prevent errors due to their new MSRV 1.49.0

## [v0.16.0] - [v0.15.0]

- Disable `reqwest` default features.
- Added `reqwest-default-tls` feature: use this to restore the TLS defaults of reqwest if you don't want to add a dependency to it in your own manifest.
- Use `dust_value` from rust-bitcoin
- Fixed generating WIF in the correct network format.

## [v0.15.0] - [v0.14.0]

- Overhauled sync logic for electrum and esplora.
- Unify ureq and reqwest esplora backends to have the same configuration parameters. This means reqwest now has a timeout parameter and ureq has a concurrency parameter.
- Fixed esplora fee estimation.

## [v0.14.0] - [v0.13.0]

- BIP39 implementation dependency in `keys::bip39` changed from tiny-bip39 to rust-bip39.
- Add new method on the `TxBuilder` to embed data in the transaction via `OP_RETURN`. To allow that, a fix to check dust only on spendable outputs has been introduced.
- Update the `Database` trait to store the last sync timestamp and block height
- Rename `ConfirmationTime` to `BlockTime`

## [v0.13.0] - [v0.12.0]

- Exposed `get_tx()` method from `Database` to `Wallet`.

## [v0.12.0] - [v0.11.0]

- Activate `miniscript/use-serde` feature to allow consumers of the library to access it via the re-exported `miniscript` crate.
- Add support for proxies in `EsploraBlockchain`
- Added `SqliteDatabase` that implements `Database`, backed by a sqlite database using the `rusqlite` crate.

## [v0.11.0] - [v0.10.0]

- Added `flush` method to the `Database` trait to explicitly flush to disk the latest changes on the db.

## [v0.10.0] - [v0.9.0]

- Added `RpcBlockchain` to the `AnyBlockchain` struct to allow using the Rpc backend where `AnyBlockchain` is used (e.g. `bdk-cli`)
- Removed hard dependency on `tokio`.

### Wallet

- Removed and replaced `set_single_recipient` with the more general `drain_to`, and replaced `maintain_single_recipient` with `allow_shrinking`.

### Blockchain

- Removed `stop_gap` from the `Blockchain` trait and added it to only the `ElectrumBlockchain` and `EsploraBlockchain` structs.
- Added a `ureq` backend for use when not using the `async-interface` feature or targeting WASM. `ureq` is a blocking HTTP client.

## [v0.9.0] - [v0.8.0]
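The v0.14.0 entry above mentions embedding data in a transaction via `OP_RETURN`. The raw script such an output carries can be sketched by hand; this is illustrative only (BDK's `TxBuilder` builds it for you), and the helper covers only single-byte pushes of up to 75 bytes:

```rust
// Build the raw scriptPubKey bytes of an OP_RETURN output by hand:
// OP_RETURN (0x6a) followed by a minimal push of the payload.
// Push opcodes 0x01..=0x4b (1..=75) encode "push exactly N bytes".
fn op_return_script(data: &[u8]) -> Vec<u8> {
    assert!(!data.is_empty() && data.len() <= 75, "single-byte push only in this sketch");
    let mut script = Vec::with_capacity(2 + data.len());
    script.push(0x6a);             // OP_RETURN
    script.push(data.len() as u8); // direct push of `len` bytes
    script.extend_from_slice(data);
    script
}

fn main() {
    let s = op_return_script(b"hello");
    assert_eq!(s, vec![0x6a, 0x05, b'h', b'e', b'l', b'l', b'o']);
}
```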
@@ -457,6 +354,7 @@ final transaction is created by calling `finish` on the builder.

- Use `MemoryDatabase` in the compiler example
- Make the REPL return JSON

[unreleased]: https://github.com/bitcoindevkit/bdk/compare/v0.9.0...HEAD
[0.1.0-beta.1]: https://github.com/bitcoindevkit/bdk/compare/96c87ea5...0.1.0-beta.1
[v0.2.0]: https://github.com/bitcoindevkit/bdk/compare/0.1.0-beta.1...v0.2.0
[v0.3.0]: https://github.com/bitcoindevkit/bdk/compare/v0.2.0...v0.3.0

@@ -467,16 +365,3 @@ final transaction is created by calling `finish` on the builder.

[v0.7.0]: https://github.com/bitcoindevkit/bdk/compare/v0.6.0...v0.7.0
[v0.8.0]: https://github.com/bitcoindevkit/bdk/compare/v0.7.0...v0.8.0
[v0.9.0]: https://github.com/bitcoindevkit/bdk/compare/v0.8.0...v0.9.0
[v0.10.0]: https://github.com/bitcoindevkit/bdk/compare/v0.9.0...v0.10.0
[v0.11.0]: https://github.com/bitcoindevkit/bdk/compare/v0.10.0...v0.11.0
[v0.12.0]: https://github.com/bitcoindevkit/bdk/compare/v0.11.0...v0.12.0
[v0.13.0]: https://github.com/bitcoindevkit/bdk/compare/v0.12.0...v0.13.0
[v0.14.0]: https://github.com/bitcoindevkit/bdk/compare/v0.13.0...v0.14.0
[v0.15.0]: https://github.com/bitcoindevkit/bdk/compare/v0.14.0...v0.15.0
[v0.16.0]: https://github.com/bitcoindevkit/bdk/compare/v0.15.0...v0.16.0
[v0.16.1]: https://github.com/bitcoindevkit/bdk/compare/v0.16.0...v0.16.1
[v0.17.0]: https://github.com/bitcoindevkit/bdk/compare/v0.16.1...v0.17.0
[v0.18.0]: https://github.com/bitcoindevkit/bdk/compare/v0.17.0...v0.18.0
[v0.19.0]: https://github.com/bitcoindevkit/bdk/compare/v0.18.0...v0.19.0
[v0.20.0]: https://github.com/bitcoindevkit/bdk/compare/v0.19.0...v0.20.0
[unreleased]: https://github.com/bitcoindevkit/bdk/compare/v0.20.0...HEAD
@@ -57,21 +57,6 @@ comment suggesting that you're working on it. If someone is already assigned,

don't hesitate to ask if the assigned party or previous commenters are still working on it if it has been awhile.

Deprecation policy
------------------

Where possible, breaking existing APIs should be avoided. Instead, add new APIs and use [`#[deprecated]`](https://github.com/rust-lang/rfcs/blob/master/text/1270-deprecation.md) to discourage use of the old one.

Deprecated APIs are typically maintained for one release cycle. In other words, an API that has been deprecated with the 0.10 release can be expected to be removed in the 0.11 release. This allows for smoother upgrades without incurring too much technical debt inside this library.

If you deprecated an API as part of a contribution, we encourage you to "own" that API and send a follow-up to remove it as part of the next release cycle.

Peer review
-----------
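The policy above relies on Rust's standard `#[deprecated]` attribute. A minimal example of deprecating an API while adding its replacement (the function names and version number here are invented for illustration):

```rust
// Keep the old API working for one release cycle while pointing callers
// at the replacement, as the deprecation policy above describes.
#[deprecated(since = "0.10.0", note = "use `new_api` instead")]
fn old_api() -> u32 {
    // Delegate so both APIs stay in sync during the transition release.
    new_api()
}

fn new_api() -> u32 {
    42
}

fn main() {
    // Callers that haven't migrated yet see a deprecation warning;
    // here we silence it explicitly to show both paths still agree.
    #[allow(deprecated)]
    let v = old_api();
    assert_eq!(v, new_api());
}
```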
Cargo.toml (71 changes)
@@ -1,6 +1,6 @@
[package]
name = "bdk"
version = "0.20.0"
version = "0.9.1-dev"
edition = "2018"
authors = ["Alekos Filini <alekos.filini@gmail.com>", "Riccardo Casatta <riccardo@casatta.it>"]
homepage = "https://bitcoindevkit.org"

@@ -12,33 +12,30 @@ readme = "README.md"

license = "MIT OR Apache-2.0"

[dependencies]
bdk-macros = "^0.6"
bdk-macros = "^0.4"
log = "^0.4"
miniscript = { version = "7.0", features = ["use-serde"] }
bitcoin = { version = "0.28.1", features = ["use-serde", "base64", "rand"] }
miniscript = "5.1"
bitcoin = { version = "~0.26.2", features = ["use-serde", "base64"] }
serde = { version = "^1.0", features = ["derive"] }
serde_json = { version = "^1.0" }
rand = "^0.7"

# Optional dependencies
sled = { version = "0.34", optional = true }
electrum-client = { version = "0.10", optional = true }
rusqlite = { version = "0.27.0", optional = true }
ahash = { version = "0.7.6", optional = true }
reqwest = { version = "0.11", optional = true, default-features = false, features = ["json"] }
ureq = { version = "~2.2.0", features = ["json"], optional = true }
electrum-client = { version = "0.7", optional = true }
reqwest = { version = "0.11", optional = true, features = ["json"] }
futures = { version = "0.3", optional = true }
async-trait = { version = "0.1", optional = true }
rocksdb = { version = "0.14", default-features = false, features = ["snappy"], optional = true }
rocksdb = { version = "0.14", optional = true }
cc = { version = ">=1.0.64", optional = true }
socks = { version = "0.3", optional = true }
lazy_static = { version = "1.4", optional = true }

bip39 = { version = "1.0.1", optional = true }
tiny-bip39 = { version = "^0.8", optional = true }
zeroize = { version = "<1.4.0", optional = true }
bitcoinconsensus = { version = "0.19.0-3", optional = true }

# Needed by bdk_blockchain_tests macro and the `rpc` feature
bitcoincore-rpc = { version = "0.15", optional = true }
# Needed by bdk_blockchain_tests macro
bitcoincore-rpc = { version = "0.13", optional = true }

# Platform-specific dependencies
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]

@@ -54,51 +51,28 @@ minimal = []

compiler = ["miniscript/compiler"]
verify = ["bitcoinconsensus"]
default = ["key-value-db", "electrum"]
sqlite = ["rusqlite", "ahash"]
sqlite-bundled = ["sqlite", "rusqlite/bundled"]
electrum = ["electrum-client"]
esplora = ["reqwest", "futures"]
compact_filters = ["rocksdb", "socks", "lazy_static", "cc"]
key-value-db = ["sled"]
async-interface = ["async-trait"]
all-keys = ["keys-bip39"]
keys-bip39 = ["bip39"]
keys-bip39 = ["tiny-bip39", "zeroize"]
rpc = ["bitcoincore-rpc"]

# We currently provide multiple implementations of `Blockchain`; all are
# blocking except for the `EsploraBlockchain`, which can be either async or
# blocking, depending on the HTTP client in use.
#
# - Users wanting asynchronous HTTP calls should enable `async-interface` to get
#   access to the asynchronous method implementations. Then, if Esplora is wanted,
#   enable `esplora` AND `reqwest` (`--features=use-esplora-reqwest`).
# - Users wanting blocking HTTP calls can use any of the other blockchain
#   implementations (`compact_filters`, `electrum`, or `esplora`). Users wanting to
#   use Esplora should enable `esplora` AND `ureq` (`--features=use-esplora-ureq`).
#
# WARNING: Please take care with the features below; various combinations will
# fail to build. We cannot currently build `bdk` with `--all-features`.
async-interface = ["async-trait"]
electrum = ["electrum-client"]
# MUST ALSO USE `--no-default-features`.
use-esplora-reqwest = ["esplora", "reqwest", "reqwest/socks", "futures"]
use-esplora-ureq = ["esplora", "ureq", "ureq/socks"]
# Typical configurations will not need to use the `esplora` feature directly.
esplora = []

# Use the feature below with `use-esplora-reqwest` to enable reqwest default TLS support
reqwest-default-tls = ["reqwest/default-tls"]

# Debug/Test features
test-blockchains = ["bitcoincore-rpc", "electrum-client"]
test-electrum = ["electrum", "electrsd/electrs_0_8_10", "electrsd/bitcoind_22_0", "test-blockchains"]
test-rpc = ["rpc", "electrsd/electrs_0_8_10", "electrsd/bitcoind_22_0", "test-blockchains"]
test-rpc-legacy = ["rpc", "electrsd/electrs_0_8_10", "electrsd/bitcoind_0_20_0", "test-blockchains"]
test-esplora = ["electrsd/legacy", "electrsd/esplora_a33e97e1", "electrsd/bitcoind_22_0", "test-blockchains"]
test-electrum = ["electrum", "electrsd/electrs_0_8_10", "test-blockchains"]
test-rpc = ["rpc", "electrsd/electrs_0_8_10", "test-blockchains"]
test-esplora = ["esplora", "electrsd/legacy", "electrsd/esplora_a33e97e1", "test-blockchains"]
test-md-docs = ["electrum"]

[dev-dependencies]
lazy_static = "1.4"
env_logger = "0.7"
clap = "2.33"
electrsd = "0.19.1"
electrsd = { version = "0.6", features = ["trigger", "bitcoind_0_21_1"] }

[[example]]
name = "address_validator"

@@ -111,14 +85,9 @@ name = "miniscriptc"

path = "examples/compiler.rs"
required-features = ["compiler"]

[[example]]
name = "rpcwallet"
path = "examples/rpcwallet.rs"
required-features = ["keys-bip39", "key-value-db", "rpc"]

[workspace]
members = ["macros"]
[package.metadata.docs.rs]
features = ["compiler", "electrum", "esplora", "ureq", "compact_filters", "rpc", "key-value-db", "sqlite", "all-keys", "verify"]
features = ["compiler", "electrum", "esplora", "compact_filters", "rpc", "key-value-db", "all-keys", "verify"]
# defines the configuration attribute `docsrs`
rustdoc-args = ["--cfg", "docsrs"]
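Downstream users pick among these mutually exclusive backends from their own manifest. For example, a hypothetical consumer choosing the blocking Esplora backend per the feature notes above might write (version number assumed):

```toml
# Illustrative downstream manifest fragment, not part of this repository.
# Default features must be disabled, as the WARNING above notes.
[dependencies]
bdk = { version = "0.20", default-features = false, features = ["use-esplora-ureq"] }
```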
@@ -32,14 +32,14 @@ Pre-`v1.0.0` our "major" releases only affect the "minor" semver value. Accordin

- If it's a minor issue you can just fix it in the release branch, since it will be merged back to `master` eventually
- For bigger issues you can fix them on `master` and then *cherry-pick* the commit to the release branch

6. Update the changelog with the new release version.
7. Update `src/lib.rs` with the new version (line ~43)
7. Update `src/lib.rs` with the new version (line ~59)
8. On release day, make a commit on the release branch to bump the version to `x.y.z`. The message should be "Bump version to x.y.z".
9. Add a tag to this commit. The tag name should be `vx.y.z` (for example `v0.5.0`), and the message "Release x.y.z". Make sure the tag is signed; for extra safety use the explicit `--sign` flag.
10. Push the new commits to the upstream release branch, wait for the CI to finish one last time.
11. Publish **all** the updated crates to crates.io.
12. Make a new commit to bump the version value to `x.y.(z+1)-dev`. The message should be "Bump version to x.y.(z+1)-dev".
13. Merge the release branch back into `master`.
14. If the `master` branch contains any unreleased changes to the `bdk-macros` crate, change the `bdk` Cargo.toml `[dependencies]` to point to the local path (i.e. `bdk-macros = { path = "./macros" }`)
14. If the `master` branch contains any unreleased changes to the `bdk-macros`, `bdk-testutils`, or `bdk-testutils-macros` crates, change the `bdk` Cargo.toml `[dev-dependencies]` to point to the local path (i.e. `bdk-testutils-macros = { path = "./testutils-macros" }`)
15. Create the release on GitHub: go to "tags", click on the dots on the right and select "Create Release". Then set the title to `vx.y.z` and write down some brief release notes.
16. Make sure the new release shows up on crates.io and that the docs are built correctly on docs.rs.
17. Announce the release on Twitter, Discord and Telegram.
31
README.md
@@ -1,7 +1,7 @@
 <div align="center">
   <h1>BDK</h1>

-  <img src="./static/bdk.png" width="220" />
+  <img src="./static/bdk.svg" width="220" />

   <p>
     <strong>A modern, lightweight, descriptor-based wallet library written in Rust!</strong>
@@ -13,7 +13,7 @@
     <a href="https://github.com/bitcoindevkit/bdk/actions?query=workflow%3ACI"><img alt="CI Status" src="https://github.com/bitcoindevkit/bdk/workflows/CI/badge.svg"></a>
     <a href="https://codecov.io/gh/bitcoindevkit/bdk"><img src="https://codecov.io/gh/bitcoindevkit/bdk/branch/master/graph/badge.svg"/></a>
     <a href="https://docs.rs/bdk"><img alt="API Docs" src="https://img.shields.io/badge/docs.rs-bdk-green"/></a>
-    <a href="https://blog.rust-lang.org/2021/11/01/Rust-1.56.1.html"><img alt="Rustc Version 1.56.1+" src="https://img.shields.io/badge/rustc-1.56.1%2B-lightgrey.svg"/></a>
+    <a href="https://blog.rust-lang.org/2020/08/27/Rust-1.46.0.html"><img alt="Rustc Version 1.46+" src="https://img.shields.io/badge/rustc-1.46%2B-lightgrey.svg"/></a>
     <a href="https://discord.gg/d7NkDKm"><img alt="Chat on Discord" src="https://img.shields.io/discord/753336465005608961?logo=discord"></a>
   </p>

@@ -41,21 +41,21 @@ The `bdk` library aims to be the core building block for Bitcoin wallets of any
 ```rust,no_run
 use bdk::Wallet;
 use bdk::database::MemoryDatabase;
-use bdk::blockchain::ElectrumBlockchain;
-use bdk::SyncOptions;
+use bdk::blockchain::{noop_progress, ElectrumBlockchain};

 use bdk::electrum_client::Client;
 use bdk::bitcoin::Network;

 fn main() -> Result<(), bdk::Error> {
-    let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
+    let client = Client::new("ssl://electrum.blockstream.info:60002")?;
     let wallet = Wallet::new(
         "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
         Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
-        Network::Testnet,
+        bitcoin::Network::Testnet,
         MemoryDatabase::default(),
+        ElectrumBlockchain::from(client)
     )?;

-    wallet.sync(&blockchain, SyncOptions::default())?;
+    wallet.sync(noop_progress(), None)?;

     println!("Descriptor balance: {} SAT", wallet.get_balance()?);

@@ -70,7 +70,7 @@ use bdk::{Wallet, database::MemoryDatabase};
 use bdk::wallet::AddressIndex::New;

 fn main() -> Result<(), bdk::Error> {
-    let wallet = Wallet::new(
+    let wallet = Wallet::new_offline(
         "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
         Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
         bitcoin::Network::Testnet,
@@ -88,9 +88,9 @@ fn main() -> Result<(), bdk::Error> {
 ### Create a transaction

 ```rust,no_run
-use bdk::{FeeRate, Wallet, SyncOptions};
+use bdk::{FeeRate, Wallet};
 use bdk::database::MemoryDatabase;
-use bdk::blockchain::ElectrumBlockchain;
+use bdk::blockchain::{noop_progress, ElectrumBlockchain};

 use bdk::electrum_client::Client;
 use bdk::wallet::AddressIndex::New;
@@ -98,15 +98,16 @@ use bdk::wallet::AddressIndex::New;
 use bitcoin::consensus::serialize;

 fn main() -> Result<(), bdk::Error> {
-    let blockchain = ElectrumBlockchain::from(Client::new("ssl://electrum.blockstream.info:60002")?);
+    let client = Client::new("ssl://electrum.blockstream.info:60002")?;
     let wallet = Wallet::new(
         "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
         Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
         bitcoin::Network::Testnet,
         MemoryDatabase::default(),
+        ElectrumBlockchain::from(client)
     )?;

-    wallet.sync(&blockchain, SyncOptions::default())?;
+    wallet.sync(noop_progress(), None)?;

     let send_to = wallet.get_address(New)?;
     let (psbt, details) = {
@@ -134,7 +135,7 @@ use bdk::{Wallet, SignOptions, database::MemoryDatabase};
 use bitcoin::consensus::deserialize;

 fn main() -> Result<(), bdk::Error> {
-    let wallet = Wallet::new(
+    let wallet = Wallet::new_offline(
         "wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)",
         Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"),
         bitcoin::Network::Testnet,
@@ -166,7 +167,7 @@ Integration testing require testing features, for example:
 cargo test --features test-electrum
 ```

-The other options are `test-esplora`, `test-rpc` or `test-rpc-legacy` which runs against an older version of Bitcoin Core.
+The other options are `test-esplora` or `test-rpc`.
 Note that `electrs` and `bitcoind` binaries are automatically downloaded (on mac and linux), to specify you already have installed binaries you must use `--no-default-features` and provide `BITCOIND_EXE` and `ELECTRS_EXE` as environment variables.

 ## License

@@ -14,7 +14,6 @@ use std::sync::Arc;
 use bdk::bitcoin;
 use bdk::database::MemoryDatabase;
 use bdk::descriptor::HdKeyPaths;
-#[allow(deprecated)]
 use bdk::wallet::address_validator::{AddressValidator, AddressValidatorError};
 use bdk::KeychainKind;
 use bdk::Wallet;
@@ -26,7 +25,6 @@ use bitcoin::{Network, Script};

 #[derive(Debug)]
 struct DummyValidator;
-#[allow(deprecated)]
 impl AddressValidator for DummyValidator {
     fn validate(
         &self,
@@ -50,9 +48,9 @@ impl AddressValidator for DummyValidator {

 fn main() -> Result<(), bdk::Error> {
     let descriptor = "sh(and_v(v:pk(tpubDDpWvmUrPZrhSPmUzCMBHffvC3HyMAPnWDSAQNBTnj1iZeJa7BZQEttFiP4DS4GCcXQHezdXhn86Hj6LHX5EDstXPWrMaSneRWM8yUf6NFd/*),after(630000)))";
-    let mut wallet = Wallet::new(descriptor, None, Network::Regtest, MemoryDatabase::new())?;
+    let mut wallet =
+        Wallet::new_offline(descriptor, None, Network::Regtest, MemoryDatabase::new())?;

-    #[allow(deprecated)]
     wallet.add_address_validator(Arc::new(DummyValidator));

     wallet.get_address(New)?;

@@ -10,6 +10,7 @@
 // licenses.

 use bdk::blockchain::compact_filters::*;
+use bdk::blockchain::noop_progress;
 use bdk::database::MemoryDatabase;
 use bdk::*;
 use bitcoin::*;
@@ -34,8 +35,9 @@ fn main() -> Result<(), CompactFiltersError> {
     let descriptor = "wpkh(tpubD6NzVbkrYhZ4X2yy78HWrr1M9NT8dKeWfzNiQqDdMqqa9UmmGztGGz6TaLFGsLfdft5iu32gxq1T4eMNxExNNWzVCpf9Y6JZi5TnqoC9wJq/*)";

     let database = MemoryDatabase::default();
-    let wallet = Arc::new(Wallet::new(descriptor, None, Network::Testnet, database).unwrap());
-    wallet.sync(&blockchain, SyncOptions::default()).unwrap();
+    let wallet =
+        Arc::new(Wallet::new(descriptor, None, Network::Testnet, database, blockchain).unwrap());
+    wallet.sync(noop_progress(), None).unwrap();
     info!("balance: {}", wallet.get_balance()?);
     Ok(())
 }

@@ -70,7 +70,7 @@ fn main() -> Result<(), Box<dyn Error>> {
     let policy_str = matches.value_of("POLICY").unwrap();
     info!("Compiling policy: {}", policy_str);

-    let policy = Concrete::<String>::from_str(policy_str)?;
+    let policy = Concrete::<String>::from_str(&policy_str)?;

     let descriptor = match matches.value_of("TYPE").unwrap() {
         "sh" => Descriptor::new_sh(policy.compile()?)?,
@@ -85,11 +85,11 @@

     let network = matches
         .value_of("network")
-        .map(Network::from_str)
+        .map(|n| Network::from_str(n))
         .transpose()
         .unwrap()
         .unwrap_or(Network::Testnet);
-    let wallet = Wallet::new(&format!("{}", descriptor), None, network, database)?;
+    let wallet = Wallet::new_offline(&format!("{}", descriptor), None, network, database)?;

     info!("... First address: {}", wallet.get_address(New)?);

@@ -1,229 +0,0 @@
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.

use bdk::bitcoin::secp256k1::Secp256k1;
use bdk::bitcoin::Amount;
use bdk::bitcoin::Network;
use bdk::bitcoincore_rpc::RpcApi;

use bdk::blockchain::rpc::{Auth, RpcBlockchain, RpcConfig};
use bdk::blockchain::ConfigurableBlockchain;

use bdk::keys::bip39::{Language, Mnemonic, WordCount};
use bdk::keys::{DerivableKey, GeneratableKey, GeneratedKey};

use bdk::miniscript::miniscript::Segwitv0;

use bdk::sled;
use bdk::template::Bip84;
use bdk::wallet::{signer::SignOptions, wallet_name_from_descriptor, AddressIndex, SyncOptions};
use bdk::KeychainKind;
use bdk::Wallet;

use bdk::blockchain::Blockchain;

use electrsd;

use std::error::Error;
use std::path::PathBuf;
use std::str::FromStr;

/// This example demonstrates a typical way to create a wallet and work with bdk.
///
/// This example bdk wallet is connected to a bitcoin core rpc regtest node,
/// and will attempt to receive, create and broadcast transactions.
///
/// To start a bitcoind regtest node programmatically, this example uses
/// the `electrsd` library, which is also a bdk dev-dependency.
///
/// But you can start your own bitcoind backend, and the rest of the example should work fine.
fn main() -> Result<(), Box<dyn Error>> {
    // -- Setting up background bitcoind process

    println!(">> Setting up bitcoind");

    // Start the bitcoind process
    let bitcoind_conf = electrsd::bitcoind::Conf::default();

    // electrsd will automatically download the bitcoin core binaries
    let bitcoind_exe =
        electrsd::bitcoind::downloaded_exe_path().expect("We should always have downloaded path");

    // Launch bitcoind and gather authentication access
    let bitcoind = electrsd::bitcoind::BitcoinD::with_conf(bitcoind_exe, &bitcoind_conf).unwrap();
    let bitcoind_auth = Auth::Cookie {
        file: bitcoind.params.cookie_file.clone(),
    };

    // Get a new core address
    let core_address = bitcoind.client.get_new_address(None, None)?;

    // Generate 101 blocks and use the above address as coinbase
    bitcoind.client.generate_to_address(101, &core_address)?;

    println!(">> bitcoind setup complete");
    println!(
        "Available coins in Core wallet : {}",
        bitcoind.client.get_balance(None, None)?
    );

    // -- Setting up the Wallet

    println!("\n>> Setting up BDK wallet");

    // Get a random private key
    let xprv = generate_random_ext_privkey()?;

    // Use the descriptors derived from the private key to
    // create a unique wallet name.
    // This is a special utility function exposed via `bdk::wallet_name_from_descriptor()`
    let wallet_name = wallet_name_from_descriptor(
        Bip84(xprv.clone(), KeychainKind::External),
        Some(Bip84(xprv.clone(), KeychainKind::Internal)),
        Network::Regtest,
        &Secp256k1::new(),
    )?;

    // Create a database (using the default sled type) to store wallet data
    let mut datadir = PathBuf::from_str("/tmp/")?;
    datadir.push(".bdk-example");
    let database = sled::open(datadir)?;
    let database = database.open_tree(wallet_name.clone())?;

    // Create an RPC configuration for the running bitcoind backend we created in the last step
    // Note: If you are using a custom regtest node, use the appropriate url and auth
    let rpc_config = RpcConfig {
        url: bitcoind.params.rpc_socket.to_string(),
        auth: bitcoind_auth,
        network: Network::Regtest,
        wallet_name,
        skip_blocks: None,
    };

    // Use the above configuration to create an RPC blockchain backend
    let blockchain = RpcBlockchain::from_config(&rpc_config)?;

    // Combine Database + Descriptor to create the final wallet
    let wallet = Wallet::new(
        Bip84(xprv.clone(), KeychainKind::External),
        Some(Bip84(xprv.clone(), KeychainKind::Internal)),
        Network::Regtest,
        database,
    )?;

    // The `wallet` and the `blockchain` are independent structs.
    // The wallet will be used to do all wallet level actions,
    // and the blockchain can be used to do all blockchain level actions.
    // For certain actions (like sync) the wallet will ask for a blockchain.

    // Sync the wallet
    // The first sync is important as this will instantiate the
    // wallet files.
    wallet.sync(&blockchain, SyncOptions::default())?;

    println!(">> BDK wallet setup complete.");
    println!(
        "Available initial coins in BDK wallet : {} sats",
        wallet.get_balance()?
    );

    // -- Wallet transaction demonstration

    println!("\n>> Sending coins: Core --> BDK, 10 BTC");
    // Get a new address to receive coins
    let bdk_new_addr = wallet.get_address(AddressIndex::New)?.address;

    // Send 10 BTC from the core wallet to the bdk wallet
    bitcoind.client.send_to_address(
        &bdk_new_addr,
        Amount::from_btc(10.0)?,
        None,
        None,
        None,
        None,
        None,
        None,
    )?;

    // Confirm the transaction by generating 1 block
    bitcoind.client.generate_to_address(1, &core_address)?;

    // Sync the BDK wallet
    // This time the sync will fetch the new transaction and update it in
    // the wallet database
    wallet.sync(&blockchain, SyncOptions::default())?;

    println!(">> Received coins in BDK wallet");
    println!(
        "Available balance in BDK wallet: {} sats",
        wallet.get_balance()?
    );

    println!("\n>> Sending coins: BDK --> Core, 5 BTC");
    // Attempt to send back 5.0 BTC to the core address by creating a transaction
    //
    // Transactions are created using a `TxBuilder`.
    // This helps us to systematically build a transaction with all
    // required customization.
    // A full list of APIs offered by `TxBuilder` can be found at
    // https://docs.rs/bdk/latest/bdk/wallet/tx_builder/struct.TxBuilder.html
    let mut tx_builder = wallet.build_tx();

    // For a regular transaction, just set the recipient and amount
    tx_builder.set_recipients(vec![(core_address.script_pubkey(), 500000000)]);

    // Finalize the transaction and extract the PSBT
    let (mut psbt, _) = tx_builder.finish()?;

    // Set signing options
    let signopt = SignOptions {
        assume_height: None,
        ..Default::default()
    };

    // Sign the psbt
    wallet.sign(&mut psbt, signopt)?;

    // Extract the signed transaction
    let tx = psbt.extract_tx();

    // Broadcast the transaction
    blockchain.broadcast(&tx)?;

    // Confirm the transaction by generating some blocks
    bitcoind.client.generate_to_address(1, &core_address)?;

    // Sync the BDK wallet
    wallet.sync(&blockchain, SyncOptions::default())?;

    println!(">> Coins sent to Core wallet");
    println!(
        "Remaining BDK wallet balance: {} sats",
        wallet.get_balance()?
    );
    println!("\nCongrats!! You made your first test transaction with bdk and bitcoin core.");

    Ok(())
}

// Helper function demonstrating private key extraction using a bip39 mnemonic
// The mnemonic can be shown to the user for safekeeping, and the same wallet
// private descriptors can be recreated from it.
fn generate_random_ext_privkey() -> Result<impl DerivableKey<Segwitv0> + Clone, Box<dyn Error>> {
    // a Bip39 passphrase can be set optionally
    let password = Some("random password".to_string());

    // Generate a random mnemonic, and use that to create a "DerivableKey"
    let mnemonic: GeneratedKey<_, _> = Mnemonic::generate((WordCount::Words12, Language::English))
        .map_err(|e| e.expect("Unknown Error"))?;

    // `Ok(mnemonic)` would also work if there's no passphrase and it would
    // yield the same result as this construct with `password` = `None`.
    Ok((mnemonic, password))
}

@@ -1,6 +1,6 @@
 [package]
 name = "bdk-macros"
-version = "0.6.0"
+version = "0.4.0"
 authors = ["Alekos Filini <alekos.filini@gmail.com>"]
 edition = "2018"
 homepage = "https://bitcoindevkit.org"

@@ -16,32 +16,65 @@
 //!
 //! ## Example
 //!
-//! When paired with the use of [`ConfigurableBlockchain`], it allows creating any
+//! In this example both `wallet_electrum` and `wallet_esplora` have the same type of
+//! `Wallet<AnyBlockchain, MemoryDatabase>`. This means that they could both, for instance, be
+//! assigned to a struct member.
+//!
+//! ```no_run
+//! # use bitcoin::Network;
+//! # use bdk::blockchain::*;
+//! # use bdk::database::MemoryDatabase;
+//! # use bdk::Wallet;
+//! # #[cfg(feature = "electrum")]
+//! # {
+//! let electrum_blockchain = ElectrumBlockchain::from(electrum_client::Client::new("...")?);
+//! let wallet_electrum: Wallet<AnyBlockchain, _> = Wallet::new(
+//!     "...",
+//!     None,
+//!     Network::Testnet,
+//!     MemoryDatabase::default(),
+//!     electrum_blockchain.into(),
+//! )?;
+//! # }
+//!
+//! # #[cfg(feature = "esplora")]
+//! # {
+//! let esplora_blockchain = EsploraBlockchain::new("...", None, 20);
+//! let wallet_esplora: Wallet<AnyBlockchain, _> = Wallet::new(
+//!     "...",
+//!     None,
+//!     Network::Testnet,
+//!     MemoryDatabase::default(),
+//!     esplora_blockchain.into(),
+//! )?;
+//! # }
+//!
+//! # Ok::<(), bdk::Error>(())
+//! ```
+//!
+//! When paired with the use of [`ConfigurableBlockchain`], it allows creating wallets with any
+//! blockchain type supported using a single line of code:
+//!
 //! ```no_run
 //! # use bitcoin::Network;
 //! # use bdk::blockchain::*;
-//! # #[cfg(all(feature = "esplora", feature = "ureq"))]
-//! # {
 //! # use bdk::database::MemoryDatabase;
 //! # use bdk::Wallet;
 //! let config = serde_json::from_str("...")?;
 //! let blockchain = AnyBlockchain::from_config(&config)?;
-//! let height = blockchain.get_height();
-//! # }
+//! let wallet = Wallet::new(
+//!     "...",
+//!     None,
+//!     Network::Testnet,
+//!     MemoryDatabase::default(),
+//!     blockchain,
+//! )?;
 //! # Ok::<(), bdk::Error>(())
 //! ```

 use super::*;

 macro_rules! impl_from {
     ( boxed $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
         $( $cfg )*
         impl From<$from> for $to {
             fn from(inner: $from) -> Self {
                 <$to>::$variant(Box::new(inner))
             }
         }
     };
     ( $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
         $( $cfg )*
         impl From<$from> for $to {
@@ -61,8 +94,6 @@ macro_rules! impl_inner_method {
             AnyBlockchain::Esplora(inner) => inner.$name( $($args, )* ),
             #[cfg(feature = "compact_filters")]
             AnyBlockchain::CompactFilters(inner) => inner.$name( $($args, )* ),
-            #[cfg(feature = "rpc")]
-            AnyBlockchain::Rpc(inner) => inner.$name( $($args, )* ),
         }
     }
 }
@@ -76,19 +107,15 @@ pub enum AnyBlockchain {
     #[cfg(feature = "electrum")]
     #[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
     /// Electrum client
-    Electrum(Box<electrum::ElectrumBlockchain>),
+    Electrum(electrum::ElectrumBlockchain),
     #[cfg(feature = "esplora")]
     #[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
     /// Esplora client
-    Esplora(Box<esplora::EsploraBlockchain>),
+    Esplora(esplora::EsploraBlockchain),
     #[cfg(feature = "compact_filters")]
     #[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
     /// Compact filters client
-    CompactFilters(Box<compact_filters::CompactFiltersBlockchain>),
-    #[cfg(feature = "rpc")]
-    #[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
-    /// RPC client
-    Rpc(Box<rpc::RpcBlockchain>),
+    CompactFilters(compact_filters::CompactFiltersBlockchain),
 }

 #[maybe_async]
@@ -97,69 +124,39 @@ impl Blockchain for AnyBlockchain {
         maybe_await!(impl_inner_method!(self, get_capabilities))
     }

+    fn setup<D: BatchDatabase, P: 'static + Progress>(
+        &self,
+        database: &mut D,
+        progress_update: P,
+    ) -> Result<(), Error> {
+        maybe_await!(impl_inner_method!(self, setup, database, progress_update))
+    }
+    fn sync<D: BatchDatabase, P: 'static + Progress>(
+        &self,
+        database: &mut D,
+        progress_update: P,
+    ) -> Result<(), Error> {
+        maybe_await!(impl_inner_method!(self, sync, database, progress_update))
+    }

+    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
+        maybe_await!(impl_inner_method!(self, get_tx, txid))
+    }
     fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
         maybe_await!(impl_inner_method!(self, broadcast, tx))
     }

+    fn get_height(&self) -> Result<u32, Error> {
+        maybe_await!(impl_inner_method!(self, get_height))
+    }
     fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
         maybe_await!(impl_inner_method!(self, estimate_fee, target))
     }
 }

-#[maybe_async]
-impl GetHeight for AnyBlockchain {
-    fn get_height(&self) -> Result<u32, Error> {
-        maybe_await!(impl_inner_method!(self, get_height))
-    }
-}

-#[maybe_async]
-impl GetTx for AnyBlockchain {
-    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
-        maybe_await!(impl_inner_method!(self, get_tx, txid))
-    }
-}

-#[maybe_async]
-impl GetBlockHash for AnyBlockchain {
-    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
-        maybe_await!(impl_inner_method!(self, get_block_hash, height))
-    }
-}

-#[maybe_async]
-impl WalletSync for AnyBlockchain {
-    fn wallet_sync<D: BatchDatabase>(
-        &self,
-        database: &mut D,
-        progress_update: Box<dyn Progress>,
-    ) -> Result<(), Error> {
-        maybe_await!(impl_inner_method!(
-            self,
-            wallet_sync,
-            database,
-            progress_update
-        ))
-    }

-    fn wallet_setup<D: BatchDatabase>(
-        &self,
-        database: &mut D,
-        progress_update: Box<dyn Progress>,
-    ) -> Result<(), Error> {
-        maybe_await!(impl_inner_method!(
-            self,
-            wallet_setup,
-            database,
-            progress_update
-        ))
-    }
-}

-impl_from!(boxed electrum::ElectrumBlockchain, AnyBlockchain, Electrum, #[cfg(feature = "electrum")]);
-impl_from!(boxed esplora::EsploraBlockchain, AnyBlockchain, Esplora, #[cfg(feature = "esplora")]);
-impl_from!(boxed compact_filters::CompactFiltersBlockchain, AnyBlockchain, CompactFilters, #[cfg(feature = "compact_filters")]);
-impl_from!(boxed rpc::RpcBlockchain, AnyBlockchain, Rpc, #[cfg(feature = "rpc")]);
+impl_from!(electrum::ElectrumBlockchain, AnyBlockchain, Electrum, #[cfg(feature = "electrum")]);
+impl_from!(esplora::EsploraBlockchain, AnyBlockchain, Esplora, #[cfg(feature = "esplora")]);
+impl_from!(compact_filters::CompactFiltersBlockchain, AnyBlockchain, CompactFilters, #[cfg(feature = "compact_filters")]);

 /// Type that can contain any of the blockchain configurations defined by the library
 ///
@@ -209,10 +206,6 @@ pub enum AnyBlockchainConfig {
     #[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
     /// Compact filters client
     CompactFilters(compact_filters::CompactFiltersBlockchainConfig),
-    #[cfg(feature = "rpc")]
-    #[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
-    /// RPC client configuration
-    Rpc(rpc::RpcConfig),
 }

 impl ConfigurableBlockchain for AnyBlockchain {
@@ -222,20 +215,16 @@ impl ConfigurableBlockchain for AnyBlockchain {
         Ok(match config {
             #[cfg(feature = "electrum")]
             AnyBlockchainConfig::Electrum(inner) => {
-                AnyBlockchain::Electrum(Box::new(electrum::ElectrumBlockchain::from_config(inner)?))
+                AnyBlockchain::Electrum(electrum::ElectrumBlockchain::from_config(inner)?)
             }
             #[cfg(feature = "esplora")]
             AnyBlockchainConfig::Esplora(inner) => {
-                AnyBlockchain::Esplora(Box::new(esplora::EsploraBlockchain::from_config(inner)?))
+                AnyBlockchain::Esplora(esplora::EsploraBlockchain::from_config(inner)?)
             }
             #[cfg(feature = "compact_filters")]
-            AnyBlockchainConfig::CompactFilters(inner) => AnyBlockchain::CompactFilters(Box::new(
+            AnyBlockchainConfig::CompactFilters(inner) => AnyBlockchain::CompactFilters(
                 compact_filters::CompactFiltersBlockchain::from_config(inner)?,
-            )),
-            #[cfg(feature = "rpc")]
-            AnyBlockchainConfig::Rpc(inner) => {
-                AnyBlockchain::Rpc(Box::new(rpc::RpcBlockchain::from_config(inner)?))
-            }
+            ),
         })
     }
 }
@@ -243,4 +232,3 @@ impl ConfigurableBlockchain for AnyBlockchain {
 impl_from!(electrum::ElectrumBlockchainConfig, AnyBlockchainConfig, Electrum, #[cfg(feature = "electrum")]);
 impl_from!(esplora::EsploraBlockchainConfig, AnyBlockchainConfig, Esplora, #[cfg(feature = "esplora")]);
 impl_from!(compact_filters::CompactFiltersBlockchainConfig, AnyBlockchainConfig, CompactFilters, #[cfg(feature = "compact_filters")]);
-impl_from!(rpc::RpcConfig, AnyBlockchainConfig, Rpc, #[cfg(feature = "rpc")]);

764
src/blockchain/compact_filters/address_manager.rs
Normal file
@@ -0,0 +1,764 @@
// Bitcoin Dev Kit
// Written in 2021 by Rajarshi Maitra <rajarshi149@gmail.com>
//                    John Cantrell <johncantrell97@protonmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.

use std::fs::File;
use std::io::prelude::*;

use std::collections::HashSet;
use std::collections::VecDeque;
use std::sync::{
    mpsc::{channel, Receiver, SendError, Sender},
    Arc, RwLock,
};
use std::thread::{self, JoinHandle};
use std::time::Duration;

use std::net::{SocketAddr, ToSocketAddrs};

use std::sync::PoisonError;
use std::sync::{MutexGuard, RwLockReadGuard, RwLockWriteGuard, WaitTimeoutResult};

use serde::{Deserialize, Serialize};
use serde_json::Error as SerdeError;

use super::{Mempool, Peer, PeerError};

use bitcoin::network::{
    constants::{Network, ServiceFlags},
    message::NetworkMessage,
    Address,
};

/// Default address pool minimums
const MIN_CBF_BUFFER: usize = 5;
const MIN_NONCBF_BUFFER: usize = 5;

/// A Discovery structure used by workers
///
/// Discovery can be initiated via a cache,
/// or it will start with the default hardcoded seeds
pub struct AddressDiscovery {
    pending: VecDeque<SocketAddr>,
    visited: HashSet<SocketAddr>,
}

impl AddressDiscovery {
    fn new(network: Network, seeds: VecDeque<SocketAddr>) -> AddressDiscovery {
        let mut network_seeds = AddressDiscovery::seeds(network);
        let mut total_seeds = seeds;
        total_seeds.append(&mut network_seeds);
        AddressDiscovery {
            pending: total_seeds,
            visited: HashSet::new(),
        }
    }

    fn add_pendings(&mut self, addresses: Vec<SocketAddr>) {
        for addr in addresses {
            if !self.pending.contains(&addr) && !self.visited.contains(&addr) {
                self.pending.push_back(addr);
            }
        }
    }

    fn get_next(&mut self) -> Option<SocketAddr> {
        match self.pending.pop_front() {
            None => None,
            Some(next) => {
                self.visited.insert(next);
                Some(next)
            }
        }
    }
    fn seeds(network: Network) -> VecDeque<SocketAddr> {
        let mut seeds = VecDeque::new();

        let port: u16 = match network {
            Network::Bitcoin => 8333,
            Network::Testnet => 18333,
            Network::Regtest => 18444,
            Network::Signet => 38333,
        };

        let seedhosts: &[&str] = match network {
            Network::Bitcoin => &[
                "seed.bitcoin.sipa.be",
                "dnsseed.bluematt.me",
                "dnsseed.bitcoin.dashjr.org",
                "seed.bitcoinstats.com",
                "seed.bitcoin.jonasschnelli.ch",
                "seed.btc.petertodd.org",
                "seed.bitcoin.sprovoost.nl",
                "dnsseed.emzy.de",
                "seed.bitcoin.wiz.biz",
            ],
            Network::Testnet => &[
                "testnet-seed.bitcoin.jonasschnelli.ch",
                "seed.tbtc.petertodd.org",
                "seed.testnet.bitcoin.sprovoost.nl",
                "testnet-seed.bluematt.me",
            ],
            Network::Regtest => &[],
            Network::Signet => &[],
        };

        for seedhost in seedhosts.iter() {
            if let Ok(lookup) = (*seedhost, port).to_socket_addrs() {
                for host in lookup {
                    if host.is_ipv4() {
                        seeds.push_back(host);
                    }
                }
            }
        }
        seeds
    }
}
/// Crawler structure that will interface with Discovery and the public bitcoin network
///
/// Address manager will spawn multiple crawlers in separate threads to discover new addresses.
struct AddressWorker {
    discovery: Arc<RwLock<AddressDiscovery>>,
    sender: Sender<(SocketAddr, ServiceFlags)>,
    network: Network,
}

impl AddressWorker {
    fn new(
        discovery: Arc<RwLock<AddressDiscovery>>,
        sender: Sender<(SocketAddr, ServiceFlags)>,
        network: Network,
    ) -> AddressWorker {
        AddressWorker {
            discovery,
            sender,
            network,
        }
    }

    fn try_receive_addr(&mut self, peer: &Peer) -> Result<(), AddressManagerError> {
        if let Some(NetworkMessage::Addr(new_addresses)) =
            peer.recv("addr", Some(Duration::from_secs(1)))?
        {
            self.consume_addr(new_addresses)?;
        }

        Ok(())
    }

    fn consume_addr(&mut self, addrs: Vec<(u32, Address)>) -> Result<(), AddressManagerError> {
        let mut discovery_lock = self.discovery.write().map_err(PeerError::from)?;
        let mut addresses = Vec::new();
        for network_addrs in addrs {
            if let Ok(socket_addrs) = network_addrs.1.socket_addr() {
                addresses.push(socket_addrs);
            }
        }
        discovery_lock.add_pendings(addresses);

        Ok(())
    }

    fn work(&mut self) -> Result<(), AddressManagerError> {
        loop {
            let next_address = {
                let mut address_discovery = self.discovery.write()?;
                address_discovery.get_next()
            };

            match next_address {
                Some(address) => {
                    let potential_peer = Peer::connect_with_timeout(
                        address,
                        Duration::from_secs(1),
                        Arc::new(Mempool::default()),
                        self.network,
                    );

                    if let Ok(peer) = potential_peer {
                        peer.send(NetworkMessage::GetAddr)?;
                        self.try_receive_addr(&peer)?;
                        self.try_receive_addr(&peer)?;
                        self.sender.send((address, peer.get_version().services))?;
                        // TODO: Investigate why close is being called on non-existent connections;
                        // currently the errors are ignored
                        peer.close().unwrap_or(());
                    }
                }
                None => continue,
            }
        }
    }
}

/// A dedicated cache structure, with cbf/non_cbf separation
///
/// [AddressCache] will interface with file i/o,
/// and can be turned into seeds. Seed generation puts previously cached
/// cbf addresses at the front of the vec, to speed up finding cbf nodes.
#[derive(Serialize, Deserialize)]
struct AddressCache {
    banned_peers: HashSet<SocketAddr>,
    cbf: HashSet<SocketAddr>,
    non_cbf: HashSet<SocketAddr>,
}

impl AddressCache {
    fn empty() -> Self {
        Self {
            banned_peers: HashSet::new(),
            cbf: HashSet::new(),
            non_cbf: HashSet::new(),
        }
    }

    fn from_file(path: &str) -> Result<Option<Self>, AddressManagerError> {
        let serialized: Result<String, _> = std::fs::read_to_string(path);
        let serialized = match serialized {
            Ok(contents) => contents,
            Err(_) => return Ok(None),
        };

        let address_cache = serde_json::from_str(&serialized)?;
        Ok(Some(address_cache))
    }

    fn write_to_file(&self, path: &str) -> Result<(), AddressManagerError> {
        let serialized = serde_json::to_string_pretty(&self)?;

        let mut cache_file = File::create(path)?;

        cache_file.write_all(serialized.as_bytes())?;

        Ok(())
    }

    fn make_seeds(&self) -> VecDeque<SocketAddr> {
        self.cbf
            .iter()
            .chain(self.non_cbf.iter())
            .copied()
            .collect()
    }

    fn remove_address(&mut self, addrs: &SocketAddr, cbf: bool) -> bool {
        if cbf {
            self.cbf.remove(addrs)
        } else {
            self.non_cbf.remove(addrs)
        }
    }

    fn add_address(&mut self, addrs: SocketAddr, cbf: bool) -> bool {
        if cbf {
            self.cbf.insert(addrs)
        } else {
            self.non_cbf.insert(addrs)
        }
    }

    fn add_to_banlist(&mut self, addrs: SocketAddr, cbf: bool) {
        if self.banned_peers.insert(addrs) {
            self.remove_address(&addrs, cbf);
        }
    }
}

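`from_file` treats a missing cache file as "no cache yet" rather than an error, and `write_to_file` overwrites the whole file in one shot. A serde-free sketch of that round trip, using a plain line-per-address format and a hypothetical `/tmp` path purely for illustration:

```rust
use std::fs;
use std::io::Write;

// Write the cache: one address per line (the real code uses serde_json).
fn write_cache(path: &str, addrs: &[&str]) -> std::io::Result<()> {
    let mut file = fs::File::create(path)?;
    for addr in addrs {
        writeln!(file, "{}", addr)?;
    }
    Ok(())
}

// Read the cache back; a missing file maps to None, as in `from_file`.
fn read_cache(path: &str) -> Option<Vec<String>> {
    let contents = fs::read_to_string(path).ok()?;
    Some(contents.lines().map(str::to_string).collect())
}

fn main() {
    let path = "/tmp/addr_cache_sketch";
    write_cache(path, &["127.0.0.1:8333"]).unwrap();
    println!("{:?}", read_cache(path));
    let _ = fs::remove_file(path);
}
```

Mapping the missing-file case to `None` instead of an error is what lets the manager fall back to hardcoded seeds on the very first run.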
/// A live directory, maintained by [AddressManager], of cbf and non_cbf nodes freshly found by workers
///
/// Each new [AddressManager] instance will have a fresh [AddressDirectory].
/// This is independent from the cache and will be an in-memory database to
/// serve addresses to the user.
struct AddressDirectory {
    cbf_nodes: HashSet<SocketAddr>,
    non_cbf_nodes: HashSet<SocketAddr>,

    // List of addresses it has previously provided to the caller (PeerManager)
    previously_sent: HashSet<SocketAddr>,
}

impl AddressDirectory {
    fn new() -> AddressDirectory {
        AddressDirectory {
            cbf_nodes: HashSet::new(),
            non_cbf_nodes: HashSet::new(),
            previously_sent: HashSet::new(),
        }
    }

    fn add_address(&mut self, addr: SocketAddr, cbf: bool) {
        if cbf {
            self.cbf_nodes.insert(addr);
        } else {
            self.non_cbf_nodes.insert(addr);
        }
    }

    fn get_new_address(&mut self, cbf: bool) -> Option<SocketAddr> {
        if cbf {
            if let Some(new_addresses) = self
                .cbf_nodes
                .iter()
                .filter(|item| !self.previously_sent.contains(item))
                .collect::<Vec<&SocketAddr>>()
                .pop()
            {
                self.previously_sent.insert(*new_addresses);
                Some(*new_addresses)
            } else {
                None
            }
        } else if let Some(new_addresses) = self
            .non_cbf_nodes
            .iter()
            .filter(|item| !self.previously_sent.contains(item))
            .collect::<Vec<&SocketAddr>>()
            .pop()
        {
            self.previously_sent.insert(*new_addresses);
            Some(*new_addresses)
        } else {
            None
        }
    }

    fn get_cbf_address_count(&self) -> usize {
        self.cbf_nodes.len()
    }

    fn get_non_cbf_address_count(&self) -> usize {
        self.non_cbf_nodes.len()
    }

    fn remove_address(&mut self, addrs: &SocketAddr, cbf: bool) {
        if cbf {
            self.cbf_nodes.remove(addrs);
        } else {
            self.non_cbf_nodes.remove(addrs);
        }
    }

    fn get_cbf_buffer(&self) -> usize {
        self.cbf_nodes
            .iter()
            .filter(|item| !self.previously_sent.contains(item))
            .count()
    }

    fn get_non_cbf_buffer(&self) -> usize {
        self.non_cbf_nodes
            .iter()
            .filter(|item| !self.previously_sent.contains(item))
            .count()
    }
}

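`get_new_address` applies the same selection pattern to both pools: pick any address not in `previously_sent`, then record it so it is never handed out twice. That duplicated logic could be factored into one helper; a sketch with assumed names:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

// Hand out one address from `pool` that has not been sent before,
// recording it in `sent` so it is never handed out twice.
fn pick_unseen(pool: &HashSet<SocketAddr>, sent: &mut HashSet<SocketAddr>) -> Option<SocketAddr> {
    let next = pool.iter().find(|addr| !sent.contains(*addr)).copied()?;
    sent.insert(next);
    Some(next)
}

fn main() {
    let pool: HashSet<SocketAddr> = ["127.0.0.1:8333".parse().unwrap()].into_iter().collect();
    let mut sent = HashSet::new();
    println!("{:?}", pick_unseen(&pool, &mut sent)); // the one address
    println!("{:?}", pick_unseen(&pool, &mut sent)); // None: already handed out
}
```

Using `find(..)` instead of `collect::<Vec<_>>().pop()` avoids materializing the whole unseen set just to take one element.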
/// Discovery statistics, useful for logging
#[derive(Clone, Copy)]
pub struct DiscoveryData {
    queued: usize,
    visited: usize,
    non_cbf_count: usize,
    cbf_count: usize,
}

/// Progress trait for discovery statistics logging
pub trait DiscoveryProgress {
    /// Update progress
    fn update(&self, data: DiscoveryData);
}

/// Used when progress updates are not desired
#[derive(Clone)]
pub struct NoDiscoveryProgress;

impl DiscoveryProgress for NoDiscoveryProgress {
    fn update(&self, _data: DiscoveryData) {}
}

/// Used to log progress updates
#[derive(Clone)]
pub struct LogDiscoveryProgress;

impl DiscoveryProgress for LogDiscoveryProgress {
    fn update(&self, data: DiscoveryData) {
        log::trace!(
            "P2P Discovery: {} queued, {} visited, {} connected, {} cbf_enabled",
            data.queued,
            data.visited,
            data.non_cbf_count,
            data.cbf_count
        );

        #[cfg(test)]
        println!(
            "P2P Discovery: {} queued, {} visited, {} connected, {} cbf_enabled",
            data.queued, data.visited, data.non_cbf_count, data.cbf_count
        );
    }
}

/// A manager structure managing address discovery
///
/// Manager will try to maintain a given address buffer in its directory:
/// buffer = len(existing addresses) - len(previously provided addresses).
/// Manager will crawl the network until the buffer criteria are satisfied.
/// Manager will bootstrap workers from a cache, to speed up discovery progress in
/// subsequent calls after the first crawl.
/// Manager will keep track of the cache and only update it if previously
/// unknown addresses are found.
pub struct AddressManager<P: DiscoveryProgress> {
    directory: AddressDirectory,
    cache_filename: String,
    discovery: Arc<RwLock<AddressDiscovery>>,
    threads: usize,
    receiver: Receiver<(SocketAddr, ServiceFlags)>,
    sender: Sender<(SocketAddr, ServiceFlags)>,
    network: Network,
    cbf_buffer: usize,
    non_cbf_buffer: usize,
    progress: P,
}

impl<P: DiscoveryProgress> AddressManager<P> {
    /// Create a new manager. Initiate Discovery seeds from the cache
    /// if it exists, else start with hardcoded seeds
    pub fn new(
        network: Network,
        cache_filename: String,
        threads: usize,
        cbf_buffer: Option<usize>,
        non_cbf_buffer: Option<usize>,
        progress: P,
    ) -> Result<AddressManager<P>, AddressManagerError> {
        let (sender, receiver) = channel();

        let seeds = match AddressCache::from_file(&cache_filename)? {
            Some(cache) => cache.make_seeds(),
            None => VecDeque::new(),
        };

        let min_cbf = cbf_buffer.unwrap_or(MIN_CBF_BUFFER);

        let min_non_cbf = non_cbf_buffer.unwrap_or(MIN_NONCBF_BUFFER);

        let discovery = AddressDiscovery::new(network, seeds);

        Ok(AddressManager {
            cache_filename,
            directory: AddressDirectory::new(),
            discovery: Arc::new(RwLock::new(discovery)),
            sender,
            receiver,
            network,
            threads,
            cbf_buffer: min_cbf,
            non_cbf_buffer: min_non_cbf,
            progress,
        })
    }

    /// Get running address discovery progress
    fn get_progress(&self) -> Result<DiscoveryData, AddressManagerError> {
        let (queued_count, visited_count) = {
            let address_discovery = self.discovery.read()?;
            (
                address_discovery.pending.len(),
                address_discovery.visited.len(),
            )
        };

        let cbf_node_count = self.directory.get_cbf_address_count();
        let other_node_count = self.directory.get_non_cbf_address_count();

        Ok(DiscoveryData {
            queued: queued_count,
            visited: visited_count,
            non_cbf_count: cbf_node_count + other_node_count,
            cbf_count: cbf_node_count,
        })
    }

    /// Spawn `self.threads` worker threads
    fn spawn_workers(&mut self) -> Vec<JoinHandle<()>> {
        let mut worker_handles: Vec<JoinHandle<()>> = vec![];
        for _ in 0..self.threads {
            let sender = self.sender.clone();
            let discovery = self.discovery.clone();
            let network = self.network;
            let worker_handle = thread::spawn(move || {
                let mut worker = AddressWorker::new(discovery, sender, network);
                worker.work().unwrap();
            });
            worker_handles.push(worker_handle);
        }
        worker_handles
    }

    /// Crawl the Bitcoin network until the required number of cbf/non_cbf nodes is found
    ///
    /// - Start a bunch of crawlers.
    /// - Load up the existing cache.
    /// - Update the cache with newly found peers.
    /// - Check whether an address is in the banlist.
    /// - Run crawlers until the buffer requirement is met.
    /// - Flush the current cache to disk.
    pub fn fetch(&mut self) -> Result<(), AddressManagerError> {
        self.spawn_workers();

        // Get the already existing cache
        let mut cache = match AddressCache::from_file(&self.cache_filename)? {
            Some(cache) => cache,
            None => AddressCache::empty(),
        };

        while self.directory.get_cbf_buffer() < self.cbf_buffer
            || self.directory.get_non_cbf_buffer() < self.non_cbf_buffer
        {
            if let Ok(message) = self.receiver.recv() {
                let (addr, flag) = message;
                if !cache.banned_peers.contains(&addr) {
                    let cbf = flag.has(ServiceFlags::COMPACT_FILTERS);
                    self.directory.add_address(addr, cbf);
                    cache.add_address(addr, cbf);
                }
            }
        }

        self.progress.update(self.get_progress()?);

        // When completed, flush the cache
        cache.write_to_file(&self.cache_filename)?;

        Ok(())
    }

    /// Get a new cbf address not previously provided
    pub fn get_new_cbf_address(&mut self) -> Option<SocketAddr> {
        self.directory.get_new_address(true)
    }

    /// Get a new non_cbf address
    pub fn get_new_non_cbf_address(&mut self) -> Option<SocketAddr> {
        self.directory.get_new_address(false)
    }

    /// Ban an address
    pub fn ban_peer(&mut self, addrs: &SocketAddr, cbf: bool) -> Result<(), AddressManagerError> {
        let mut cache = AddressCache::from_file(&self.cache_filename)?.ok_or_else(|| {
            AddressManagerError::Generic("Address Cache file not found".to_string())
        })?;

        cache.add_to_banlist(*addrs, cbf);

        // When completed, flush the cache
        cache.write_to_file(&self.cache_filename)?;

        self.directory.remove_address(addrs, cbf);

        Ok(())
    }

    /// Get all the known CBF addresses
    pub fn get_known_cbfs(&self) -> Option<Vec<SocketAddr>> {
        let addresses = self
            .directory
            .cbf_nodes
            .iter()
            .copied()
            .collect::<Vec<SocketAddr>>();

        match addresses.len() {
            0 => None,
            _ => Some(addresses),
        }
    }

    /// Get all the known regular addresses
    pub fn get_known_non_cbfs(&self) -> Option<Vec<SocketAddr>> {
        let addresses = self
            .directory
            .non_cbf_nodes
            .iter()
            .copied()
            .collect::<Vec<SocketAddr>>();

        match addresses.len() {
            0 => None,
            _ => Some(addresses),
        }
    }

    /// Get previously tried addresses
    pub fn get_previously_tried(&self) -> Option<Vec<SocketAddr>> {
        let addresses = self
            .directory
            .previously_sent
            .iter()
            .copied()
            .collect::<Vec<SocketAddr>>();

        match addresses.len() {
            0 => None,
            _ => Some(addresses),
        }
    }
}

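The buffer invariant in the manager's doc comment, buffer = len(existing addresses) - len(previously provided addresses), is the quantity `fetch` loops on until it meets the configured minimum. A small sketch of that counting, with integers standing in for socket addresses:

```rust
use std::collections::HashSet;

// Count nodes not yet handed to the caller, as `get_cbf_buffer` does.
fn buffer(nodes: &HashSet<u32>, previously_sent: &HashSet<u32>) -> usize {
    nodes.iter().filter(|n| !previously_sent.contains(*n)).count()
}

fn main() {
    let nodes: HashSet<u32> = [1, 2, 3].into_iter().collect();
    let sent: HashSet<u32> = [2].into_iter().collect();
    // Two of the three known nodes are still available to hand out.
    println!("{}", buffer(&nodes, &sent)); // prints "2"
}
```

Every address handed out shrinks the buffer by one, so a subsequent `fetch` only crawls until the count recovers.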
#[derive(Debug)]
pub enum AddressManagerError {
    /// Std I/O Error
    Io(std::io::Error),

    /// Internal Peer error
    Peer(PeerError),

    /// Internal Mutex poisoning error
    MutexPoisoned,

    /// Internal Mutex wait timed out
    MutexTimedOut,

    /// Internal RW read lock poisoned
    RwReadLockPoisined,

    /// Internal RW write lock poisoned
    RwWriteLockPoisoned,

    /// Internal MPSC sending error
    MpscSendError,

    /// Serde Json Error
    SerdeJson(SerdeError),

    /// Generic Errors
    Generic(String),
}

impl_error!(PeerError, Peer, AddressManagerError);
impl_error!(std::io::Error, Io, AddressManagerError);
impl_error!(SerdeError, SerdeJson, AddressManagerError);

impl<T> From<PoisonError<MutexGuard<'_, T>>> for AddressManagerError {
    fn from(_: PoisonError<MutexGuard<'_, T>>) -> Self {
        AddressManagerError::MutexPoisoned
    }
}

impl<T> From<PoisonError<RwLockWriteGuard<'_, T>>> for AddressManagerError {
    fn from(_: PoisonError<RwLockWriteGuard<'_, T>>) -> Self {
        AddressManagerError::RwWriteLockPoisoned
    }
}

impl<T> From<PoisonError<RwLockReadGuard<'_, T>>> for AddressManagerError {
    fn from(_: PoisonError<RwLockReadGuard<'_, T>>) -> Self {
        AddressManagerError::RwReadLockPoisined
    }
}

impl<T> From<PoisonError<(MutexGuard<'_, T>, WaitTimeoutResult)>> for AddressManagerError {
    fn from(err: PoisonError<(MutexGuard<'_, T>, WaitTimeoutResult)>) -> Self {
        let (_, wait_result) = err.into_inner();
        if wait_result.timed_out() {
            AddressManagerError::MutexTimedOut
        } else {
            AddressManagerError::MutexPoisoned
        }
    }
}

impl<T> From<SendError<T>> for AddressManagerError {
    fn from(_: SendError<T>) -> Self {
        AddressManagerError::MpscSendError
    }
}

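The blanket `From<PoisonError<...>>` impls above exist so lock acquisitions can use `?` instead of `unwrap()`: a poisoned lock becomes an error variant rather than a panic. A self-contained sketch of the same pattern, with a simplified stand-in error type:

```rust
use std::sync::{Mutex, MutexGuard, PoisonError};

#[derive(Debug, PartialEq)]
enum ManagerError {
    MutexPoisoned,
}

// Same shape as the blanket impl above: any poisoned mutex guard
// converts into the manager's error type.
impl<T> From<PoisonError<MutexGuard<'_, T>>> for ManagerError {
    fn from(_: PoisonError<MutexGuard<'_, T>>) -> Self {
        ManagerError::MutexPoisoned
    }
}

fn read_counter(counter: &Mutex<u32>) -> Result<u32, ManagerError> {
    let guard = counter.lock()?; // `?` applies the From impl
    Ok(*guard)
}

fn main() {
    let counter = Mutex::new(7);
    println!("{:?}", read_counter(&counter)); // prints "Ok(7)"
}
```

If a worker thread panics while holding the lock, the same `?` path now surfaces `MutexPoisoned` to the caller instead of propagating the panic.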
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    #[ignore]
    fn test_address_manager() {
        // Initiate a manager with a non-existent cache file name.
        // It will create a new cache file
        let mut manager = AddressManager::new(
            Network::Bitcoin,
            "addr_cache".to_string(),
            20,
            None,
            None,
            LogDiscoveryProgress,
        )
        .unwrap();

        // start the crawlers and time them
        println!("Starting manager and initial fetch");
        let start = std::time::Instant::now();
        manager.fetch().unwrap();
        let duration1 = start.elapsed();
        println!("Completed initial fetch");

        // Create a new manager from the existing cache and fetch again
        let mut manager = AddressManager::new(
            Network::Bitcoin,
            "addr_cache".to_string(),
            20,
            None,
            None,
            LogDiscoveryProgress,
        )
        .unwrap();

        // start the crawlers and time them
        println!("Starting new fetch with previous cache");
        let start = std::time::Instant::now();
        manager.fetch().unwrap();
        let duration2 = start.elapsed();
        println!("Completed new fetch()");

        println!("Time taken for initial crawl: {:#?}", duration1);
        println!("Time taken for next crawl: {:#?}", duration2);

        // Check buffer management

        println!("Checking buffer management");
        // Fetch a few new addresses and ensure the buffer goes to zero
        let mut addrs_list = Vec::new();
        for _ in 0..5 {
            let addr_cbf = manager.get_new_cbf_address().unwrap();
            let addrs_non_cbf = manager.get_new_non_cbf_address().unwrap();

            addrs_list.push(addr_cbf);

            addrs_list.push(addrs_non_cbf);
        }

        assert_eq!(addrs_list.len(), 10);

        // This should exhaust the cbf buffer
        assert_eq!(manager.directory.get_cbf_buffer(), 0);

        // Calling fetch again should start crawlers until buffer
        // requirements are matched.
        println!("Address buffer exhausted, starting new fetch");
        manager.fetch().unwrap();
        println!("Fetch complete");
        // It should again have a cbf buffer of 5
        assert_eq!(manager.directory.get_cbf_buffer(), 5);

        println!("Buffer management passed");
    }
}
@@ -63,22 +63,27 @@ use bitcoin::{Network, OutPoint, Transaction, Txid};

use rocksdb::{Options, SliceTransform, DB};

mod address_manager;
mod peer;
mod peermngr;
mod store;
mod sync;

use crate::blockchain::*;
use super::{Blockchain, Capability, ConfigurableBlockchain, Progress};
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{KeychainKind, LocalUtxo, TransactionDetails};
use crate::{BlockTime, FeeRate};
use crate::{ConfirmationTime, FeeRate};

use peer::*;
use store::*;
use sync::*;

// Only added to avoid unused warnings in addrsmngr module
pub use address_manager::{
    AddressManager, DiscoveryProgress, LogDiscoveryProgress, NoDiscoveryProgress,
};
pub use peer::{Mempool, Peer};

const SYNC_HEADERS_COST: f32 = 1.0;
const SYNC_FILTERS_COST: f32 = 11.6 * 1_000.0;
const PROCESS_BLOCKS_COST: f32 = 20_000.0;
@@ -163,19 +168,11 @@ impl CompactFiltersBlockchain {
        if let Some(previous_output) = database.get_previous_output(&input.previous_output)? {
            inputs_sum += previous_output.value;

            // this output is ours, we have a path to derive it
            if let Some((keychain, _)) =
                database.get_path_from_script_pubkey(&previous_output.script_pubkey)?
            {
            if database.is_mine(&previous_output.script_pubkey)? {
                outgoing += previous_output.value;

                debug!("{} input #{} is mine, setting utxo as spent", tx.txid(), i);
                updates.set_utxo(&LocalUtxo {
                    outpoint: input.previous_output,
                    txout: previous_output.clone(),
                    keychain,
                    is_spent: true,
                })?;
                debug!("{} input #{} is mine, removing from utxo", tx.txid(), i);
                updates.del_utxo(&input.previous_output)?;
            }
        }
    }
@@ -193,7 +190,6 @@ impl CompactFiltersBlockchain {
                outpoint: OutPoint::new(tx.txid(), i as u32),
                txout: output.clone(),
                keychain,
                is_spent: false,
            })?;
            incoming += output.value;

@@ -215,7 +211,8 @@ impl CompactFiltersBlockchain {
            transaction: Some(tx.clone()),
            received: incoming,
            sent: outgoing,
            confirmation_time: BlockTime::new(height, timestamp),
            confirmation_time: ConfirmationTime::new(height, timestamp),
            verified: height.is_some(),
            fee: Some(inputs_sum.saturating_sub(outputs_sum)),
        };

@@ -234,48 +231,11 @@ impl Blockchain for CompactFiltersBlockchain {
        vec![Capability::FullHistory].into_iter().collect()
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        self.peers[0].broadcast_tx(tx.clone())?;

        Ok(())
    }

    fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
        // TODO
        Ok(FeeRate::default())
    }
}

impl GetHeight for CompactFiltersBlockchain {
    fn get_height(&self) -> Result<u32, Error> {
        Ok(self.headers.get_height()? as u32)
    }
}

impl GetTx for CompactFiltersBlockchain {
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(self.peers[0]
            .get_mempool()
            .get_tx(&Inventory::Transaction(*txid)))
    }
}

impl GetBlockHash for CompactFiltersBlockchain {
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
        self.headers
            .get_block_hash(height as usize)?
            .ok_or(Error::CompactFilters(
                CompactFiltersError::BlockHashNotFound,
            ))
    }
}

impl WalletSync for CompactFiltersBlockchain {
    #[allow(clippy::mutex_atomic)] // Mutex is easier to understand than a CAS loop.
    fn wallet_setup<D: BatchDatabase>(
    fn setup<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
        progress_update: P,
    ) -> Result<(), Error> {
        let first_peer = &self.peers[0];

@@ -299,7 +259,7 @@ impl WalletSync for CompactFiltersBlockchain {
        let total_cost = headers_cost + filters_cost + PROCESS_BLOCKS_COST;

        if let Some(snapshot) = sync::sync_headers(
            Arc::clone(first_peer),
            Arc::clone(&first_peer),
            Arc::clone(&self.headers),
            |new_height| {
                let local_headers_cost =
@@ -320,7 +280,7 @@ impl WalletSync for CompactFiltersBlockchain {
        let buried_height = synced_height.saturating_sub(sync::BURIED_CONFIRMATIONS);
        info!("Synced headers to height: {}", synced_height);

        cf_sync.prepare_sync(Arc::clone(first_peer))?;
        cf_sync.prepare_sync(Arc::clone(&first_peer))?;

        let all_scripts = Arc::new(
            database
@@ -339,7 +299,7 @@ impl WalletSync for CompactFiltersBlockchain {
        let mut threads = Vec::with_capacity(self.peers.len());
        for peer in &self.peers {
            let cf_sync = Arc::clone(&cf_sync);
            let peer = Arc::clone(peer);
            let peer = Arc::clone(&peer);
            let headers = Arc::clone(&self.headers);
            let all_scripts = Arc::clone(&all_scripts);
            let last_synced_block = Arc::clone(&last_synced_block);
@@ -416,10 +376,10 @@ impl WalletSync for CompactFiltersBlockchain {
        database.commit_batch(updates)?;

        match first_peer.ask_for_mempool() {
            Err(CompactFiltersError::PeerBloomDisabled) => {
            Err(PeerError::PeerBloomDisabled(_)) => {
                log::warn!("Peer has BLOOM disabled, we can't ask for the mempool")
            }
            e => e?,
            e => e.map_err(CompactFiltersError::from)?,
        };

        let mut internal_max_deriv = None;
@@ -437,7 +397,12 @@ impl WalletSync for CompactFiltersBlockchain {
            )?;
        }
    }
    for tx in first_peer.get_mempool().iter_txs().iter() {
    for tx in first_peer
        .get_mempool()
        .iter_txs()
        .map_err(CompactFiltersError::from)?
        .iter()
    {
        self.process_tx(
            database,
            tx,
@@ -476,6 +441,30 @@ impl WalletSync for CompactFiltersBlockchain {

        Ok(())
    }

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(self.peers[0]
            .get_mempool()
            .get_tx(&Inventory::Transaction(*txid))
            .map_err(CompactFiltersError::from)?)
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        self.peers[0]
            .broadcast_tx(tx.clone())
            .map_err(CompactFiltersError::from)?;

        Ok(())
    }

    fn get_height(&self) -> Result<u32, Error> {
        Ok(self.headers.get_height()? as u32)
    }

    fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
        // TODO
        Ok(FeeRate::default())
    }
}

/// Data to connect to a Bitcoin P2P peer
@@ -496,7 +485,7 @@ pub struct CompactFiltersBlockchainConfig {
    pub peers: Vec<BitcoinPeerConfig>,
    /// Network used
    pub network: Network,
    /// Storage dir to save partially downloaded headers and full blocks. Should be a separate directory per descriptor. Consider using [crate::wallet::wallet_name_from_descriptor] for this.
    /// Storage dir to save partially downloaded headers and full blocks
    pub storage_dir: String,
    /// Optionally skip initial `skip_blocks` blocks (default: 0)
    pub skip_blocks: Option<usize>,
@@ -511,7 +500,8 @@ impl ConfigurableBlockchain for CompactFiltersBlockchain {
            .peers
            .iter()
            .map(|peer_conf| match &peer_conf.socks5 {
                None => Peer::connect(&peer_conf.address, Arc::clone(&mempool), config.network),
                None => Peer::connect(&peer_conf.address, Arc::clone(&mempool), config.network)
                    .map_err(CompactFiltersError::from),
                Some(proxy) => Peer::connect_proxy(
                    peer_conf.address.as_str(),
                    proxy,
@@ -521,7 +511,8 @@ impl ConfigurableBlockchain for CompactFiltersBlockchain {
                        .map(|(a, b)| (a.as_str(), b.as_str())),
                    Arc::clone(&mempool),
                    config.network,
                ),
                )
                .map_err(CompactFiltersError::from),
            })
            .collect::<Result<_, _>>()?;

@@ -546,8 +537,6 @@ pub enum CompactFiltersError {
    InvalidFilter,
    /// The peer is missing a block in the valid chain
    MissingBlock,
    /// Block hash at specified height not found
    BlockHashNotFound,
    /// The data stored in the block filters storage are corrupted
    DataCorruption,

@@ -572,6 +561,9 @@ pub enum CompactFiltersError {

    /// Wrapper for [`crate::error::Error`]
    Global(Box<crate::error::Error>),

    /// Internal Peer Error
    Peer(PeerError),
}

impl fmt::Display for CompactFiltersError {
@@ -586,6 +578,7 @@ impl_error!(rocksdb::Error, Db, CompactFiltersError);
impl_error!(std::io::Error, Io, CompactFiltersError);
impl_error!(bitcoin::util::bip158::Error, Bip158, CompactFiltersError);
impl_error!(std::time::SystemTimeError, Time, CompactFiltersError);
impl_error!(PeerError, Peer, CompactFiltersError);

impl From<crate::error::Error> for CompactFiltersError {
    fn from(err: crate::error::Error) -> Self {

@@ -10,28 +10,30 @@
// licenses.

use std::collections::HashMap;
use std::io::BufReader;
use std::net::{TcpStream, ToSocketAddrs};
use std::fmt;
use std::net::{SocketAddr, TcpStream, ToSocketAddrs};
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

use std::sync::PoisonError;
use std::sync::{MutexGuard, RwLockReadGuard, RwLockWriteGuard, WaitTimeoutResult};

use socks::{Socks5Stream, ToTargetAddr};

use rand::{thread_rng, Rng};

use bitcoin::consensus::{Decodable, Encodable};
use bitcoin::consensus::Encodable;
use bitcoin::hash_types::BlockHash;
use bitcoin::network::constants::ServiceFlags;
use bitcoin::network::message::{NetworkMessage, RawNetworkMessage};
use bitcoin::network::message_blockdata::*;
use bitcoin::network::message_filter::*;
use bitcoin::network::message_network::VersionMessage;
use bitcoin::network::stream_reader::StreamReader;
use bitcoin::network::Address;
use bitcoin::{Block, Network, Transaction, Txid, Wtxid};

use super::CompactFiltersError;

type ResponsesMap = HashMap<&'static str, Arc<(Mutex<Vec<NetworkMessage>>, Condvar)>>;

pub(crate) const TIMEOUT_SECS: u64 = 30;
@@ -65,17 +67,18 @@ impl Mempool {
    ///
    /// Note that this doesn't propagate the transaction to other
    /// peers. To do that, [`broadcast`](crate::blockchain::Blockchain::broadcast) should be used.
-    pub fn add_tx(&self, tx: Transaction) {
-        let mut guard = self.0.write().unwrap();
+    pub fn add_tx(&self, tx: Transaction) -> Result<(), PeerError> {
+        let mut guard = self.0.write()?;

        guard.wtxids.insert(tx.wtxid(), tx.txid());
        guard.txs.insert(tx.txid(), tx);
+        Ok(())
    }

    /// Look-up a transaction in the mempool given an [`Inventory`] request
-    pub fn get_tx(&self, inventory: &Inventory) -> Option<Transaction> {
+    pub fn get_tx(&self, inventory: &Inventory) -> Result<Option<Transaction>, PeerError> {
        let identifer = match inventory {
-            Inventory::Error | Inventory::Block(_) | Inventory::WitnessBlock(_) => return None,
+            Inventory::Error | Inventory::Block(_) | Inventory::WitnessBlock(_) => return Ok(None),
            Inventory::Transaction(txid) => TxIdentifier::Txid(*txid),
            Inventory::WitnessTransaction(txid) => TxIdentifier::Txid(*txid),
            Inventory::WTx(wtxid) => TxIdentifier::Wtxid(*wtxid),
@@ -85,32 +88,39 @@ impl Mempool {
                    inv_type,
                    hash
                );
-                return None;
+                return Ok(None);
            }
        };

        let txid = match identifer {
            TxIdentifier::Txid(txid) => Some(txid),
-            TxIdentifier::Wtxid(wtxid) => self.0.read().unwrap().wtxids.get(&wtxid).cloned(),
+            TxIdentifier::Wtxid(wtxid) => self.0.read()?.wtxids.get(&wtxid).cloned(),
        };

-        txid.and_then(|txid| self.0.read().unwrap().txs.get(&txid).cloned())
+        let result = match txid {
+            Some(txid) => {
+                let read_lock = self.0.read()?;
+                read_lock.txs.get(&txid).cloned()
+            }
+            None => None,
+        };
+
+        Ok(result)
    }

    /// Return whether or not the mempool contains a transaction with a given txid
-    pub fn has_tx(&self, txid: &Txid) -> bool {
-        self.0.read().unwrap().txs.contains_key(txid)
+    pub fn has_tx(&self, txid: &Txid) -> Result<bool, PeerError> {
+        Ok(self.0.read()?.txs.contains_key(txid))
    }

    /// Return the list of transactions contained in the mempool
-    pub fn iter_txs(&self) -> Vec<Transaction> {
-        self.0.read().unwrap().txs.values().cloned().collect()
+    pub fn iter_txs(&self) -> Result<Vec<Transaction>, PeerError> {
+        Ok(self.0.read()?.txs.values().cloned().collect())
    }
}
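The hunk above replaces panicking `.unwrap()` lock access with `?` returns. A minimal stdlib-only sketch of the same pattern, with hypothetical `TxStore`/`StoreError` names and simplified `String` keys standing in for `Txid`/`Transaction`:

```rust
use std::collections::HashMap;
use std::sync::{PoisonError, RwLock};

#[derive(Debug, PartialEq)]
enum StoreError {
    LockPoisoned,
}

// Any poisoned-lock error collapses into a single variant, mirroring how the
// diff maps `PoisonError` into `PeerError` via blanket `From` impls.
impl<G> From<PoisonError<G>> for StoreError {
    fn from(_: PoisonError<G>) -> Self {
        StoreError::LockPoisoned
    }
}

struct TxStore(RwLock<HashMap<String, String>>);

impl TxStore {
    fn new() -> Self {
        TxStore(RwLock::new(HashMap::new()))
    }

    // `?` converts a poisoned write lock into StoreError instead of panicking.
    fn add_tx(&self, txid: &str, raw: &str) -> Result<(), StoreError> {
        let mut guard = self.0.write()?;
        guard.insert(txid.to_string(), raw.to_string());
        Ok(())
    }

    fn has_tx(&self, txid: &str) -> Result<bool, StoreError> {
        Ok(self.0.read()?.contains_key(txid))
    }
}

fn main() -> Result<(), StoreError> {
    let store = TxStore::new();
    store.add_tx("txid-1", "rawtx")?;
    assert!(store.has_tx("txid-1")?);
    assert!(!store.has_tx("txid-2")?);
    Ok(())
}
```

The callers pay for this with `Result`-returning accessors, but a reader thread can now surface a poisoned lock as a recoverable error instead of aborting.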

/// A Bitcoin peer
#[derive(Debug)]
+#[allow(dead_code)]
pub struct Peer {
    writer: Arc<Mutex<TcpStream>>,
    responses: Arc<RwLock<ResponsesMap>>,
@@ -133,12 +143,31 @@ impl Peer {
        address: A,
        mempool: Arc<Mempool>,
        network: Network,
-    ) -> Result<Self, CompactFiltersError> {
+    ) -> Result<Self, PeerError> {
        let stream = TcpStream::connect(address)?;

        Peer::from_stream(stream, mempool, network)
    }

+    /// Connect to a peer over a plaintext TCP connection with a timeout
+    ///
+    /// This function behaves exactly the same as `connect`, except for two differences:
+    /// 1) It assumes your `ToSocketAddrs` will resolve to a single address
+    /// 2) It lets you specify a connection timeout
+    pub fn connect_with_timeout<A: ToSocketAddrs>(
+        address: A,
+        timeout: Duration,
+        mempool: Arc<Mempool>,
+        network: Network,
+    ) -> Result<Self, PeerError> {
+        let socket_addr = address
+            .to_socket_addrs()?
+            .next()
+            .ok_or(PeerError::AddresseResolution)?;
+        let stream = TcpStream::connect_timeout(&socket_addr, timeout)?;
+        Peer::from_stream(stream, mempool, network)
+    }
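The new `connect_with_timeout` resolves the address list and keeps only the first candidate before calling `TcpStream::connect_timeout`. A small stdlib sketch of that resolution step (the `first_addr` helper is hypothetical; the actual connect call is commented out so the example runs offline):

```rust
use std::net::{SocketAddr, ToSocketAddrs};
use std::time::Duration;

// Resolve a host:port string and keep only the first candidate address,
// as `connect_with_timeout` does before `TcpStream::connect_timeout`.
fn first_addr(target: &str) -> Option<SocketAddr> {
    target.to_socket_addrs().ok()?.next()
}

fn main() {
    let addr = first_addr("127.0.0.1:8333").expect("literal addresses always resolve");
    assert_eq!(addr.port(), 8333);

    // A real caller would now attempt, e.g.:
    // let stream = std::net::TcpStream::connect_timeout(&addr, Duration::from_secs(5))?;
    let _timeout = Duration::from_secs(5);
}
```

Taking only `.next()` is why the doc comment warns that the resolver is assumed to yield a single address: any further candidates are silently discarded.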

    /// Connect to a peer through a SOCKS5 proxy, optionally by using some credentials, specified
    /// as a tuple of `(username, password)`
    ///
@@ -150,7 +179,7 @@ impl Peer {
        credentials: Option<(&str, &str)>,
        mempool: Arc<Mempool>,
        network: Network,
-    ) -> Result<Self, CompactFiltersError> {
+    ) -> Result<Self, PeerError> {
        let socks_stream = if let Some((username, password)) = credentials {
            Socks5Stream::connect_with_password(proxy, target, username, password)?
        } else {
@@ -165,12 +194,12 @@ impl Peer {
        stream: TcpStream,
        mempool: Arc<Mempool>,
        network: Network,
-    ) -> Result<Self, CompactFiltersError> {
+    ) -> Result<Self, PeerError> {
        let writer = Arc::new(Mutex::new(stream.try_clone()?));
        let responses: Arc<RwLock<ResponsesMap>> = Arc::new(RwLock::new(HashMap::new()));
        let connected = Arc::new(RwLock::new(true));

-        let mut locked_writer = writer.lock().unwrap();
+        let mut locked_writer = writer.lock()?;

        let reader_thread_responses = Arc::clone(&responses);
        let reader_thread_writer = Arc::clone(&writer);
@@ -185,6 +214,7 @@ impl Peer {
                reader_thread_mempool,
                reader_thread_connected,
            )
+            .unwrap()
        });

        let timestamp = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs() as i64;
@@ -209,18 +239,20 @@ impl Peer {
                0,
            )),
        )?;
-        let version = if let NetworkMessage::Version(version) =
-            Self::_recv(&responses, "version", None).unwrap()
-        {
-            version
-        } else {
-            return Err(CompactFiltersError::InvalidResponse);
+
+        let version = match Self::_recv(&responses, "version", Some(Duration::from_secs(1)))? {
+            Some(NetworkMessage::Version(version)) => version,
+            _ => {
+                return Err(PeerError::InvalidResponse(locked_writer.peer_addr()?));
+            }
        };

-        if let NetworkMessage::Verack = Self::_recv(&responses, "verack", None).unwrap() {
+        if let Some(NetworkMessage::Verack) =
+            Self::_recv(&responses, "verack", Some(Duration::from_secs(1)))?
+        {
            Self::_send(&mut locked_writer, network.magic(), NetworkMessage::Verack)?;
        } else {
-            return Err(CompactFiltersError::InvalidResponse);
+            return Err(PeerError::InvalidResponse(locked_writer.peer_addr()?));
        }

        std::mem::drop(locked_writer);
@@ -236,19 +268,26 @@ impl Peer {
        })
    }

+    /// Close the peer connection
+    // Consume Self
+    pub fn close(self) -> Result<(), PeerError> {
+        let locked_writer = self.writer.lock()?;
+        Ok((*locked_writer).shutdown(std::net::Shutdown::Both)?)
+    }
+
+    /// Get the socket address of the remote peer
+    pub fn get_address(&self) -> Result<SocketAddr, PeerError> {
+        let locked_writer = self.writer.lock()?;
+        Ok(locked_writer.peer_addr()?)
+    }
+
    /// Send a Bitcoin network message
-    fn _send(
-        writer: &mut TcpStream,
-        magic: u32,
-        payload: NetworkMessage,
-    ) -> Result<(), CompactFiltersError> {
+    fn _send(writer: &mut TcpStream, magic: u32, payload: NetworkMessage) -> Result<(), PeerError> {
        log::trace!("==> {:?}", payload);

        let raw_message = RawNetworkMessage { magic, payload };

-        raw_message
-            .consensus_encode(writer)
-            .map_err(|_| CompactFiltersError::DataCorruption)?;
+        raw_message.consensus_encode(writer)?;

        Ok(())
    }
@@ -258,30 +297,30 @@ impl Peer {
        responses: &Arc<RwLock<ResponsesMap>>,
        wait_for: &'static str,
        timeout: Option<Duration>,
-    ) -> Option<NetworkMessage> {
+    ) -> Result<Option<NetworkMessage>, PeerError> {
        let message_resp = {
-            let mut lock = responses.write().unwrap();
+            let mut lock = responses.write()?;
            let message_resp = lock.entry(wait_for).or_default();
-            Arc::clone(message_resp)
+            Arc::clone(&message_resp)
        };

        let (lock, cvar) = &*message_resp;

-        let mut messages = lock.lock().unwrap();
+        let mut messages = lock.lock()?;
        while messages.is_empty() {
            match timeout {
-                None => messages = cvar.wait(messages).unwrap(),
+                None => messages = cvar.wait(messages)?,
                Some(t) => {
-                    let result = cvar.wait_timeout(messages, t).unwrap();
+                    let result = cvar.wait_timeout(messages, t)?;
                    if result.1.timed_out() {
-                        return None;
+                        return Ok(None);
                    }
                    messages = result.0;
                }
            }
        }

-        messages.pop()
+        Ok(messages.pop())
    }
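`_recv` above is a classic condvar consumer: loop while the queue is empty, distinguish a timeout from a wakeup, and pop one message. A self-contained stdlib sketch of the same loop (`recv_one` and the `String` messages are illustrative stand-ins for the per-command `NetworkMessage` queues):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Block until a message arrives or the timeout elapses, returning
// None on timeout -- the same shape `_recv` uses.
fn recv_one(pair: &Arc<(Mutex<Vec<String>>, Condvar)>, timeout: Duration) -> Option<String> {
    let (lock, cvar) = &**pair;
    let mut messages = lock.lock().unwrap();
    // The `while` guards against spurious wakeups: only a non-empty
    // queue or an explicit timeout ends the loop.
    while messages.is_empty() {
        let (guard, result) = cvar.wait_timeout(messages, timeout).unwrap();
        if result.timed_out() {
            return None;
        }
        messages = guard;
    }
    messages.pop()
}

fn main() {
    let pair = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    let producer = Arc::clone(&pair);
    thread::spawn(move || {
        let (lock, cvar) = &*producer;
        lock.lock().unwrap().push("verack".to_string());
        cvar.notify_all();
    });
    assert_eq!(
        recv_one(&pair, Duration::from_secs(5)),
        Some("verack".to_string())
    );

    // No producer for this queue: the wait times out and yields None.
    let empty = Arc::new((Mutex::new(Vec::<String>::new()), Condvar::new()));
    assert_eq!(recv_one(&empty, Duration::from_millis(10)), None);
}
```

Note that, as in the diff, a message that lands in the same instant the wait times out is reported as `None`; the caller treats a timeout and an empty queue identically.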

    /// Return the [`VersionMessage`] sent by the peer
@@ -300,8 +339,8 @@ impl Peer {
    }

    /// Return whether or not the peer is still connected
-    pub fn is_connected(&self) -> bool {
-        *self.connected.read().unwrap()
+    pub fn is_connected(&self) -> Result<bool, PeerError> {
+        Ok(*self.connected.read()?)
    }

    /// Internal function called once the `reader_thread` is spawned
@@ -312,14 +351,14 @@ impl Peer {
        reader_thread_writer: Arc<Mutex<TcpStream>>,
        reader_thread_mempool: Arc<Mempool>,
        reader_thread_connected: Arc<RwLock<bool>>,
-    ) {
+    ) -> Result<(), PeerError> {
        macro_rules! check_disconnect {
            ($call:expr) => {
                match $call {
                    Ok(good) => good,
                    Err(e) => {
                        log::debug!("Error {:?}", e);
-                        *reader_thread_connected.write().unwrap() = false;
+                        *reader_thread_connected.write()? = false;

                        break;
                    }
@@ -327,10 +366,9 @@ impl Peer {
            };
        }

-        let mut reader = BufReader::new(connection);
-        loop {
-            let raw_message: RawNetworkMessage =
-                check_disconnect!(Decodable::consensus_decode(&mut reader));
+        let mut reader = StreamReader::new(connection, None);
+        while *reader_thread_connected.read()? {
+            let raw_message: RawNetworkMessage = check_disconnect!(reader.read_next());

            let in_message = if raw_message.magic != network.magic() {
                continue;
@@ -343,7 +381,7 @@ impl Peer {
            match in_message {
                NetworkMessage::Ping(nonce) => {
                    check_disconnect!(Self::_send(
-                        &mut reader_thread_writer.lock().unwrap(),
+                        &mut *reader_thread_writer.lock()?,
                        network.magic(),
                        NetworkMessage::Pong(nonce),
                    ));
@@ -354,19 +392,21 @@ impl Peer {
                NetworkMessage::GetData(ref inv) => {
                    let (found, not_found): (Vec<_>, Vec<_>) = inv
                        .iter()
-                        .map(|item| (*item, reader_thread_mempool.get_tx(item)))
+                        .map(|item| (*item, reader_thread_mempool.get_tx(item).unwrap()))
                        .partition(|(_, d)| d.is_some());
                    for (_, found_tx) in found {
                        check_disconnect!(Self::_send(
-                            &mut reader_thread_writer.lock().unwrap(),
+                            &mut *reader_thread_writer.lock()?,
                            network.magic(),
-                            NetworkMessage::Tx(found_tx.unwrap()),
+                            NetworkMessage::Tx(found_tx.ok_or_else(|| PeerError::Generic(
+                                "Got None while expecting Transaction".to_string()
+                            ))?),
                        ));
                    }

                    if !not_found.is_empty() {
                        check_disconnect!(Self::_send(
-                            &mut reader_thread_writer.lock().unwrap(),
+                            &mut *reader_thread_writer.lock()?,
                            network.magic(),
                            NetworkMessage::NotFound(
                                not_found.into_iter().map(|(i, _)| i).collect(),
@@ -378,21 +418,23 @@ impl Peer {
            }

            let message_resp = {
-                let mut lock = reader_thread_responses.write().unwrap();
+                let mut lock = reader_thread_responses.write()?;
                let message_resp = lock.entry(in_message.cmd()).or_default();
-                Arc::clone(message_resp)
+                Arc::clone(&message_resp)
            };

            let (lock, cvar) = &*message_resp;
-            let mut messages = lock.lock().unwrap();
+            let mut messages = lock.lock()?;
            messages.push(in_message);
            cvar.notify_all();
        }
+
+        Ok(())
    }

    /// Send a raw Bitcoin message to the peer
-    pub fn send(&self, payload: NetworkMessage) -> Result<(), CompactFiltersError> {
-        let mut writer = self.writer.lock().unwrap();
+    pub fn send(&self, payload: NetworkMessage) -> Result<(), PeerError> {
+        let mut writer = self.writer.lock()?;
        Self::_send(&mut writer, self.network.magic(), payload)
    }

@@ -401,30 +443,27 @@ impl Peer {
        &self,
        wait_for: &'static str,
        timeout: Option<Duration>,
-    ) -> Result<Option<NetworkMessage>, CompactFiltersError> {
-        Ok(Self::_recv(&self.responses, wait_for, timeout))
+    ) -> Result<Option<NetworkMessage>, PeerError> {
+        Self::_recv(&self.responses, wait_for, timeout)
    }
}

pub trait CompactFiltersPeer {
-    fn get_cf_checkpt(
-        &self,
-        filter_type: u8,
-        stop_hash: BlockHash,
-    ) -> Result<CFCheckpt, CompactFiltersError>;
+    fn get_cf_checkpt(&self, filter_type: u8, stop_hash: BlockHash)
+        -> Result<CFCheckpt, PeerError>;
    fn get_cf_headers(
        &self,
        filter_type: u8,
        start_height: u32,
        stop_hash: BlockHash,
-    ) -> Result<CFHeaders, CompactFiltersError>;
+    ) -> Result<CFHeaders, PeerError>;
    fn get_cf_filters(
        &self,
        filter_type: u8,
        start_height: u32,
        stop_hash: BlockHash,
-    ) -> Result<(), CompactFiltersError>;
-    fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError>;
+    ) -> Result<(), PeerError>;
+    fn pop_cf_filter_resp(&self) -> Result<CFilter, PeerError>;
}

impl CompactFiltersPeer for Peer {
@@ -432,22 +471,20 @@ impl CompactFiltersPeer for Peer {
        &self,
        filter_type: u8,
        stop_hash: BlockHash,
-    ) -> Result<CFCheckpt, CompactFiltersError> {
+    ) -> Result<CFCheckpt, PeerError> {
        self.send(NetworkMessage::GetCFCheckpt(GetCFCheckpt {
            filter_type,
            stop_hash,
        }))?;

-        let response = self
-            .recv("cfcheckpt", Some(Duration::from_secs(TIMEOUT_SECS)))?
-            .ok_or(CompactFiltersError::Timeout)?;
+        let response = self.recv("cfcheckpt", Some(Duration::from_secs(TIMEOUT_SECS)))?;
        let response = match response {
-            NetworkMessage::CFCheckpt(response) => response,
-            _ => return Err(CompactFiltersError::InvalidResponse),
+            Some(NetworkMessage::CFCheckpt(response)) => response,
+            _ => return Err(PeerError::InvalidResponse(self.get_address()?)),
        };

        if response.filter_type != filter_type {
-            return Err(CompactFiltersError::InvalidResponse);
+            return Err(PeerError::InvalidResponse(self.get_address()?));
        }

        Ok(response)
@@ -458,35 +495,31 @@ impl CompactFiltersPeer for Peer {
        filter_type: u8,
        start_height: u32,
        stop_hash: BlockHash,
-    ) -> Result<CFHeaders, CompactFiltersError> {
+    ) -> Result<CFHeaders, PeerError> {
        self.send(NetworkMessage::GetCFHeaders(GetCFHeaders {
            filter_type,
            start_height,
            stop_hash,
        }))?;

-        let response = self
-            .recv("cfheaders", Some(Duration::from_secs(TIMEOUT_SECS)))?
-            .ok_or(CompactFiltersError::Timeout)?;
+        let response = self.recv("cfheaders", Some(Duration::from_secs(TIMEOUT_SECS)))?;
        let response = match response {
-            NetworkMessage::CFHeaders(response) => response,
-            _ => return Err(CompactFiltersError::InvalidResponse),
+            Some(NetworkMessage::CFHeaders(response)) => response,
+            _ => return Err(PeerError::InvalidResponse(self.get_address()?)),
        };

        if response.filter_type != filter_type {
-            return Err(CompactFiltersError::InvalidResponse);
+            return Err(PeerError::InvalidResponse(self.get_address()?));
        }

        Ok(response)
    }

-    fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError> {
-        let response = self
-            .recv("cfilter", Some(Duration::from_secs(TIMEOUT_SECS)))?
-            .ok_or(CompactFiltersError::Timeout)?;
+    fn pop_cf_filter_resp(&self) -> Result<CFilter, PeerError> {
+        let response = self.recv("cfilter", Some(Duration::from_secs(TIMEOUT_SECS)))?;
        let response = match response {
-            NetworkMessage::CFilter(response) => response,
-            _ => return Err(CompactFiltersError::InvalidResponse),
+            Some(NetworkMessage::CFilter(response)) => response,
+            _ => return Err(PeerError::InvalidResponse(self.get_address()?)),
        };

        Ok(response)
@@ -497,7 +530,7 @@ impl CompactFiltersPeer for Peer {
        filter_type: u8,
        start_height: u32,
        stop_hash: BlockHash,
-    ) -> Result<(), CompactFiltersError> {
+    ) -> Result<(), PeerError> {
        self.send(NetworkMessage::GetCFilters(GetCFilters {
            filter_type,
            start_height,
@@ -509,13 +542,13 @@ impl CompactFiltersPeer for Peer {
    }

pub trait InvPeer {
-    fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError>;
-    fn ask_for_mempool(&self) -> Result<(), CompactFiltersError>;
-    fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError>;
+    fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, PeerError>;
+    fn ask_for_mempool(&self) -> Result<(), PeerError>;
+    fn broadcast_tx(&self, tx: Transaction) -> Result<(), PeerError>;
}

impl InvPeer for Peer {
-    fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError> {
+    fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, PeerError> {
        self.send(NetworkMessage::GetData(vec![Inventory::WitnessBlock(
            block_hash,
        )]))?;
@@ -523,51 +556,126 @@ impl InvPeer for Peer {
        match self.recv("block", Some(Duration::from_secs(TIMEOUT_SECS)))? {
            None => Ok(None),
            Some(NetworkMessage::Block(response)) => Ok(Some(response)),
-            _ => Err(CompactFiltersError::InvalidResponse),
+            _ => Err(PeerError::InvalidResponse(self.get_address()?)),
        }
    }

-    fn ask_for_mempool(&self) -> Result<(), CompactFiltersError> {
+    fn ask_for_mempool(&self) -> Result<(), PeerError> {
        if !self.version.services.has(ServiceFlags::BLOOM) {
-            return Err(CompactFiltersError::PeerBloomDisabled);
+            return Err(PeerError::PeerBloomDisabled(self.get_address()?));
        }

        self.send(NetworkMessage::MemPool)?;
        let inv = match self.recv("inv", Some(Duration::from_secs(5)))? {
            None => return Ok(()), // empty mempool
            Some(NetworkMessage::Inv(inv)) => inv,
-            _ => return Err(CompactFiltersError::InvalidResponse),
+            _ => return Err(PeerError::InvalidResponse(self.get_address()?)),
        };

        let getdata = inv
            .iter()
            .cloned()
            .filter(
-                |item| matches!(item, Inventory::Transaction(txid) if !self.mempool.has_tx(txid)),
+                |item| matches!(item, Inventory::Transaction(txid) if !self.mempool.has_tx(txid).unwrap()),
            )
            .collect::<Vec<_>>();
        let num_txs = getdata.len();
        self.send(NetworkMessage::GetData(getdata))?;

        for _ in 0..num_txs {
-            let tx = self
-                .recv("tx", Some(Duration::from_secs(TIMEOUT_SECS)))?
-                .ok_or(CompactFiltersError::Timeout)?;
+            let tx = self.recv("tx", Some(Duration::from_secs(TIMEOUT_SECS)))?;
            let tx = match tx {
-                NetworkMessage::Tx(tx) => tx,
-                _ => return Err(CompactFiltersError::InvalidResponse),
+                Some(NetworkMessage::Tx(tx)) => tx,
+                _ => return Err(PeerError::InvalidResponse(self.get_address()?)),
            };

-            self.mempool.add_tx(tx);
+            self.mempool.add_tx(tx)?;
        }

        Ok(())
    }

-    fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError> {
-        self.mempool.add_tx(tx.clone());
+    fn broadcast_tx(&self, tx: Transaction) -> Result<(), PeerError> {
+        self.mempool.add_tx(tx.clone())?;
        self.send(NetworkMessage::Tx(tx))?;

        Ok(())
    }
}

+/// Peer Errors
+#[derive(Debug)]
+pub enum PeerError {
+    /// Internal I/O error
+    Io(std::io::Error),
+
+    /// Internal system time error
+    Time(std::time::SystemTimeError),
+
+    /// A peer sent an invalid or unexpected response
+    InvalidResponse(SocketAddr),
+
+    /// Peer had bloom filter disabled
+    PeerBloomDisabled(SocketAddr),
+
+    /// Internal Mutex poisoning error
+    MutexPoisoned,
+
+    /// Internal Mutex wait timed out
+    MutexTimedout,
+
+    /// Internal RW read lock poisoned
+    RwReadLockPoisined,
+
+    /// Internal RW write lock poisoned
+    RwWriteLockPoisoned,
+
+    /// Mempool Mutex poisoned
+    MempoolPoisoned,
+
+    /// Network address resolution error
+    AddresseResolution,
+
+    /// Generic errors
+    Generic(String),
+}
+
+impl std::fmt::Display for PeerError {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(f, "{:?}", self)
+    }
+}
+
+impl std::error::Error for PeerError {}
+
+impl_error!(std::io::Error, Io, PeerError);
+impl_error!(std::time::SystemTimeError, Time, PeerError);
+
+impl<T> From<PoisonError<MutexGuard<'_, T>>> for PeerError {
+    fn from(_: PoisonError<MutexGuard<'_, T>>) -> Self {
+        PeerError::MutexPoisoned
+    }
+}
+
+impl<T> From<PoisonError<RwLockWriteGuard<'_, T>>> for PeerError {
+    fn from(_: PoisonError<RwLockWriteGuard<'_, T>>) -> Self {
+        PeerError::RwWriteLockPoisoned
+    }
+}
+
+impl<T> From<PoisonError<RwLockReadGuard<'_, T>>> for PeerError {
+    fn from(_: PoisonError<RwLockReadGuard<'_, T>>) -> Self {
+        PeerError::RwReadLockPoisined
+    }
+}
+
+impl<T> From<PoisonError<(MutexGuard<'_, T>, WaitTimeoutResult)>> for PeerError {
+    fn from(err: PoisonError<(MutexGuard<'_, T>, WaitTimeoutResult)>) -> Self {
+        let (_, wait_result) = err.into_inner();
+        if wait_result.timed_out() {
+            PeerError::MutexTimedout
+        } else {
+            PeerError::MutexPoisoned
+        }
+    }
+}
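The `From<PoisonError<...>>` impls above are what let the rest of the diff replace `.unwrap()` with `?` on lock results. A runnable stdlib sketch of the mechanism, using a hypothetical `LockError` in place of `PeerError`:

```rust
use std::sync::{Arc, Mutex, MutexGuard, PoisonError, RwLockReadGuard};
use std::thread;

#[derive(Debug, PartialEq)]
enum LockError {
    MutexPoisoned,
    ReadLockPoisoned,
}

// One `From` impl per guard type, matching the diff's structure.
impl<T> From<PoisonError<MutexGuard<'_, T>>> for LockError {
    fn from(_: PoisonError<MutexGuard<'_, T>>) -> Self {
        LockError::MutexPoisoned
    }
}

impl<T> From<PoisonError<RwLockReadGuard<'_, T>>> for LockError {
    fn from(_: PoisonError<RwLockReadGuard<'_, T>>) -> Self {
        LockError::ReadLockPoisoned
    }
}

// With the impls in place, `?` on a lock result converts a poisoned
// guard into the matching error variant instead of panicking.
fn read_value(m: &Mutex<i32>) -> Result<i32, LockError> {
    Ok(*m.lock()?)
}

fn main() {
    let shared = Arc::new(Mutex::new(7));
    assert_eq!(read_value(&shared), Ok(7));

    // Poison the mutex by panicking while holding its guard.
    let poisoner = Arc::clone(&shared);
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("poison the lock");
    })
    .join();

    assert_eq!(read_value(&shared), Err(LockError::MutexPoisoned));
}
```

Because `PoisonError` carries the guard type as a parameter, each guard (`MutexGuard`, `RwLockReadGuard`, `RwLockWriteGuard`, the `wait_timeout` tuple) needs its own impl, which is why the diff defines four of them.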
578  src/blockchain/compact_filters/peermngr.rs  Normal file
@@ -0,0 +1,578 @@
use super::address_manager::{AddressManager, AddressManagerError, DiscoveryProgress};
use super::peer::{Mempool, Peer, PeerError, TIMEOUT_SECS};

use std::collections::BTreeMap;
use std::net::SocketAddr;
use std::path::PathBuf;
use std::sync::Arc;
use std::time;

use bitcoin::network::constants::{Network, ServiceFlags};
use bitcoin::network::message::NetworkMessage;

// Peer Manager configuration constants
const MIN_CBF_PEERS: usize = 2;
const MIN_TOTAL_PEERS: usize = 5;
const MIN_CRAWLER_THREADS: usize = 20;
const BAN_SCORE_THRESHOLD: usize = 100;
const RECEIVE_TIMEOUT: time::Duration = time::Duration::from_secs(TIMEOUT_SECS);

/// An error structure describing Peer Management errors
#[allow(dead_code)]
#[derive(Debug)]
pub enum PeerManagerError {
    // Internal Peer error
    Peer(PeerError),

    // Internal AddressManager error
    AddrsManager(AddressManagerError),

    // OS string error
    OsString(std::ffi::OsString),

    // Peer not found in directory
    PeerNotFound,

    // Generic internal error
    Generic(String),
}

impl_error!(PeerError, Peer, PeerManagerError);
impl_error!(AddressManagerError, AddrsManager, PeerManagerError);
impl_error!(std::ffi::OsString, OsString, PeerManagerError);

/// Peer data stored in the manager's directory
#[derive(Debug)]
struct PeerData {
    peer: Peer,
    is_cbf: bool,
    ban_score: usize,
}

/// A directory structure to hold live Peers.
/// All peers in the directory have a live ongoing connection.
/// Banning a peer removes it from the directory.
#[allow(dead_code)]
#[derive(Default, Debug)]
struct PeerDirectory {
    peers: BTreeMap<SocketAddr, PeerData>,
}

#[allow(dead_code)]
impl PeerDirectory {
    fn new() -> Self {
        Self::default()
    }

    fn get_cbf_peers(&self) -> Option<Vec<&PeerData>> {
        let cbf_peers = self
            .peers
            .iter()
            .filter(|(_, peer)| peer.is_cbf)
            .map(|(_, peer)| peer)
            .collect::<Vec<&PeerData>>();

        match cbf_peers.len() {
            0 => None,
            _ => Some(cbf_peers),
        }
    }

    fn get_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        let cbf_addresses = self
            .peers
            .iter()
            .filter_map(|(addrs, peerdata)| if peerdata.is_cbf { Some(addrs) } else { None })
            .copied()
            .collect::<Vec<SocketAddr>>();

        match cbf_addresses.len() {
            0 => None,
            _ => Some(cbf_addresses),
        }
    }

    fn get_non_cbf_peers(&self) -> Option<Vec<&PeerData>> {
        let non_cbf_peers = self
            .peers
            .iter()
            .filter(|(_, peerdata)| !peerdata.is_cbf)
            .map(|(_, peerdata)| peerdata)
            .collect::<Vec<&PeerData>>();

        match non_cbf_peers.len() {
            0 => None,
            _ => Some(non_cbf_peers),
        }
    }

    fn get_non_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        let addresses = self
            .peers
            .iter()
            .filter_map(|(addrs, peerdata)| if !peerdata.is_cbf { Some(addrs) } else { None })
            .copied()
            .collect::<Vec<SocketAddr>>();

        match addresses.len() {
            0 => None,
            _ => Some(addresses),
        }
    }

    fn get_cbf_peers_mut(&mut self) -> Option<Vec<&mut PeerData>> {
        let peers = self
            .peers
            .iter_mut()
            .filter(|(_, peerdata)| peerdata.is_cbf)
            .map(|(_, peerdata)| peerdata)
            .collect::<Vec<&mut PeerData>>();

        match peers.len() {
            0 => None,
            _ => Some(peers),
        }
    }

    fn get_non_cbf_peers_mut(&mut self) -> Option<Vec<&mut PeerData>> {
        let peers = self
            .peers
            .iter_mut()
            .filter(|(_, peerdata)| !peerdata.is_cbf)
            .map(|(_, peerdata)| peerdata)
            .collect::<Vec<&mut PeerData>>();

        match peers.len() {
            0 => None,
            _ => Some(peers),
        }
    }

    fn get_cbf_count(&self) -> usize {
        self.peers
            .iter()
            .filter(|(_, peerdata)| peerdata.is_cbf)
            .count()
    }

    fn get_non_cbf_count(&self) -> usize {
        self.peers
            .iter()
            .filter(|(_, peerdata)| !peerdata.is_cbf)
            .count()
    }

    fn insert_peer(&mut self, peerdata: PeerData) -> Result<(), PeerManagerError> {
        let addrs = peerdata.peer.get_address()?;
        self.peers.entry(addrs).or_insert(peerdata);
        Ok(())
    }

    fn remove_peer(&mut self, addrs: &SocketAddr) -> Option<PeerData> {
        self.peers.remove(addrs)
    }

    fn get_peer_banscore(&self, addrs: &SocketAddr) -> Option<usize> {
        self.peers.get(addrs).map(|peerdata| peerdata.ban_score)
    }

    fn get_peerdata_mut(&mut self, address: &SocketAddr) -> Option<&mut PeerData> {
        self.peers.get_mut(address)
    }

    fn get_peerdata(&self, address: &SocketAddr) -> Option<&PeerData> {
        self.peers.get(address)
    }

    fn is_cbf(&self, addrs: &SocketAddr) -> Option<bool> {
        self.peers.get(addrs).map(|peer| peer.is_cbf)
    }
}
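`PeerDirectory` repeatedly filters the `BTreeMap` by the `is_cbf` flag and returns `None` for an empty result. A stdlib sketch of that convention, with `String` addresses and a trimmed-down hypothetical `PeerInfo` instead of `PeerData`:

```rust
use std::collections::BTreeMap;

#[derive(Debug)]
struct PeerInfo {
    is_cbf: bool,
}

// Select directory entries carrying the compact-filters flag, returning
// None when the filtered set is empty -- the convention PeerDirectory uses.
fn cbf_addresses(peers: &BTreeMap<String, PeerInfo>) -> Option<Vec<&String>> {
    let addrs: Vec<&String> = peers
        .iter()
        .filter_map(|(addr, info)| if info.is_cbf { Some(addr) } else { None })
        .collect();
    if addrs.is_empty() {
        None
    } else {
        Some(addrs)
    }
}

fn main() {
    let mut peers = BTreeMap::new();
    peers.insert("1.2.3.4:8333".to_string(), PeerInfo { is_cbf: true });
    peers.insert("5.6.7.8:8333".to_string(), PeerInfo { is_cbf: false });

    let cbf = cbf_addresses(&peers).unwrap();
    assert_eq!(cbf, vec!["1.2.3.4:8333"]);

    peers.clear();
    assert!(cbf_addresses(&peers).is_none());
}
```

Using `Option<Vec<_>>` rather than an empty `Vec` forces callers such as `update_directory` to handle the "no peers of this kind yet" case explicitly.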

#[allow(dead_code)]
pub struct PeerManager<P: DiscoveryProgress> {
    addrs_mngr: AddressManager<P>,
    directory: PeerDirectory,
    mempool: Arc<Mempool>,
    min_cbf: usize,
    min_total: usize,
    network: Network,
}

#[allow(dead_code)]
impl<P: DiscoveryProgress> PeerManager<P> {
    pub fn init(
        network: Network,
        cache_dir: &str,
        crawler_threads: Option<usize>,
        progress: P,
        cbf_peers: Option<usize>,
        total_peers: Option<usize>,
    ) -> Result<Self, PeerManagerError> {
        let mut cache_filename = PathBuf::from(cache_dir);
        cache_filename.push("addr_cache");

        // Fetch minimum peer requirements, either from user input or from the defaults
        let min_cbf = cbf_peers.unwrap_or(MIN_CBF_PEERS);
        let min_total = total_peers.unwrap_or(MIN_TOTAL_PEERS);

        let cbf_buff = min_cbf * 2;
        let non_cbf_buff = (min_total - min_cbf) * 2;

        // Create internal items
        let addrs_mngr = AddressManager::new(
            network,
            cache_filename.into_os_string().into_string()?,
            crawler_threads.unwrap_or(MIN_CRAWLER_THREADS),
            Some(cbf_buff),
            Some(non_cbf_buff),
            progress,
        )?;

        let mempool = Arc::new(Mempool::new());
        let peer_dir = PeerDirectory::new();

        // Create self and fill the directory with connected peers
        let mut manager = Self {
            addrs_mngr,
            directory: peer_dir,
            mempool,
            min_cbf,
            min_total,
            network,
        };

        manager.update_directory()?;

        Ok(manager)
    }

    fn update_directory(&mut self) -> Result<(), PeerManagerError> {
        while self.directory.get_cbf_count() < self.min_cbf
            || self.directory.get_non_cbf_count() < (self.min_total - self.min_cbf)
        {
            // First connect to cbf peers, then to non-cbf ones
            let cbf_fetch = self.directory.get_cbf_count() < self.min_cbf;

            // Try to get an address; if none is available, start the crawlers
            let target_addrs = match cbf_fetch {
                true => {
                    if let Some(addrs) = self.addrs_mngr.get_new_cbf_address() {
                        addrs
                    } else {
                        self.addrs_mngr.fetch()?;
                        continue;
                    }
                }
                false => {
                    if let Some(addrs) = self.addrs_mngr.get_new_non_cbf_address() {
                        addrs
                    } else {
                        self.addrs_mngr.fetch()?;
                        continue;
                    }
                }
            };

            if let Ok(peer) = Peer::connect(target_addrs, Arc::clone(&self.mempool), self.network) {
                let address = peer.get_address()?;

                assert_eq!(address, target_addrs);

                let is_cbf = peer
                    .get_version()
                    .services
                    .has(ServiceFlags::COMPACT_FILTERS);

                let peerdata = PeerData {
                    peer,
                    is_cbf,
                    ban_score: 0,
                };

                self.directory.insert_peer(peerdata)?;
            } else {
                continue;
            }
        }

        Ok(())
    }

    pub fn set_banscore(
        &mut self,
        increase_by: usize,
        address: &SocketAddr,
    ) -> Result<(), PeerManagerError> {
        let current_score = if let Some(peer) = self.directory.get_peerdata_mut(address) {
            // Persist the increased score on the directory entry, not just a local copy
            peer.ban_score += increase_by;
            peer.ban_score
        } else {
            return Err(PeerManagerError::PeerNotFound);
        };

        let mut banned = false;

        if current_score >= BAN_SCORE_THRESHOLD {
            match (
                self.directory.is_cbf(address),
                self.directory.remove_peer(address),
            ) {
                (Some(true), Some(_)) => {
                    self.addrs_mngr.ban_peer(address, true)?;
                    banned = true;
                }
                (Some(false), Some(_)) => {
                    self.addrs_mngr.ban_peer(address, false)?;
                    banned = true;
                }
                _ => {
                    return Err(PeerManagerError::Generic(
                        "data inconsistency in directory, should not happen".to_string(),
                    ))
                }
            }
        }

        if banned {
            self.update_directory()?;
        }

        Ok(())
    }
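`set_banscore` bans a peer once its accumulated score reaches `BAN_SCORE_THRESHOLD`. The threshold arithmetic in isolation (the `bump_score` helper is illustrative, not part of the diff):

```rust
const BAN_SCORE_THRESHOLD: usize = 100;

// Accumulate a peer's ban score and report whether the threshold was
// crossed, mirroring the check `set_banscore` performs before removing
// and banning a peer.
fn bump_score(current: &mut usize, increase_by: usize) -> bool {
    *current += increase_by;
    *current >= BAN_SCORE_THRESHOLD
}

fn main() {
    let mut score = 0;
    assert!(!bump_score(&mut score, 40)); // 40: still below the threshold
    assert!(!bump_score(&mut score, 59)); // 99: one point short
    assert!(bump_score(&mut score, 1)); // 100: the peer gets banned
}
```

In the real method, crossing the threshold additionally removes the peer from the directory, tells the `AddressManager` to ban it (with the cbf flag), and calls `update_directory` to refill the freed slot.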

    pub fn send_to(
        &self,
        address: &SocketAddr,
        message: NetworkMessage,
    ) -> Result<(), PeerManagerError> {
        if let Some(peerdata) = self.directory.get_peerdata(address) {
            peerdata.peer.send(message)?;
            Ok(())
        } else {
            Err(PeerManagerError::PeerNotFound)
        }
    }

    pub fn receive_from(
        &self,
        address: &SocketAddr,
        wait_for: &'static str,
    ) -> Result<Option<NetworkMessage>, PeerManagerError> {
        if let Some(peerdata) = self.directory.get_peerdata(address) {
            Ok(peerdata.peer.recv(wait_for, Some(RECEIVE_TIMEOUT))?)
        } else {
            Err(PeerManagerError::PeerNotFound)
        }
    }

    pub fn connected_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        self.directory.get_cbf_addresses()
    }

    pub fn connected_non_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        self.directory.get_non_cbf_addresses()
    }

    pub fn known_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        self.addrs_mngr.get_known_cbfs()
    }

    pub fn known_non_cbf_addresses(&self) -> Option<Vec<SocketAddr>> {
        self.addrs_mngr.get_known_non_cbfs()
    }

    pub fn previously_tried_addresses(&self) -> Option<Vec<SocketAddr>> {
        self.addrs_mngr.get_previously_tried()
    }
}
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use super::super::LogDiscoveryProgress;
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
#[ignore]
|
||||
fn test_ban() {
|
||||
let mut manager = PeerManager::init(
|
||||
Network::Bitcoin,
|
||||
".",
|
||||
None,
|
||||
LogDiscoveryProgress,
|
||||
None,
|
||||
None,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let connected_cbfs = manager.connected_cbf_addresses().unwrap();
|
||||
let connected_non_cbfs = manager.connected_non_cbf_addresses().unwrap();
|
||||
|
||||
println!("Currently Connected CBFs: {:#?}", connected_cbfs);
|
||||
assert_eq!(connected_cbfs.len(), 2);
|
||||
assert_eq!(connected_non_cbfs.len(), 3);
|
||||
|
||||
let to_banned = &connected_cbfs[0];
|
||||
|
||||
println!("Banning address : {}", to_banned);
|
||||
|
||||
manager.set_banscore(100, to_banned).unwrap();
|
||||
|
||||
let newly_connected = manager.connected_cbf_addresses().unwrap();
|
||||
|
||||
println!("Newly Connected CBFs: {:#?}", newly_connected);
|
||||
|
||||
assert_eq!(newly_connected.len(), 2);
|
||||
|
||||
assert_ne!(newly_connected, connected_cbfs);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[ignore]
|
||||
fn test_send_recv() {
|
||||
let manager = PeerManager::init(
|
||||
Network::Bitcoin,
|
||||
".",
|
||||
None,
|
||||
LogDiscoveryProgress,
|
||||
None,
|
||||
None,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let target_address = manager.connected_cbf_addresses().unwrap()[0];
|
||||
|
||||
let ping = NetworkMessage::Ping(30);
|
||||
|
||||
println!("Asking peer {}", target_address);
|
||||
|
||||
manager.send_to(&target_address, ping).unwrap();
|
||||
|
||||
let response = manager
|
||||
.receive_from(&target_address, "pong")
|
||||
.unwrap()
|
||||
.unwrap();
|
||||
|
||||
let value = match response {
|
||||
NetworkMessage::Pong(v) => Some(v),
|
||||
_ => None,
|
||||
};
|
||||
|
||||
let value = value.unwrap();
|
||||
|
||||
println!("Got value {:#?}", value);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[ignore]
|
||||
fn test_connect_all() {
|
||||
let manager = PeerManager::init(
|
||||
Network::Bitcoin,
|
||||
".",
|
||||
None,
|
||||
LogDiscoveryProgress,
|
||||
None,
|
||||
None,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let cbf_pings = vec![100u64; manager.min_cbf];
|
||||
let non_cbf_pings = vec![200u64; manager.min_total - manager.min_cbf];
|
||||
|
||||
let cbf_peers = manager.connected_cbf_addresses().unwrap();
|
||||
let non_cbf_peers = manager.connected_non_cbf_addresses().unwrap();
|
||||
|
||||
let sent_cbf: Vec<bool> = cbf_pings
|
||||
.iter()
|
||||
.zip(cbf_peers.iter())
|
||||
.map(|(ping, address)| {
|
||||
let message = NetworkMessage::Ping(*ping);
|
||||
manager.send_to(address, message).unwrap();
|
||||
true
|
||||
})
|
||||
.collect();
|
||||
|
||||
assert_eq!(sent_cbf, vec![true; manager.min_cbf]);
|
||||
|
||||
println!("Sent pings to cbf peers");
|
||||
|
||||
let sent_noncbf: Vec<bool> = non_cbf_pings
|
||||
.iter()
|
||||
.zip(non_cbf_peers.iter())
|
||||
.map(|(ping, address)| {
|
||||
let message = NetworkMessage::Ping(*ping);
|
||||
manager.send_to(address, message).unwrap();
|
||||
true
|
||||
})
|
||||
.collect();
|
||||
|
||||
assert_eq!(sent_noncbf, vec![true; manager.min_total - manager.min_cbf]);
|
||||
|
||||
println!("Sent pings to non cbf peers");
|
||||
|
||||
let cbf_received: Vec<u64> = cbf_peers
|
||||
.iter()
|
||||
.map(|address| {
|
||||
let response = manager.receive_from(address, "pong").unwrap().unwrap();
|
||||
|
||||
let value = match response {
|
||||
NetworkMessage::Pong(v) => Some(v),
|
||||
_ => None,
|
||||
};
|
||||
|
||||
value.unwrap()
|
||||
})
|
||||
.collect();
|
||||
|
||||
let non_cbf_received: Vec<u64> = non_cbf_peers
|
||||
.iter()
|
||||
.map(|address| {
|
||||
let response = manager.receive_from(address, "pong").unwrap().unwrap();
|
||||
|
||||
let value = match response {
|
||||
NetworkMessage::Pong(v) => Some(v),
|
||||
_ => None,
|
||||
};
|
||||
|
||||
value.unwrap()
|
||||
})
|
||||
.collect();
|
||||
|
||||
assert_eq!(cbf_pings, cbf_received);
|
||||
|
||||
assert_eq!(non_cbf_pings, non_cbf_received);
|
||||
}
|
||||
}
|
||||
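The ban logic at the top of this section follows a common accumulate-then-threshold pattern: misbehavior points add up per peer, and crossing `BAN_SCORE_THRESHOLD` triggers removal. A minimal standalone sketch of that pattern (all names here are hypothetical stand-ins, not the real `PeerManager` API):

```rust
// Hypothetical sketch of the accumulate-then-threshold ban pattern.
const BAN_SCORE_THRESHOLD: u32 = 100;

struct PeerScore {
    ban_score: u32,
    banned: bool,
}

impl PeerScore {
    fn new() -> Self {
        PeerScore { ban_score: 0, banned: false }
    }

    /// Add misbehavior points; once the total reaches the threshold the
    /// peer is marked banned (and would be disconnected and replaced).
    fn add_ban_score(&mut self, increase_by: u32) -> bool {
        self.ban_score += increase_by;
        if self.ban_score >= BAN_SCORE_THRESHOLD {
            self.banned = true;
        }
        self.banned
    }
}

fn main() {
    let mut peer = PeerScore::new();
    assert!(!peer.add_ban_score(40)); // 40 < 100: still connected
    assert!(!peer.add_ban_score(40)); // 80 < 100: still connected
    assert!(peer.add_ban_score(40)); // 120 >= 100: banned
}
```

Note that in the real code the ban also updates the address manager and the connection directory, which a sketch like this leaves out.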
@@ -398,7 +398,7 @@ impl ChainStore<Full> {
            );
        }

        // Delete full blocks overridden by snapshot
        // Delete full blocks overriden by snapshot
        let from_key = StoreEntry::Block(Some(snaphost.min_height)).get_key();
        let to_key = StoreEntry::Block(Some(usize::MAX)).get_key();
        batch.delete_range(&from_key, &to_key);
@@ -760,7 +760,7 @@ impl CfStore {
        let cf_headers: Vec<FilterHeader> = filter_hashes
            .into_iter()
            .scan(checkpoint, |prev_header, filter_hash| {
                let filter_header = filter_hash.filter_header(prev_header);
                let filter_header = filter_hash.filter_header(&prev_header);
                *prev_header = filter_header;

                Some(filter_header)
@@ -801,7 +801,7 @@ impl CfStore {
            .zip(headers.into_iter())
            .scan(checkpoint, |prev_header, ((_, filter_content), header)| {
                let filter = BlockFilter::new(&filter_content);
                if header != filter.filter_header(prev_header) {
                if header != filter.filter_header(&prev_header) {
                    return Some(Err(CompactFiltersError::InvalidFilter));
                }
                *prev_header = header;

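The two `CfStore` hunks above both use iterator `scan` to fold each filter hash into a chain of filter headers, starting from a checkpoint. Per BIP 157, each header commits to the filter hash and the previous header, so a mismatch anywhere invalidates the rest of the chain. A standalone sketch of the `scan` state-threading (the `combine` function below is a toy mixing function standing in for the real double-SHA256 done by `filter_header` in rust-bitcoin):

```rust
// Sketch of chaining filter headers with `scan`, as in CfStore above.
// `combine` is NOT the real hash; BIP 157 uses double-SHA256 of
// (filter_hash || prev_header).
type Header = u64;

fn combine(filter_hash: u64, prev_header: Header) -> Header {
    // Toy stand-in mixing function.
    filter_hash
        .wrapping_mul(31)
        .wrapping_add(prev_header)
        .rotate_left(7)
}

fn chain_headers(checkpoint: Header, filter_hashes: &[u64]) -> Vec<Header> {
    filter_hashes
        .iter()
        .scan(checkpoint, |prev_header, &filter_hash| {
            let header = combine(filter_hash, *prev_header);
            *prev_header = header; // thread the state forward
            Some(header)
        })
        .collect()
}

fn main() {
    let headers = chain_headers(0, &[1, 2, 3]);
    // Each header depends on the previous one, hence on all earlier hashes.
    assert_eq!(headers.len(), 3);
    assert_eq!(headers[1], combine(2, headers[0]));
    assert_eq!(headers[2], combine(3, headers[1]));
}
```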
@@ -205,7 +205,7 @@ impl CfSync {
            let block_hash = self.headers_store.get_block_hash(height)?.unwrap();

            // TODO: also download random blocks?
            if process(&block_hash, &BlockFilter::new(filter))? {
            if process(&block_hash, &BlockFilter::new(&filter))? {
                log::debug!("Downloading block {}", block_hash);

                let block = peer
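The `CfSync` hunk above shows the core compact-filter sync decision: test the block's filter against the wallet's scripts, and only download the full block on a match. A schematic sketch of that flow, using a toy set-membership `Filter` in place of a real BIP 158 Golomb-coded filter (which is probabilistic and can yield false positives):

```rust
// Toy stand-in for a BIP 158 block filter: exact set membership instead of
// a Golomb-coded probabilistic filter.
use std::collections::HashSet;

struct Filter {
    elements: HashSet<Vec<u8>>,
}

impl Filter {
    /// True if any of the watched scripts may be in the block.
    fn matches_any(&self, scripts: &[Vec<u8>]) -> bool {
        scripts.iter().any(|s| self.elements.contains(s))
    }
}

/// Decide whether the full block needs to be downloaded and scanned.
fn should_download(filter: &Filter, watched: &[Vec<u8>]) -> bool {
    filter.matches_any(watched)
}

fn main() {
    let filter = Filter {
        elements: [vec![0xAA], vec![0xBB]].into_iter().collect(),
    };
    assert!(should_download(&filter, &[vec![0xBB]])); // relevant: download
    assert!(!should_download(&filter, &[vec![0xCC]])); // skip download
}
```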
@@ -24,20 +24,20 @@
//! # Ok::<(), bdk::Error>(())
//! ```

use std::collections::{HashMap, HashSet};
use std::collections::HashSet;

#[allow(unused_imports)]
use log::{debug, error, info, trace};

use bitcoin::{Transaction, Txid};
use bitcoin::{BlockHeader, Script, Transaction, Txid};

use electrum_client::{Client, ConfigBuilder, ElectrumApi, Socks5Config};

use super::script_sync::Request;
use self::utils::{ElectrumLikeSync, ElsGetHistoryRes};
use super::*;
use crate::database::{BatchDatabase, Database};
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::{BlockTime, FeeRate};
use crate::FeeRate;

/// Wrapper over an Electrum Client that implements the required blockchain traits
///
@@ -68,10 +68,32 @@ impl Blockchain for ElectrumBlockchain {
        .collect()
    }

    fn setup<D: BatchDatabase, P: Progress>(
        &self,
        database: &mut D,
        progress_update: P,
    ) -> Result<(), Error> {
        self.client
            .electrum_like_setup(self.stop_gap, database, progress_update)
    }

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(self.client.transaction_get(txid).map(Option::Some)?)
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        Ok(self.client.transaction_broadcast(tx).map(|_| ())?)
    }

    fn get_height(&self) -> Result<u32, Error> {
        // TODO: unsubscribe when added to the client, or is there a better call to use here?

        Ok(self
            .client
            .block_headers_subscribe()
            .map(|data| data.height as u32)?)
    }

    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
        Ok(FeeRate::from_btc_per_kvb(
            self.client.estimate_fee(target)? as f32
@@ -79,210 +101,43 @@ impl Blockchain for ElectrumBlockchain {
    }
}

impl StatelessBlockchain for ElectrumBlockchain {}

impl GetHeight for ElectrumBlockchain {
    fn get_height(&self) -> Result<u32, Error> {
        // TODO: unsubscribe when added to the client, or is there a better call to use here?

        Ok(self
            .client
            .block_headers_subscribe()
            .map(|data| data.height as u32)?)
    }
}

impl GetTx for ElectrumBlockchain {
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(self.client.transaction_get(txid).map(Option::Some)?)
    }
}

impl GetBlockHash for ElectrumBlockchain {
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
        let block_header = self.client.block_header(height as usize)?;
        Ok(block_header.block_hash())
    }
}

impl WalletSync for ElectrumBlockchain {
    fn wallet_setup<D: BatchDatabase>(
impl ElectrumLikeSync for Client {
    fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script> + Clone>(
        &self,
        database: &mut D,
        _progress_update: Box<dyn Progress>,
    ) -> Result<(), Error> {
        let mut request = script_sync::start(database, self.stop_gap)?;
        let mut block_times = HashMap::<u32, u32>::new();
        let mut txid_to_height = HashMap::<Txid, u32>::new();
        let mut tx_cache = TxCache::new(database, &self.client);

        // Set chunk_size to the smallest value capable of finding a gap greater than stop_gap.
        let chunk_size = self.stop_gap + 1;

        // The electrum server has been inconsistent somehow in its responses during sync. For
        // example, we do a batch request of transactions and the response contains less
        // transactions than in the request. This should never happen but we don't want to panic.
        let electrum_goof = || Error::Generic("electrum server misbehaving".to_string());

        let batch_update = loop {
            request = match request {
                Request::Script(script_req) => {
                    let scripts = script_req.request().take(chunk_size);
                    let txids_per_script: Vec<Vec<_>> = self
                        .client
                        .batch_script_get_history(scripts)
                        .map_err(Error::Electrum)?
                        .into_iter()
                        .map(|txs| {
                            txs.into_iter()
                                .map(|tx| {
                                    let tx_height = match tx.height {
                                        none if none <= 0 => None,
                                        height => {
                                            txid_to_height.insert(tx.tx_hash, height as u32);
                                            Some(height as u32)
                                        }
                                    };
                                    (tx.tx_hash, tx_height)
                                })
                                .collect()
                        })
                        .collect();

                    script_req.satisfy(txids_per_script)?
                }

                Request::Conftime(conftime_req) => {
                    // collect up to chunk_size heights to fetch from electrum
                    let needs_block_height = conftime_req
                        .request()
                        .filter_map(|txid| txid_to_height.get(txid).cloned())
                        .filter(|height| block_times.get(height).is_none())
                        .take(chunk_size)
                        .collect::<HashSet<u32>>();

                    let new_block_headers = self
                        .client
                        .batch_block_header(needs_block_height.iter().cloned())?;

                    for (height, header) in needs_block_height.into_iter().zip(new_block_headers) {
                        block_times.insert(height, header.time);
                    }

                    let conftimes = conftime_req
                        .request()
                        .take(chunk_size)
                        .map(|txid| {
                            let confirmation_time = txid_to_height
                                .get(txid)
                                .map(|height| {
                                    let timestamp =
                                        *block_times.get(height).ok_or_else(electrum_goof)?;
                                    Result::<_, Error>::Ok(BlockTime {
                                        height: *height,
                                        timestamp: timestamp.into(),
                                    })
                                })
                                .transpose()?;
                            Ok(confirmation_time)
                        })
                        .collect::<Result<_, Error>>()?;

                    conftime_req.satisfy(conftimes)?
                }
                Request::Tx(tx_req) => {
                    let needs_full = tx_req.request().take(chunk_size);
                    tx_cache.save_txs(needs_full.clone())?;
                    let full_transactions = needs_full
                        .map(|txid| tx_cache.get(*txid).ok_or_else(electrum_goof))
                        .collect::<Result<Vec<_>, _>>()?;
                    let input_txs = full_transactions.iter().flat_map(|tx| {
                        tx.input
                            .iter()
                            .filter(|input| !input.previous_output.is_null())
                            .map(|input| &input.previous_output.txid)
                    });
                    tx_cache.save_txs(input_txs)?;

                    let full_details = full_transactions
                        .into_iter()
                        .map(|tx| {
                            let mut input_index = 0usize;
                            let prev_outputs = tx
                                .input
                                .iter()
                                .map(|input| {
                                    if input.previous_output.is_null() {
                                        return Ok(None);
                                    }
                                    let prev_tx = tx_cache
                                        .get(input.previous_output.txid)
                                        .ok_or_else(electrum_goof)?;
                                    let txout = prev_tx
                                        .output
                                        .get(input.previous_output.vout as usize)
                                        .ok_or_else(electrum_goof)?;
                                    input_index += 1;
                                    Ok(Some(txout.clone()))
                                })
                                .collect::<Result<Vec<_>, Error>>()?;
                            Ok((prev_outputs, tx))
                        })
                        .collect::<Result<Vec<_>, Error>>()?;

                    tx_req.satisfy(full_details)?
                }
                Request::Finish(batch_update) => break batch_update,
            }
        };

        database.commit_batch(batch_update)?;
        Ok(())
    }
}

struct TxCache<'a, 'b, D> {
    db: &'a D,
    client: &'b Client,
    cache: HashMap<Txid, Transaction>,
}

impl<'a, 'b, D: Database> TxCache<'a, 'b, D> {
    fn new(db: &'a D, client: &'b Client) -> Self {
        TxCache {
            db,
            client,
            cache: HashMap::default(),
        }
    }
    fn save_txs<'c>(&mut self, txids: impl Iterator<Item = &'c Txid>) -> Result<(), Error> {
        let mut need_fetch = vec![];
        for txid in txids {
            if self.cache.get(txid).is_some() {
                continue;
            } else if let Some(transaction) = self.db.get_raw_tx(txid)? {
                self.cache.insert(*txid, transaction);
            } else {
                need_fetch.push(txid);
            }
        }

        if !need_fetch.is_empty() {
            let txs = self
                .client
                .batch_transaction_get(need_fetch.clone())
                .map_err(Error::Electrum)?;
            for (tx, _txid) in txs.into_iter().zip(need_fetch) {
                debug_assert_eq!(*_txid, tx.txid());
                self.cache.insert(tx.txid(), tx);
            }
        }

        Ok(())
        scripts: I,
    ) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error> {
        self.batch_script_get_history(scripts)
            .map(|v| {
                v.into_iter()
                    .map(|v| {
                        v.into_iter()
                            .map(
                                |electrum_client::GetHistoryRes {
                                     height, tx_hash, ..
                                 }| ElsGetHistoryRes {
                                    height,
                                    tx_hash,
                                },
                            )
                            .collect()
                    })
                    .collect()
            })
            .map_err(Error::Electrum)
    }

    fn get(&self, txid: Txid) -> Option<Transaction> {
        self.cache.get(&txid).map(Clone::clone)
    fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid> + Clone>(
        &self,
        txids: I,
    ) -> Result<Vec<Transaction>, Error> {
        self.batch_transaction_get(txids).map_err(Error::Electrum)
    }

    fn els_batch_block_header<I: IntoIterator<Item = u32> + Clone>(
        &self,
        heights: I,
    ) -> Result<Vec<BlockHeader>, Error> {
        self.batch_block_header(heights).map_err(Error::Electrum)
    }
}

@@ -323,93 +178,8 @@ impl ConfigurableBlockchain for ElectrumBlockchain {

#[cfg(test)]
#[cfg(feature = "test-electrum")]
mod test {
    use std::sync::Arc;

    use super::*;
    use crate::database::MemoryDatabase;
    use crate::testutils::blockchain_tests::TestClient;
    use crate::testutils::configurable_blockchain_tests::ConfigurableBlockchainTester;
    use crate::wallet::{AddressIndex, Wallet};

    crate::bdk_blockchain_tests! {
        fn test_instance(test_client: &TestClient) -> ElectrumBlockchain {
            ElectrumBlockchain::from(Client::new(&test_client.electrsd.electrum_url).unwrap())
        }
    }

    fn get_factory() -> (TestClient, Arc<ElectrumBlockchain>) {
        let test_client = TestClient::default();

        let factory = Arc::new(ElectrumBlockchain::from(
            Client::new(&test_client.electrsd.electrum_url).unwrap(),
        ));

        (test_client, factory)
    }

    #[test]
    fn test_electrum_blockchain_factory() {
        let (_test_client, factory) = get_factory();

        let a = factory.build("aaaaaa", None).unwrap();
        let b = factory.build("bbbbbb", None).unwrap();

        assert_eq!(
            a.client.block_headers_subscribe().unwrap().height,
            b.client.block_headers_subscribe().unwrap().height
        );
    }

    #[test]
    fn test_electrum_blockchain_factory_sync_wallet() {
        let (mut test_client, factory) = get_factory();

        let db = MemoryDatabase::new();
        let wallet = Wallet::new(
            "wpkh(L5EZftvrYaSudiozVRzTqLcHLNDoVn7H5HSfM9BAN6tMJX8oTWz6)",
            None,
            bitcoin::Network::Regtest,
            db,
        )
        .unwrap();

        let address = wallet.get_address(AddressIndex::New).unwrap();

        let tx = testutils! {
            @tx ( (@addr address.address) => 50_000 )
        };
        test_client.receive(tx);

        factory
            .sync_wallet(&wallet, None, Default::default())
            .unwrap();

        assert_eq!(wallet.get_balance().unwrap(), 50_000);
    }

    #[test]
    fn test_electrum_with_variable_configs() {
        struct ElectrumTester;

        impl ConfigurableBlockchainTester<ElectrumBlockchain> for ElectrumTester {
            const BLOCKCHAIN_NAME: &'static str = "Electrum";

            fn config_with_stop_gap(
                &self,
                test_client: &mut TestClient,
                stop_gap: usize,
            ) -> Option<ElectrumBlockchainConfig> {
                Some(ElectrumBlockchainConfig {
                    url: test_client.electrsd.electrum_url.clone(),
                    socks5: None,
                    retry: 0,
                    timeout: None,
                    stop_gap: stop_gap,
                })
            }
        }

        ElectrumTester.run();
    crate::bdk_blockchain_tests! {
        fn test_instance(test_client: &TestClient) -> ElectrumBlockchain {
            ElectrumBlockchain::from(Client::new(&test_client.electrsd.electrum_url).unwrap())
        }
    }
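The electrum sync above sets `chunk_size = stop_gap + 1` so that a single all-empty chunk of script histories is enough to prove an unused gap larger than `stop_gap`, at which point address derivation can stop. A sketch of that gap-limit rule on its own (hypothetical helper operating on history lengths, not the real `script_sync` state machine):

```rust
// Hypothetical gap-limit check: stop deriving addresses once the trailing
// run of scripts with no transaction history exceeds `stop_gap`.
fn should_stop(history_lengths: &[usize], stop_gap: usize) -> bool {
    let trailing_empty = history_lengths
        .iter()
        .rev()
        .take_while(|&&n| n == 0)
        .count();
    trailing_empty > stop_gap
}

fn main() {
    // With stop_gap = 2, three trailing unused scripts are needed to stop.
    assert!(!should_stop(&[1, 0, 0], 2));
    assert!(should_stop(&[1, 0, 0, 0], 2));
    // A later used script resets the trailing gap.
    assert!(!should_stop(&[1, 0, 0, 1, 0], 2));
}
```

Requesting exactly `stop_gap + 1` scripts per batch means the server round-trip count stays proportional to the number of used addresses, while still guaranteeing the gap can be observed within one chunk.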
433 src/blockchain/esplora.rs (new file)
@@ -0,0 +1,433 @@
|
||||
// Bitcoin Dev Kit
|
||||
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
|
||||
//
|
||||
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
|
||||
//
|
||||
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
|
||||
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
|
||||
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
|
||||
// You may not use this file except in accordance with one or both of these
|
||||
// licenses.
|
||||
|
||||
//! Esplora
|
||||
//!
|
||||
//! This module defines a [`Blockchain`] struct that can query an Esplora backend
|
||||
//! populate the wallet's [database](crate::database::Database) by
|
||||
//!
|
||||
//! ## Example
|
||||
//!
|
||||
//! ```no_run
|
||||
//! # use bdk::blockchain::esplora::EsploraBlockchain;
|
||||
//! let blockchain = EsploraBlockchain::new("https://blockstream.info/testnet/api", None, 20);
|
||||
//! # Ok::<(), bdk::Error>(())
|
||||
//! ```
|
||||
|
||||
use std::collections::{HashMap, HashSet};
|
||||
use std::fmt;
|
||||
|
||||
use bitcoin::consensus::{self, deserialize, serialize};
|
||||
use bitcoin::hashes::hex::{FromHex, ToHex};
|
||||
use bitcoin::hashes::{sha256, Hash};
|
||||
use bitcoin::{BlockHash, BlockHeader, Script, Transaction, Txid};
|
||||
use futures::stream::{self, FuturesOrdered, StreamExt, TryStreamExt};
|
||||
#[allow(unused_imports)]
|
||||
use log::{debug, error, info, trace};
|
||||
use reqwest::{Client, StatusCode};
|
||||
use serde::Deserialize;
|
||||
|
||||
use crate::database::BatchDatabase;
|
||||
use crate::error::Error;
|
||||
use crate::wallet::utils::ChunksIterator;
|
||||
use crate::FeeRate;
|
||||
|
||||
use super::*;
|
||||
|
||||
use self::utils::{ElectrumLikeSync, ElsGetHistoryRes};
|
||||
|
||||
const DEFAULT_CONCURRENT_REQUESTS: u8 = 4;
|
||||
|
||||
#[derive(Debug)]
|
||||
struct UrlClient {
|
||||
url: String,
|
||||
// We use the async client instead of the blocking one because it automatically uses `fetch`
|
||||
// when the target platform is wasm32.
|
||||
client: Client,
|
||||
concurrency: u8,
|
||||
}
|
||||
|
||||
/// Structure that implements the logic to sync with Esplora
|
||||
///
|
||||
/// ## Example
|
||||
/// See the [`blockchain::esplora`](crate::blockchain::esplora) module for a usage example.
|
||||
#[derive(Debug)]
|
||||
pub struct EsploraBlockchain {
|
||||
url_client: UrlClient,
|
||||
stop_gap: usize,
|
||||
}
|
||||
|
||||
impl std::convert::From<UrlClient> for EsploraBlockchain {
|
||||
fn from(url_client: UrlClient) -> Self {
|
||||
EsploraBlockchain {
|
||||
url_client,
|
||||
stop_gap: 20,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl EsploraBlockchain {
|
||||
/// Create a new instance of the client from a base URL
|
||||
pub fn new(base_url: &str, concurrency: Option<u8>, stop_gap: usize) -> Self {
|
||||
EsploraBlockchain {
|
||||
url_client: UrlClient {
|
||||
url: base_url.to_string(),
|
||||
client: Client::new(),
|
||||
concurrency: concurrency.unwrap_or(DEFAULT_CONCURRENT_REQUESTS),
|
||||
},
|
||||
stop_gap,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[maybe_async]
|
||||
impl Blockchain for EsploraBlockchain {
|
||||
fn get_capabilities(&self) -> HashSet<Capability> {
|
||||
vec![
|
||||
Capability::FullHistory,
|
||||
Capability::GetAnyTx,
|
||||
Capability::AccurateFees,
|
||||
]
|
||||
.into_iter()
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn setup<D: BatchDatabase, P: Progress>(
|
||||
&self,
|
||||
database: &mut D,
|
||||
progress_update: P,
|
||||
) -> Result<(), Error> {
|
||||
maybe_await!(self
|
||||
.url_client
|
||||
.electrum_like_setup(self.stop_gap, database, progress_update))
|
||||
}
|
||||
|
||||
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
|
||||
Ok(await_or_block!(self.url_client._get_tx(txid))?)
|
||||
}
|
||||
|
||||
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
|
||||
Ok(await_or_block!(self.url_client._broadcast(tx))?)
|
||||
}
|
||||
|
||||
fn get_height(&self) -> Result<u32, Error> {
|
||||
Ok(await_or_block!(self.url_client._get_height())?)
|
||||
}
|
||||
|
||||
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
|
||||
let estimates = await_or_block!(self.url_client._get_fee_estimates())?;
|
||||
|
||||
let fee_val = estimates
|
||||
.into_iter()
|
||||
.map(|(k, v)| Ok::<_, std::num::ParseIntError>((k.parse::<usize>()?, v)))
|
||||
.collect::<Result<Vec<_>, _>>()
|
||||
.map_err(|e| Error::Generic(e.to_string()))?
|
||||
.into_iter()
|
||||
.take_while(|(k, _)| k <= &target)
|
||||
.map(|(_, v)| v)
|
||||
.last()
|
||||
.unwrap_or(1.0);
|
||||
|
||||
Ok(FeeRate::from_sat_per_vb(fee_val as f32))
|
||||
}
|
||||
}
|
||||
|
||||
impl UrlClient {
|
||||
fn script_to_scripthash(script: &Script) -> String {
|
||||
sha256::Hash::hash(script.as_bytes()).into_inner().to_hex()
|
||||
}
|
||||
|
||||
async fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
|
||||
let resp = self
|
||||
.client
|
||||
.get(&format!("{}/tx/{}/raw", self.url, txid))
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
if let StatusCode::NOT_FOUND = resp.status() {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
Ok(Some(deserialize(&resp.error_for_status()?.bytes().await?)?))
|
||||
}
|
||||
|
||||
async fn _get_tx_no_opt(&self, txid: &Txid) -> Result<Transaction, EsploraError> {
|
||||
match self._get_tx(txid).await {
|
||||
Ok(Some(tx)) => Ok(tx),
|
||||
Ok(None) => Err(EsploraError::TransactionNotFound(*txid)),
|
||||
Err(e) => Err(e),
|
||||
}
|
||||
}
|
||||
|
||||
async fn _get_header(&self, block_height: u32) -> Result<BlockHeader, EsploraError> {
|
||||
let resp = self
|
||||
.client
|
||||
.get(&format!("{}/block-height/{}", self.url, block_height))
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
if let StatusCode::NOT_FOUND = resp.status() {
|
||||
return Err(EsploraError::HeaderHeightNotFound(block_height));
|
||||
}
|
||||
let bytes = resp.bytes().await?;
|
||||
let hash = std::str::from_utf8(&bytes)
|
||||
.map_err(|_| EsploraError::HeaderHeightNotFound(block_height))?;
|
||||
|
||||
let resp = self
|
||||
.client
|
||||
.get(&format!("{}/block/{}/header", self.url, hash))
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
let header = deserialize(&Vec::from_hex(&resp.text().await?)?)?;
|
||||
|
||||
Ok(header)
|
||||
}
|
||||
|
||||
async fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
|
||||
self.client
|
||||
.post(&format!("{}/tx", self.url))
|
||||
.body(serialize(transaction).to_hex())
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn _get_height(&self) -> Result<u32, EsploraError> {
|
||||
let req = self
|
||||
.client
|
||||
.get(&format!("{}/blocks/tip/height", self.url))
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
Ok(req.error_for_status()?.text().await?.parse()?)
|
||||
}
|
||||
|
||||
async fn _script_get_history(
|
||||
&self,
|
||||
script: &Script,
|
||||
) -> Result<Vec<ElsGetHistoryRes>, EsploraError> {
|
||||
let mut result = Vec::new();
|
||||
let scripthash = Self::script_to_scripthash(script);
|
||||
|
||||
// Add the unconfirmed transactions first
|
||||
result.extend(
|
||||
self.client
|
||||
.get(&format!(
|
||||
"{}/scripthash/{}/txs/mempool",
|
||||
self.url, scripthash
|
||||
))
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?
|
||||
.json::<Vec<EsploraGetHistory>>()
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|x| ElsGetHistoryRes {
|
||||
tx_hash: x.txid,
|
||||
height: x.status.block_height.unwrap_or(0) as i32,
|
||||
}),
|
||||
);
|
||||
|
||||
debug!(
|
||||
"Found {} mempool txs for {} - {:?}",
|
||||
result.len(),
|
||||
scripthash,
|
||||
script
|
||||
);
|
||||
|
||||
// Then go through all the pages of confirmed transactions
|
||||
let mut last_txid = String::new();
|
||||
loop {
|
||||
let response = self
|
||||
.client
|
||||
.get(&format!(
|
||||
"{}/scripthash/{}/txs/chain/{}",
|
||||
self.url, scripthash, last_txid
|
||||
))
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?
|
||||
.json::<Vec<EsploraGetHistory>>()
|
||||
.await?;
|
||||
let len = response.len();
|
||||
if let Some(elem) = response.last() {
|
||||
last_txid = elem.txid.to_hex();
|
||||
}
|
||||
|
||||
debug!("... adding {} confirmed transactions", len);
|
||||
|
||||
result.extend(response.into_iter().map(|x| ElsGetHistoryRes {
|
||||
tx_hash: x.txid,
|
||||
height: x.status.block_height.unwrap_or(0) as i32,
|
||||
}));
|
||||
|
||||
if len < 25 {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(result)
|
||||
}
|
||||
|
||||
async fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
|
||||
Ok(self
|
||||
.client
|
||||
.get(&format!("{}/fee-estimates", self.url,))
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?
|
||||
.json::<HashMap<String, f64>>()
|
||||
.await?)
|
||||
}
|
||||
}
|
||||
|
||||
#[maybe_async]
|
||||
impl ElectrumLikeSync for UrlClient {
|
||||
fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
|
||||
&self,
|
||||
scripts: I,
|
||||
) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error> {
|
||||
let future = async {
|
||||
let mut results = vec![];
|
||||
for chunk in ChunksIterator::new(scripts.into_iter(), self.concurrency as usize) {
|
||||
let mut futs = FuturesOrdered::new();
|
||||
for script in chunk {
|
||||
futs.push(self._script_get_history(&script));
|
||||
}
|
||||
let partial_results: Vec<Vec<ElsGetHistoryRes>> = futs.try_collect().await?;
|
||||
results.extend(partial_results);
|
||||
}
|
||||
Ok(stream::iter(results).collect().await)
|
||||
};
|
||||
|
||||
await_or_block!(future)
|
||||
}
|
||||
|
||||
fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid>>(
|
||||
&self,
|
||||
txids: I,
|
||||
) -> Result<Vec<Transaction>, Error> {
|
||||
let future = async {
|
||||
let mut results = vec![];
|
||||
for chunk in ChunksIterator::new(txids.into_iter(), self.concurrency as usize) {
|
||||
let mut futs = FuturesOrdered::new();
|
||||
for txid in chunk {
|
||||
futs.push(self._get_tx_no_opt(&txid));
|
||||
}
|
||||
let partial_results: Vec<Transaction> = futs.try_collect().await?;
|
||||
results.extend(partial_results);
|
||||
}
|
||||
Ok(stream::iter(results).collect().await)
|
||||
};
|
||||
|
||||
await_or_block!(future)
|
||||
}
|
||||
|
||||
fn els_batch_block_header<I: IntoIterator<Item = u32>>(
|
||||
&self,
|
||||
heights: I,
|
||||
) -> Result<Vec<BlockHeader>, Error> {
|
||||
let future = async {
|
||||
let mut results = vec![];
|
||||
for chunk in ChunksIterator::new(heights.into_iter(), self.concurrency as usize) {
|
||||
let mut futs = FuturesOrdered::new();
|
||||
for height in chunk {
|
||||
futs.push(self._get_header(height));
|
||||
}
|
||||
let partial_results: Vec<BlockHeader> = futs.try_collect().await?;
|
||||
results.extend(partial_results);
|
||||
}
|
||||
Ok(stream::iter(results).collect().await)
|
||||
};
|
||||
|
||||
await_or_block!(future)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct EsploraGetHistoryStatus {
|
||||
block_height: Option<usize>,
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct EsploraGetHistory {
|
||||
txid: Txid,
|
||||
status: EsploraGetHistoryStatus,
|
||||
}
|
||||
|
||||
/// Configuration for an [`EsploraBlockchain`]
|
||||
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
|
||||
pub struct EsploraBlockchainConfig {
|
||||
/// Base URL of the esplora service
|
||||
///
|
||||
/// eg. `https://blockstream.info/api/`
|
||||
pub base_url: String,
|
||||
/// Number of parallel requests sent to the esplora service (default: 4)
|
||||
pub concurrency: Option<u8>,
|
||||
/// Stop searching addresses for transactions after finding an unused gap of this length
|
||||
pub stop_gap: usize,
|
||||
}
|
||||
|
||||
impl ConfigurableBlockchain for EsploraBlockchain {
|
||||
type Config = EsploraBlockchainConfig;
|
||||
|
||||
fn from_config(config: &Self::Config) -> Result<Self, Error> {
|
||||
Ok(EsploraBlockchain::new(
|
||||
config.base_url.as_str(),
|
||||
config.concurrency,
|
||||
config.stop_gap,
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
/// Errors that can happen during a sync with [`EsploraBlockchain`]
|
||||
#[derive(Debug)]
|
||||
pub enum EsploraError {
|
||||
/// Error with the HTTP call
|
||||
Reqwest(reqwest::Error),
|
||||
/// Invalid number returned
|
||||
Parsing(std::num::ParseIntError),
|
||||
/// Invalid Bitcoin data returned
|
||||
BitcoinEncoding(bitcoin::consensus::encode::Error),
|
||||
/// Invalid Hex data returned
|
||||
Hex(bitcoin::hashes::hex::Error),
|
||||
|
||||
/// Transaction not found
|
||||
TransactionNotFound(Txid),
|
||||
/// Header height not found
|
||||
HeaderHeightNotFound(u32),
|
||||
/// Header hash not found
|
||||
HeaderHashNotFound(BlockHash),
|
||||
}
|
||||
|
||||
impl fmt::Display for EsploraError {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "{:?}", self)
|
||||
}
|
||||
}
|
||||
|
||||
impl std::error::Error for EsploraError {}
|
||||
|
||||
impl_error!(reqwest::Error, Reqwest, EsploraError);
|
||||
impl_error!(std::num::ParseIntError, Parsing, EsploraError);
|
||||
impl_error!(consensus::encode::Error, BitcoinEncoding, EsploraError);
|
||||
impl_error!(bitcoin::hashes::hex::Error, Hex, EsploraError);
|
||||
|
||||
#[cfg(test)]
|
||||
#[cfg(feature = "test-esplora")]
|
||||
crate::bdk_blockchain_tests! {
|
||||
fn test_instance(test_client: &TestClient) -> EsploraBlockchain {
|
||||
EsploraBlockchain::new(&format!("http://{}",test_client.electrsd.esplora_url.as_ref().unwrap()), None, 20)
|
||||
}
|
||||
}
|
||||
@@ -1,117 +0,0 @@
|
||||
//! structs from the esplora API
//!
//! see: <https://github.com/Blockstream/esplora/blob/master/API.md>
use crate::BlockTime;
use bitcoin::{OutPoint, Script, Transaction, TxIn, TxOut, Txid, Witness};

#[derive(serde::Deserialize, Clone, Debug)]
pub struct PrevOut {
    pub value: u64,
    pub scriptpubkey: Script,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vin {
    pub txid: Txid,
    pub vout: u32,
    // None if coinbase
    pub prevout: Option<PrevOut>,
    pub scriptsig: Script,
    #[serde(deserialize_with = "deserialize_witness", default)]
    pub witness: Vec<Vec<u8>>,
    pub sequence: u32,
    pub is_coinbase: bool,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vout {
    pub value: u64,
    pub scriptpubkey: Script,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct TxStatus {
    pub confirmed: bool,
    pub block_height: Option<u32>,
    pub block_time: Option<u64>,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Tx {
    pub txid: Txid,
    pub version: i32,
    pub locktime: u32,
    pub vin: Vec<Vin>,
    pub vout: Vec<Vout>,
    pub status: TxStatus,
    pub fee: u64,
}

impl Tx {
    pub fn to_tx(&self) -> Transaction {
        Transaction {
            version: self.version,
            lock_time: self.locktime,
            input: self
                .vin
                .iter()
                .cloned()
                .map(|vin| TxIn {
                    previous_output: OutPoint {
                        txid: vin.txid,
                        vout: vin.vout,
                    },
                    script_sig: vin.scriptsig,
                    sequence: vin.sequence,
                    witness: Witness::from_vec(vin.witness),
                })
                .collect(),
            output: self
                .vout
                .iter()
                .cloned()
                .map(|vout| TxOut {
                    value: vout.value,
                    script_pubkey: vout.scriptpubkey,
                })
                .collect(),
        }
    }

    pub fn confirmation_time(&self) -> Option<BlockTime> {
        match self.status {
            TxStatus {
                confirmed: true,
                block_height: Some(height),
                block_time: Some(timestamp),
            } => Some(BlockTime { timestamp, height }),
            _ => None,
        }
    }

    pub fn previous_outputs(&self) -> Vec<Option<TxOut>> {
        self.vin
            .iter()
            .cloned()
            .map(|vin| {
                vin.prevout.map(|po| TxOut {
                    script_pubkey: po.scriptpubkey,
                    value: po.value,
                })
            })
            .collect()
    }
}

fn deserialize_witness<'de, D>(d: D) -> Result<Vec<Vec<u8>>, D::Error>
where
    D: serde::de::Deserializer<'de>,
{
    use crate::serde::Deserialize;
    use bitcoin::hashes::hex::FromHex;
    let list = Vec::<String>::deserialize(d)?;
    list.into_iter()
        .map(|hex_str| Vec::<u8>::from_hex(&hex_str))
        .collect::<Result<Vec<Vec<u8>>, _>>()
        .map_err(serde::de::Error::custom)
}
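The `deserialize_witness` helper above decodes each witness item from its hex string via `Vec::<u8>::from_hex`. A minimal std-only sketch of that per-item decoding, where `decode_hex` is a hypothetical stand-in for `bitcoin`'s `FromHex`:

```rust
// Hypothetical stand-in for `Vec::<u8>::from_hex`: decode one hex string
// into raw bytes, rejecting odd-length or non-hex input.
fn decode_hex(s: &str) -> Result<Vec<u8>, String> {
    if s.len() % 2 != 0 {
        return Err(format!("odd-length hex string: {:?}", s));
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

fn main() {
    // A two-element witness stack as the esplora API serializes it.
    let witness: Vec<Vec<u8>> = ["01ff", "ab"]
        .iter()
        .map(|h| decode_hex(h).unwrap())
        .collect();
    assert_eq!(witness, vec![vec![0x01, 0xff], vec![0xab]]);
    // Invalid items surface as errors, which the real helper maps
    // into a serde error via `serde::de::Error::custom`.
    assert!(decode_hex("abc").is_err());
    println!("{:?}", witness);
}
```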
@@ -1,246 +0,0 @@
//! Esplora
//!
//! This module defines an [`EsploraBlockchain`] struct that can query an Esplora
//! backend to populate the wallet's [database](crate::database::Database).
//!
//! ## Example
//!
//! ```no_run
//! # use bdk::blockchain::esplora::EsploraBlockchain;
//! let blockchain = EsploraBlockchain::new("https://blockstream.info/testnet/api", 20);
//! # Ok::<(), bdk::Error>(())
//! ```
//!
//! The Esplora blockchain can use either `ureq` or `reqwest` for the HTTP client,
//! depending on your needs (blocking or async respectively).
//!
//! Please note: to configure the Esplora HTTP client correctly, use one of:
//! Blocking: --features='esplora,ureq'
//! Async: --features='async-interface,esplora,reqwest' --no-default-features
use std::collections::HashMap;
use std::fmt;
use std::io;

use bitcoin::consensus;
use bitcoin::{BlockHash, Txid};

use crate::error::Error;
use crate::FeeRate;

#[cfg(feature = "reqwest")]
mod reqwest;

#[cfg(feature = "reqwest")]
pub use self::reqwest::*;

#[cfg(feature = "ureq")]
mod ureq;

#[cfg(feature = "ureq")]
pub use self::ureq::*;

mod api;

fn into_fee_rate(target: usize, estimates: HashMap<String, f64>) -> Result<FeeRate, Error> {
    let fee_val = {
        let mut pairs = estimates
            .into_iter()
            .filter_map(|(k, v)| Some((k.parse::<usize>().ok()?, v)))
            .collect::<Vec<_>>();
        pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
        pairs
            .into_iter()
            .find(|(k, _)| k <= &target)
            .map(|(_, v)| v)
            .unwrap_or(1.0)
    };
    Ok(FeeRate::from_sat_per_vb(fee_val as f32))
}
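The selection rule in `into_fee_rate` (sort the confirmation targets in descending order, take the first one at or below the requested target, and fall back to 1.0 sat/vB) can be sketched standalone over `std` types; `pick_fee` is a hypothetical name, and the map mirrors the shape of esplora's `/fee-estimates` response:

```rust
use std::collections::HashMap;

// Mirrors the selection in `into_fee_rate`: keys are confirmation targets
// (in blocks) as strings, values are fee rates in sat/vB.
fn pick_fee(target: usize, estimates: &HashMap<String, f64>) -> f64 {
    let mut pairs: Vec<(usize, f64)> = estimates
        .iter()
        .filter_map(|(k, v)| Some((k.parse::<usize>().ok()?, *v)))
        .collect();
    // Sort descending so the first key <= target is the closest one at or below it.
    pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
    pairs
        .into_iter()
        .find(|(k, _)| *k <= target)
        .map(|(_, v)| v)
        .unwrap_or(1.0)
}

fn main() {
    let mut est = HashMap::new();
    est.insert("1".to_string(), 5.0);
    est.insert("6".to_string(), 2.2);
    est.insert("25".to_string(), 1.0);
    assert_eq!(pick_fee(6, &est), 2.2); // exact target
    assert_eq!(pick_fee(10, &est), 2.2); // inherits from the 6-block estimate
    assert_eq!(pick_fee(0, &est), 1.0); // nothing at or below: default floor
    println!("ok");
}
```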

/// Errors that can happen during a sync with [`EsploraBlockchain`]
#[derive(Debug)]
pub enum EsploraError {
    /// Error during ureq HTTP request
    #[cfg(feature = "ureq")]
    Ureq(::ureq::Error),
    /// Transport error during the ureq HTTP call
    #[cfg(feature = "ureq")]
    UreqTransport(::ureq::Transport),
    /// Error during reqwest HTTP request
    #[cfg(feature = "reqwest")]
    Reqwest(::reqwest::Error),
    /// HTTP response error
    HttpResponse(u16),
    /// IO error during ureq response read
    Io(io::Error),
    /// No header found in ureq response
    NoHeader,
    /// Invalid number returned
    Parsing(std::num::ParseIntError),
    /// Invalid Bitcoin data returned
    BitcoinEncoding(bitcoin::consensus::encode::Error),
    /// Invalid Hex data returned
    Hex(bitcoin::hashes::hex::Error),

    /// Transaction not found
    TransactionNotFound(Txid),
    /// Header height not found
    HeaderHeightNotFound(u32),
    /// Header hash not found
    HeaderHashNotFound(BlockHash),
}

impl fmt::Display for EsploraError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:?}", self)
    }
}

/// Configuration for an [`EsploraBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct EsploraBlockchainConfig {
    /// Base URL of the esplora service
    ///
    /// eg. `https://blockstream.info/api/`
    pub base_url: String,
    /// Optional URL of the proxy to use to make requests to the Esplora server
    ///
    /// The string should be formatted as: `<protocol>://<user>:<password>@host:<port>`.
    ///
    /// Note that the format of this value and the supported protocols change slightly between the
    /// sync version of esplora (using `ureq`) and the async version (using `reqwest`). For more
    /// details check the documentation of the two crates. Both of them are compiled with
    /// the `socks` feature enabled.
    ///
    /// The proxy is ignored when targeting `wasm32`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub proxy: Option<String>,
    /// Number of parallel requests sent to the esplora service (default: 4)
    #[serde(skip_serializing_if = "Option::is_none")]
    pub concurrency: Option<u8>,
    /// Stop searching addresses for transactions after finding an unused gap of this length.
    pub stop_gap: usize,
    /// Socket timeout.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub timeout: Option<u64>,
}

impl EsploraBlockchainConfig {
    /// Create a config with default values given the base url and stop gap
    pub fn new(base_url: String, stop_gap: usize) -> Self {
        Self {
            base_url,
            proxy: None,
            timeout: None,
            stop_gap,
            concurrency: None,
        }
    }
}

impl std::error::Error for EsploraError {}

#[cfg(feature = "ureq")]
impl_error!(::ureq::Transport, UreqTransport, EsploraError);
#[cfg(feature = "reqwest")]
impl_error!(::reqwest::Error, Reqwest, EsploraError);
impl_error!(io::Error, Io, EsploraError);
impl_error!(std::num::ParseIntError, Parsing, EsploraError);
impl_error!(consensus::encode::Error, BitcoinEncoding, EsploraError);
impl_error!(bitcoin::hashes::hex::Error, Hex, EsploraError);

#[cfg(test)]
#[cfg(feature = "test-esplora")]
crate::bdk_blockchain_tests! {
    fn test_instance(test_client: &TestClient) -> EsploraBlockchain {
        EsploraBlockchain::new(&format!("http://{}", test_client.electrsd.esplora_url.as_ref().unwrap()), 20)
    }
}

const DEFAULT_CONCURRENT_REQUESTS: u8 = 4;

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn feerate_parsing() {
        let esplora_fees = serde_json::from_str::<HashMap<String, f64>>(
            r#"{
                "25": 1.015,
                "5": 2.3280000000000003,
                "12": 2.0109999999999997,
                "15": 1.018,
                "17": 1.018,
                "11": 2.0109999999999997,
                "3": 3.01,
                "2": 4.9830000000000005,
                "6": 2.2359999999999998,
                "21": 1.018,
                "13": 1.081,
                "7": 2.2359999999999998,
                "8": 2.2359999999999998,
                "16": 1.018,
                "20": 1.018,
                "22": 1.017,
                "23": 1.017,
                "504": 1,
                "9": 2.2359999999999998,
                "14": 1.018,
                "10": 2.0109999999999997,
                "24": 1.017,
                "1008": 1,
                "1": 4.9830000000000005,
                "4": 2.3280000000000003,
                "19": 1.018,
                "144": 1,
                "18": 1.018
            }
            "#,
        )
        .unwrap();
        assert_eq!(
            into_fee_rate(6, esplora_fees.clone()).unwrap(),
            FeeRate::from_sat_per_vb(2.236)
        );
        assert_eq!(
            into_fee_rate(26, esplora_fees).unwrap(),
            FeeRate::from_sat_per_vb(1.015),
            "should inherit from value for 25"
        );
    }

    #[test]
    #[cfg(feature = "test-esplora")]
    fn test_esplora_with_variable_configs() {
        use crate::testutils::{
            blockchain_tests::TestClient,
            configurable_blockchain_tests::ConfigurableBlockchainTester,
        };

        struct EsploraTester;

        impl ConfigurableBlockchainTester<EsploraBlockchain> for EsploraTester {
            const BLOCKCHAIN_NAME: &'static str = "Esplora";

            fn config_with_stop_gap(
                &self,
                test_client: &mut TestClient,
                stop_gap: usize,
            ) -> Option<EsploraBlockchainConfig> {
                Some(EsploraBlockchainConfig {
                    base_url: format!(
                        "http://{}",
                        test_client.electrsd.esplora_url.as_ref().unwrap()
                    ),
                    proxy: None,
                    concurrency: None,
                    stop_gap,
                    timeout: None,
                })
            }
        }

        EsploraTester.run();
    }
}
@@ -1,350 +0,0 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.

//! Esplora by way of `reqwest` HTTP client.

use std::collections::{HashMap, HashSet};

use bitcoin::consensus::{deserialize, serialize};
use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::hashes::{sha256, Hash};
use bitcoin::{BlockHeader, Script, Transaction, Txid};

#[allow(unused_imports)]
use log::{debug, error, info, trace};

use ::reqwest::{Client, StatusCode};
use futures::stream::{FuturesOrdered, TryStreamExt};

use super::api::Tx;
use crate::blockchain::esplora::EsploraError;
use crate::blockchain::*;
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;

#[derive(Debug)]
struct UrlClient {
    url: String,
    // We use the async client instead of the blocking one because it automatically uses `fetch`
    // when the target platform is wasm32.
    client: Client,
    concurrency: u8,
}

/// Structure that implements the logic to sync with Esplora
///
/// ## Example
/// See the [`blockchain::esplora`](crate::blockchain::esplora) module for a usage example.
#[derive(Debug)]
pub struct EsploraBlockchain {
    url_client: UrlClient,
    stop_gap: usize,
}

impl std::convert::From<UrlClient> for EsploraBlockchain {
    fn from(url_client: UrlClient) -> Self {
        EsploraBlockchain {
            url_client,
            stop_gap: 20,
        }
    }
}

impl EsploraBlockchain {
    /// Create a new instance of the client from a base URL and `stop_gap`.
    pub fn new(base_url: &str, stop_gap: usize) -> Self {
        EsploraBlockchain {
            url_client: UrlClient {
                url: base_url.to_string(),
                client: Client::new(),
                concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
            },
            stop_gap,
        }
    }

    /// Set the concurrency to use when doing batch queries against the Esplora instance.
    pub fn with_concurrency(mut self, concurrency: u8) -> Self {
        self.url_client.concurrency = concurrency;
        self
    }
}

#[maybe_async]
impl Blockchain for EsploraBlockchain {
    fn get_capabilities(&self) -> HashSet<Capability> {
        vec![
            Capability::FullHistory,
            Capability::GetAnyTx,
            Capability::AccurateFees,
        ]
        .into_iter()
        .collect()
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        Ok(await_or_block!(self.url_client._broadcast(tx))?)
    }

    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
        let estimates = await_or_block!(self.url_client._get_fee_estimates())?;
        super::into_fee_rate(target, estimates)
    }
}

impl StatelessBlockchain for EsploraBlockchain {}

#[maybe_async]
impl GetHeight for EsploraBlockchain {
    fn get_height(&self) -> Result<u32, Error> {
        Ok(await_or_block!(self.url_client._get_height())?)
    }
}

#[maybe_async]
impl GetTx for EsploraBlockchain {
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(await_or_block!(self.url_client._get_tx(txid))?)
    }
}

#[maybe_async]
impl GetBlockHash for EsploraBlockchain {
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
        let block_header = await_or_block!(self.url_client._get_header(height as u32))?;
        Ok(block_header.block_hash())
    }
}

#[maybe_async]
impl WalletSync for EsploraBlockchain {
    fn wallet_setup<D: BatchDatabase>(
        &self,
        database: &mut D,
        _progress_update: Box<dyn Progress>,
    ) -> Result<(), Error> {
        use crate::blockchain::script_sync::Request;
        let mut request = script_sync::start(database, self.stop_gap)?;
        let mut tx_index: HashMap<Txid, Tx> = HashMap::new();

        let batch_update = loop {
            request = match request {
                Request::Script(script_req) => {
                    let futures: FuturesOrdered<_> = script_req
                        .request()
                        .take(self.url_client.concurrency as usize)
                        .map(|script| async move {
                            let mut related_txs: Vec<Tx> =
                                self.url_client._scripthash_txs(script, None).await?;

                            let n_confirmed =
                                related_txs.iter().filter(|tx| tx.status.confirmed).count();
                            // esplora pages on 25 confirmed transactions. If there's 25 or more we
                            // keep requesting to see if there's more.
                            if n_confirmed >= 25 {
                                loop {
                                    let new_related_txs: Vec<Tx> = self
                                        .url_client
                                        ._scripthash_txs(
                                            script,
                                            Some(related_txs.last().unwrap().txid),
                                        )
                                        .await?;
                                    let n = new_related_txs.len();
                                    related_txs.extend(new_related_txs);
                                    // we've reached the end
                                    if n < 25 {
                                        break;
                                    }
                                }
                            }
                            Result::<_, Error>::Ok(related_txs)
                        })
                        .collect();
                    let txs_per_script: Vec<Vec<Tx>> = await_or_block!(futures.try_collect())?;
                    let mut satisfaction = vec![];

                    for txs in txs_per_script {
                        satisfaction.push(
                            txs.iter()
                                .map(|tx| (tx.txid, tx.status.block_height))
                                .collect(),
                        );
                        for tx in txs {
                            tx_index.insert(tx.txid, tx);
                        }
                    }

                    script_req.satisfy(satisfaction)?
                }
                Request::Conftime(conftime_req) => {
                    let conftimes = conftime_req
                        .request()
                        .map(|txid| {
                            tx_index
                                .get(txid)
                                .expect("must be in index")
                                .confirmation_time()
                        })
                        .collect();
                    conftime_req.satisfy(conftimes)?
                }
                Request::Tx(tx_req) => {
                    let full_txs = tx_req
                        .request()
                        .map(|txid| {
                            let tx = tx_index.get(txid).expect("must be in index");
                            Ok((tx.previous_outputs(), tx.to_tx()))
                        })
                        .collect::<Result<_, Error>>()?;
                    tx_req.satisfy(full_txs)?
                }
                Request::Finish(batch_update) => break batch_update,
            }
        };

        database.commit_batch(batch_update)?;

        Ok(())
    }
}
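The paging loop in `wallet_setup` keeps requesting from the last seen txid until a page comes back with fewer than 25 entries. A simplified standalone sketch of that termination condition, where `fetch_all` and its closure-based fetcher are hypothetical and the plain length check stands in for the original's count of confirmed transactions:

```rust
const PAGE_SIZE: usize = 25;

// `fetch_page` stands in for `_scripthash_txs`: given the last item already
// seen (None for the first page), it returns the next page of at most
// PAGE_SIZE items.
fn fetch_all<F>(mut fetch_page: F) -> Vec<u32>
where
    F: FnMut(Option<u32>) -> Vec<u32>,
{
    let mut all = fetch_page(None);
    // Only a full first page can mean there is more to fetch.
    if all.len() >= PAGE_SIZE {
        loop {
            let page = fetch_page(all.last().copied());
            let n = page.len();
            all.extend(page);
            if n < PAGE_SIZE {
                break; // short page: we've reached the end
            }
        }
    }
    all
}

fn main() {
    // Simulate a backend holding 60 items, served in pages of 25.
    let items: Vec<u32> = (0..60).collect();
    let fetched = fetch_all(|last| {
        let start = match last {
            Some(l) => (l as usize) + 1,
            None => 0,
        };
        items[start..items.len().min(start + PAGE_SIZE)].to_vec()
    });
    assert_eq!(fetched, items); // three pages: 25 + 25 + 10
    println!("fetched {} items", fetched.len());
}
```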

impl UrlClient {
    async fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
        let resp = self
            .client
            .get(&format!("{}/tx/{}/raw", self.url, txid))
            .send()
            .await?;

        if let StatusCode::NOT_FOUND = resp.status() {
            return Ok(None);
        }

        Ok(Some(deserialize(&resp.error_for_status()?.bytes().await?)?))
    }

    async fn _get_tx_no_opt(&self, txid: &Txid) -> Result<Transaction, EsploraError> {
        match self._get_tx(txid).await {
            Ok(Some(tx)) => Ok(tx),
            Ok(None) => Err(EsploraError::TransactionNotFound(*txid)),
            Err(e) => Err(e),
        }
    }

    async fn _get_header(&self, block_height: u32) -> Result<BlockHeader, EsploraError> {
        let resp = self
            .client
            .get(&format!("{}/block-height/{}", self.url, block_height))
            .send()
            .await?;

        if let StatusCode::NOT_FOUND = resp.status() {
            return Err(EsploraError::HeaderHeightNotFound(block_height));
        }
        let bytes = resp.bytes().await?;
        let hash = std::str::from_utf8(&bytes)
            .map_err(|_| EsploraError::HeaderHeightNotFound(block_height))?;

        let resp = self
            .client
            .get(&format!("{}/block/{}/header", self.url, hash))
            .send()
            .await?;

        let header = deserialize(&Vec::from_hex(&resp.text().await?)?)?;

        Ok(header)
    }

    async fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
        self.client
            .post(&format!("{}/tx", self.url))
            .body(serialize(transaction).to_hex())
            .send()
            .await?
            .error_for_status()?;

        Ok(())
    }

    async fn _get_height(&self) -> Result<u32, EsploraError> {
        let req = self
            .client
            .get(&format!("{}/blocks/tip/height", self.url))
            .send()
            .await?;

        Ok(req.error_for_status()?.text().await?.parse()?)
    }

    async fn _scripthash_txs(
        &self,
        script: &Script,
        last_seen: Option<Txid>,
    ) -> Result<Vec<Tx>, EsploraError> {
        let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
        let url = match last_seen {
            Some(last_seen) => format!(
                "{}/scripthash/{}/txs/chain/{}",
                self.url, script_hash, last_seen
            ),
            None => format!("{}/scripthash/{}/txs", self.url, script_hash),
        };
        Ok(self
            .client
            .get(url)
            .send()
            .await?
            .error_for_status()?
            .json::<Vec<Tx>>()
            .await?)
    }

    async fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
        Ok(self
            .client
            .get(&format!("{}/fee-estimates", self.url))
            .send()
            .await?
            .error_for_status()?
            .json::<HashMap<String, f64>>()
            .await?)
    }
}

impl ConfigurableBlockchain for EsploraBlockchain {
    type Config = super::EsploraBlockchainConfig;

    fn from_config(config: &Self::Config) -> Result<Self, Error> {
        let map_e = |e: reqwest::Error| Error::Esplora(Box::new(e.into()));

        let mut blockchain = EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap);
        if let Some(concurrency) = config.concurrency {
            blockchain.url_client.concurrency = concurrency;
        }
        let mut builder = Client::builder();
        #[cfg(not(target_arch = "wasm32"))]
        if let Some(proxy) = &config.proxy {
            builder = builder.proxy(reqwest::Proxy::all(proxy).map_err(map_e)?);
        }

        #[cfg(not(target_arch = "wasm32"))]
        if let Some(timeout) = config.timeout {
            builder = builder.timeout(core::time::Duration::from_secs(timeout));
        }

        blockchain.url_client.client = builder.build().map_err(map_e)?;

        Ok(blockchain)
    }
}
@@ -1,388 +0,0 @@
// Bitcoin Dev Kit
|
||||
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
|
||||
//
|
||||
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
|
||||
//
|
||||
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
|
||||
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
|
||||
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
|
||||
// You may not use this file except in accordance with one or both of these
|
||||
// licenses.
|
||||
|
||||
//! Esplora by way of `ureq` HTTP client.
|
||||
|
||||
use std::collections::{HashMap, HashSet};
|
||||
use std::io;
|
||||
use std::io::Read;
|
||||
use std::time::Duration;
|
||||
|
||||
#[allow(unused_imports)]
|
||||
use log::{debug, error, info, trace};
|
||||
|
||||
use ureq::{Agent, Proxy, Response};
|
||||
|
||||
use bitcoin::consensus::{deserialize, serialize};
|
||||
use bitcoin::hashes::hex::{FromHex, ToHex};
|
||||
use bitcoin::hashes::{sha256, Hash};
|
||||
use bitcoin::{BlockHeader, Script, Transaction, Txid};
|
||||
|
||||
use super::api::Tx;
|
||||
use crate::blockchain::esplora::EsploraError;
|
||||
use crate::blockchain::*;
|
||||
use crate::database::BatchDatabase;
|
||||
use crate::error::Error;
|
||||
use crate::FeeRate;
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
struct UrlClient {
|
||||
url: String,
|
||||
agent: Agent,
|
||||
}
|
||||
|
||||
/// Structure that implements the logic to sync with Esplora
|
||||
///
|
||||
/// ## Example
|
||||
/// See the [`blockchain::esplora`](crate::blockchain::esplora) module for a usage example.
|
||||
#[derive(Debug)]
|
||||
pub struct EsploraBlockchain {
|
||||
url_client: UrlClient,
|
||||
stop_gap: usize,
|
||||
concurrency: u8,
|
||||
}
|
||||
|
||||
impl EsploraBlockchain {
|
||||
/// Create a new instance of the client from a base URL and the `stop_gap`.
|
||||
pub fn new(base_url: &str, stop_gap: usize) -> Self {
|
||||
EsploraBlockchain {
|
||||
url_client: UrlClient {
|
||||
url: base_url.to_string(),
|
||||
agent: Agent::new(),
|
||||
},
|
||||
concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
|
||||
stop_gap,
|
||||
}
|
||||
}
|
||||
|
||||
/// Set the inner `ureq` agent.
|
||||
pub fn with_agent(mut self, agent: Agent) -> Self {
|
||||
self.url_client.agent = agent;
|
||||
self
|
||||
}
|
||||
|
||||
/// Set the number of parallel requests the client can make.
|
||||
pub fn with_concurrency(mut self, concurrency: u8) -> Self {
|
||||
self.concurrency = concurrency;
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl Blockchain for EsploraBlockchain {
|
||||
fn get_capabilities(&self) -> HashSet<Capability> {
|
||||
vec![
|
||||
Capability::FullHistory,
|
||||
Capability::GetAnyTx,
|
||||
Capability::AccurateFees,
|
||||
]
|
||||
.into_iter()
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
|
||||
self.url_client._broadcast(tx)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
|
||||
let estimates = self.url_client._get_fee_estimates()?;
|
||||
super::into_fee_rate(target, estimates)
|
||||
}
|
||||
}
|
||||
|
||||
impl StatelessBlockchain for EsploraBlockchain {}
|
||||
|
||||
impl GetHeight for EsploraBlockchain {
|
||||
fn get_height(&self) -> Result<u32, Error> {
|
||||
Ok(self.url_client._get_height()?)
|
||||
}
|
||||
}
|
||||
|
||||
impl GetTx for EsploraBlockchain {
|
||||
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
|
||||
Ok(self.url_client._get_tx(txid)?)
|
||||
}
|
||||
}
|
||||
|
||||
impl GetBlockHash for EsploraBlockchain {
|
||||
fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
|
||||
let block_header = self.url_client._get_header(height as u32)?;
|
||||
Ok(block_header.block_hash())
|
||||
}
|
||||
}
|
||||
|
||||
impl WalletSync for EsploraBlockchain {
|
||||
fn wallet_setup<D: BatchDatabase>(
|
||||
&self,
|
||||
database: &mut D,
|
||||
_progress_update: Box<dyn Progress>,
|
||||
) -> Result<(), Error> {
|
||||
use crate::blockchain::script_sync::Request;
|
||||
let mut request = script_sync::start(database, self.stop_gap)?;
|
||||
let mut tx_index: HashMap<Txid, Tx> = HashMap::new();
|
||||
let batch_update = loop {
|
||||
request = match request {
|
||||
Request::Script(script_req) => {
|
||||
let scripts = script_req
|
||||
.request()
|
||||
.take(self.concurrency as usize)
|
||||
.cloned();
|
||||
|
||||
let mut handles = vec![];
|
||||
for script in scripts {
|
||||
let client = self.url_client.clone();
|
||||
// make each request in its own thread.
|
||||
handles.push(std::thread::spawn(move || {
|
||||
let mut related_txs: Vec<Tx> = client._scripthash_txs(&script, None)?;
|
||||
|
||||
let n_confirmed =
|
||||
related_txs.iter().filter(|tx| tx.status.confirmed).count();
|
||||
// esplora pages on 25 confirmed transactions. If there's 25 or more we
|
||||
// keep requesting to see if there's more.
|
||||
if n_confirmed >= 25 {
|
||||
loop {
|
||||
let new_related_txs: Vec<Tx> = client._scripthash_txs(
|
||||
&script,
|
||||
Some(related_txs.last().unwrap().txid),
|
||||
)?;
|
||||
let n = new_related_txs.len();
|
||||
related_txs.extend(new_related_txs);
|
||||
// we've reached the end
|
||||
if n < 25 {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
Result::<_, Error>::Ok(related_txs)
|
||||
}));
|
||||
}
|
||||
|
||||
let txs_per_script: Vec<Vec<Tx>> = handles
|
||||
.into_iter()
|
||||
.map(|handle| handle.join().unwrap())
|
||||
.collect::<Result<_, _>>()?;
|
||||
let mut satisfaction = vec![];
|
||||
|
||||
for txs in txs_per_script {
|
||||
satisfaction.push(
|
||||
txs.iter()
|
||||
.map(|tx| (tx.txid, tx.status.block_height))
|
||||
.collect(),
|
||||
);
|
||||
for tx in txs {
|
||||
tx_index.insert(tx.txid, tx);
|
||||
}
|
||||
}
|
||||
|
||||
script_req.satisfy(satisfaction)?
|
||||
}
|
||||
Request::Conftime(conftime_req) => {
|
||||
let conftimes = conftime_req
|
||||
.request()
|
||||
.map(|txid| {
|
||||
tx_index
|
||||
.get(txid)
|
||||
.expect("must be in index")
|
||||
.confirmation_time()
|
||||
})
|
||||
.collect();
|
||||
conftime_req.satisfy(conftimes)?
|
||||
}
|
||||
Request::Tx(tx_req) => {
|
||||
let full_txs = tx_req
|
||||
.request()
|
||||
.map(|txid| {
|
||||
let tx = tx_index.get(txid).expect("must be in index");
|
||||
Ok((tx.previous_outputs(), tx.to_tx()))
|
||||
})
|
||||
.collect::<Result<_, Error>>()?;
|
||||
tx_req.satisfy(full_txs)?
|
||||
}
|
||||
Request::Finish(batch_update) => break batch_update,
|
||||
}
|
||||
};
|
||||
|
||||
database.commit_batch(batch_update)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl UrlClient {
|
||||
fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
|
||||
let resp = self
|
||||
.agent
|
||||
.get(&format!("{}/tx/{}/raw", self.url, txid))
|
||||
.call();
|
||||
|
||||
match resp {
|
||||
Ok(resp) => Ok(Some(deserialize(&into_bytes(resp)?)?)),
|
||||
Err(ureq::Error::Status(code, _)) => {
|
||||
if is_status_not_found(code) {
|
||||
return Ok(None);
|
||||
}
|
||||
Err(EsploraError::HttpResponse(code))
|
||||
}
|
||||
Err(e) => Err(EsploraError::Ureq(e)),
|
||||
}
|
||||
}
|
||||
|
||||
fn _get_tx_no_opt(&self, txid: &Txid) -> Result<Transaction, EsploraError> {
|
||||
match self._get_tx(txid) {
|
||||
Ok(Some(tx)) => Ok(tx),
|
||||
Ok(None) => Err(EsploraError::TransactionNotFound(*txid)),
|
||||
Err(e) => Err(e),
|
||||
}
|
||||
}
|
||||
|
||||
fn _get_header(&self, block_height: u32) -> Result<BlockHeader, EsploraError> {
|
||||
let resp = self
|
||||
.agent
|
||||
.get(&format!("{}/block-height/{}", self.url, block_height))
|
            .call();

        let bytes = match resp {
            Ok(resp) => Ok(into_bytes(resp)?),
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }?;

        let hash = std::str::from_utf8(&bytes)
            .map_err(|_| EsploraError::HeaderHeightNotFound(block_height))?;

        let resp = self
            .agent
            .get(&format!("{}/block/{}/header", self.url, hash))
            .call();

        match resp {
            Ok(resp) => Ok(deserialize(&Vec::from_hex(&resp.into_string()?)?)?),
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }
    }

    fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
        let resp = self
            .agent
            .post(&format!("{}/tx", self.url))
            .send_string(&serialize(transaction).to_hex());

        match resp {
            Ok(_) => Ok(()), // We do not return the txid?
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }
    }

    fn _get_height(&self) -> Result<u32, EsploraError> {
        let resp = self
            .agent
            .get(&format!("{}/blocks/tip/height", self.url))
            .call();

        match resp {
            Ok(resp) => Ok(resp.into_string()?.parse()?),
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }
    }

    fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
        let resp = self
            .agent
            .get(&format!("{}/fee-estimates", self.url))
            .call();

        let map = match resp {
            Ok(resp) => {
                let map: HashMap<String, f64> = resp.into_json()?;
                Ok(map)
            }
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }?;

        Ok(map)
    }
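The `/fee-estimates` map above is keyed by confirmation target (in blocks) with fee rates in sat/vB as values. A minimal, hypothetical sketch (not part of the diff; `fee_for_target` is an invented helper) of one way a caller might pick a rate from that map, by taking the largest target that still meets the requested deadline:

```rust
use std::collections::HashMap;

// Illustrative helper: given the fee-estimates map and a desired
// confirmation target, return the cheapest rate whose target does not
// exceed the deadline (the largest eligible key).
fn fee_for_target(estimates: &HashMap<String, f64>, target: usize) -> Option<f64> {
    estimates
        .iter()
        // keys arrive as strings; keep only the numeric ones
        .filter_map(|(k, v)| k.parse::<usize>().ok().map(|k| (k, *v)))
        // only targets that confirm at least as fast as requested
        .filter(|(k, _)| *k <= target)
        // among those, the largest target carries the cheapest rate
        .max_by_key(|(k, _)| *k)
        .map(|(_, v)| v)
}
```

With `{"1": 30.0, "3": 20.0, "6": 10.0}`, a target of 4 blocks selects the 3-block rate.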

    fn _scripthash_txs(
        &self,
        script: &Script,
        last_seen: Option<Txid>,
    ) -> Result<Vec<Tx>, EsploraError> {
        let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
        let url = match last_seen {
            Some(last_seen) => format!(
                "{}/scripthash/{}/txs/chain/{}",
                self.url, script_hash, last_seen
            ),
            None => format!("{}/scripthash/{}/txs", self.url, script_hash),
        };
        Ok(self.agent.get(&url).call()?.into_json()?)
    }
}

fn is_status_not_found(status: u16) -> bool {
    status == 404
}

fn into_bytes(resp: Response) -> Result<Vec<u8>, io::Error> {
    const BYTES_LIMIT: usize = 10 * 1_024 * 1_024;

    let mut buf: Vec<u8> = vec![];
    resp.into_reader()
        .take((BYTES_LIMIT + 1) as u64)
        .read_to_end(&mut buf)?;
    if buf.len() > BYTES_LIMIT {
        return Err(io::Error::new(
            io::ErrorKind::Other,
            "response too big for into_bytes",
        ));
    }

    Ok(buf)
}
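The guard in `into_bytes` is a general pattern worth noting: reading `limit + 1` bytes and then checking the buffer length distinguishes "exactly at the limit" from "over the limit" without ever buffering an unbounded response body. A standalone, std-only sketch of the same idea (`read_limited` is an illustrative name, not from the diff):

```rust
use std::io::{self, Read};

// Read at most `limit + 1` bytes from any reader; a buffer longer than
// `limit` means the source exceeded the cap, so report an error instead
// of returning a silently truncated body.
fn read_limited<R: Read>(reader: R, limit: usize) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    reader.take((limit + 1) as u64).read_to_end(&mut buf)?;
    if buf.len() > limit {
        return Err(io::Error::new(io::ErrorKind::Other, "response too big"));
    }
    Ok(buf)
}
```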

impl ConfigurableBlockchain for EsploraBlockchain {
    type Config = super::EsploraBlockchainConfig;

    fn from_config(config: &Self::Config) -> Result<Self, Error> {
        let mut agent_builder = ureq::AgentBuilder::new();

        if let Some(timeout) = config.timeout {
            agent_builder = agent_builder.timeout(Duration::from_secs(timeout));
        }

        if let Some(proxy) = &config.proxy {
            agent_builder = agent_builder
                .proxy(Proxy::new(proxy).map_err(|e| Error::Esplora(Box::new(e.into())))?);
        }

        let mut blockchain = EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap)
            .with_agent(agent_builder.build());

        if let Some(concurrency) = config.concurrency {
            blockchain = blockchain.with_concurrency(concurrency);
        }

        Ok(blockchain)
    }
}

impl From<ureq::Error> for EsploraError {
    fn from(e: ureq::Error) -> Self {
        match e {
            ureq::Error::Status(code, _) => EsploraError::HttpResponse(code),
            e => EsploraError::Ureq(e),
        }
    }
}
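The `From<ureq::Error>` impl above is what lets the `?` operator in the methods earlier translate a transport error straight into the crate's error type. A minimal sketch of that pattern, with `HttpError` and `ApiError` as invented stand-ins for `ureq::Error` and `EsploraError`:

```rust
#[derive(Debug, PartialEq)]
enum HttpError {
    Status(u16),
    Transport,
}

#[derive(Debug, PartialEq)]
enum ApiError {
    HttpResponse(u16),
    Http(HttpError),
}

impl From<HttpError> for ApiError {
    fn from(e: HttpError) -> Self {
        match e {
            // a non-2xx status gets a dedicated variant, mirroring
            // EsploraError::HttpResponse above
            HttpError::Status(code) => ApiError::HttpResponse(code),
            e => ApiError::Http(e),
        }
    }
}

// `e.into()` here performs the same conversion the `?` operator applies
// automatically when the error types differ.
fn fetch(fail_with: Option<HttpError>) -> Result<u32, ApiError> {
    if let Some(e) = fail_with {
        return Err(e.into());
    }
    Ok(200)
}
```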
@@ -21,28 +21,18 @@ use std::ops::Deref;
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Arc;

use bitcoin::{BlockHash, Transaction, Txid};
use bitcoin::{Transaction, Txid};

use crate::database::BatchDatabase;
use crate::error::Error;
use crate::wallet::{wallet_name_from_descriptor, Wallet};
use crate::{FeeRate, KeychainKind};
use crate::FeeRate;

#[cfg(any(
    feature = "electrum",
    feature = "esplora",
    feature = "compact_filters",
    feature = "rpc"
))]
#[cfg(any(feature = "electrum", feature = "esplora"))]
pub(crate) mod utils;

#[cfg(any(feature = "electrum", feature = "esplora", feature = "compact_filters"))]
pub mod any;
mod script_sync;

#[cfg(any(
    feature = "electrum",
    feature = "esplora",
    feature = "compact_filters",
    feature = "rpc"
))]
#[cfg(any(feature = "electrum", feature = "esplora", feature = "compact_filters"))]
pub use any::{AnyBlockchain, AnyBlockchainConfig};

#[cfg(feature = "electrum")]
@@ -87,57 +77,28 @@ pub enum Capability {

/// Trait that defines the actions that must be supported by a blockchain backend
#[maybe_async]
pub trait Blockchain: WalletSync + GetHeight + GetTx + GetBlockHash {
pub trait Blockchain {
    /// Return the set of [`Capability`] supported by this backend
    fn get_capabilities(&self) -> HashSet<Capability>;
    /// Broadcast a transaction
    fn broadcast(&self, tx: &Transaction) -> Result<(), Error>;
    /// Estimate the fee rate required to confirm a transaction in a given `target` of blocks
    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error>;
}

/// Trait for getting the current height of the blockchain.
#[maybe_async]
pub trait GetHeight {
    /// Return the current height
    fn get_height(&self) -> Result<u32, Error>;
}

#[maybe_async]
/// Trait for getting a transaction by txid
pub trait GetTx {
    /// Fetch a transaction given its txid
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error>;
}

#[maybe_async]
/// Trait for getting a block hash by block height
pub trait GetBlockHash {
    /// Fetch a block hash given its height
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error>;
}

/// Trait for blockchains that can sync by updating the database directly.
#[maybe_async]
pub trait WalletSync {
    /// Setup the backend and populate the internal database for the first time
    ///
    /// This method is the equivalent of [`Self::wallet_sync`], but it's guaranteed to only be
    /// This method is the equivalent of [`Blockchain::sync`], but it's guaranteed to only be
    /// called once, at the first [`Wallet::sync`](crate::wallet::Wallet::sync).
    ///
    /// The rationale behind the distinction between `sync` and `setup` is that some custom backends
    /// might need to perform specific actions only the first time they are synced.
    ///
    /// For types that do not have that distinction, only this method can be implemented, since
    /// [`WalletSync::wallet_sync`] defaults to calling this internally if not overridden.
    /// Populate the internal database with transactions and UTXOs
    fn wallet_setup<D: BatchDatabase>(
    /// [`Blockchain::sync`] defaults to calling this internally if not overridden.
    fn setup<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
        progress_update: P,
    ) -> Result<(), Error>;

    /// If not overridden, it defaults to calling [`Self::wallet_setup`] internally.
    /// Populate the internal database with transactions and UTXOs
    ///
    /// If not overridden, it defaults to calling [`Blockchain::setup`] internally.
    ///
    /// This method should implement the logic required to iterate over the list of the wallet's
    /// script_pubkeys using [`Database::iter_script_pubkeys`] and look for relevant transactions
@@ -154,13 +115,23 @@ pub trait WalletSync {
    /// [`BatchOperations::set_tx`]: crate::database::BatchOperations::set_tx
    /// [`BatchOperations::set_utxo`]: crate::database::BatchOperations::set_utxo
    /// [`BatchOperations::del_utxo`]: crate::database::BatchOperations::del_utxo
    fn wallet_sync<D: BatchDatabase>(
    fn sync<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
        progress_update: P,
    ) -> Result<(), Error> {
        maybe_await!(self.wallet_setup(database, progress_update))
        maybe_await!(self.setup(database, progress_update))
    }

    /// Fetch a transaction from the blockchain given its txid
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error>;
    /// Broadcast a transaction
    fn broadcast(&self, tx: &Transaction) -> Result<(), Error>;

    /// Return the current height
    fn get_height(&self) -> Result<u32, Error>;
    /// Estimate the fee rate required to confirm a transaction in a given `target` of blocks
    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error>;
}
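The `sync`/`setup` split above relies on a provided trait method: `sync` defaults to delegating to `setup`, so a backend with no first-sync distinction implements only `setup`. A small sketch of that mechanism with invented names (`WalletSyncLike`, `SimpleBackend`):

```rust
// A provided (default) trait method lets `sync` fall back to `setup`
// unless a backend overrides it.
trait WalletSyncLike {
    fn setup(&self, log: &mut Vec<&'static str>);

    fn sync(&self, log: &mut Vec<&'static str>) {
        // default: the first-time logic is good enough for every later sync
        self.setup(log)
    }
}

struct SimpleBackend;

impl WalletSyncLike for SimpleBackend {
    // only `setup` is written; `sync` comes for free
    fn setup(&self, log: &mut Vec<&'static str>) {
        log.push("setup");
    }
}
```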

/// Trait for [`Blockchain`] types that can be created given a configuration
@@ -172,112 +143,12 @@ pub trait ConfigurableBlockchain: Blockchain + Sized {
    fn from_config(config: &Self::Config) -> Result<Self, Error>;
}

/// Trait for blockchains that don't contain any state
///
/// Stateless blockchains can be used to sync multiple wallets with different descriptors.
///
/// [`BlockchainFactory`] is automatically implemented for `Arc<T>` where `T` is a stateless
/// blockchain.
pub trait StatelessBlockchain: Blockchain {}

/// Trait for a factory of blockchains that share the underlying connection or configuration
#[cfg_attr(
    not(feature = "async-interface"),
    doc = r##"
## Example

This example shows how to sync multiple wallets and return the sum of their balances

```no_run
# use bdk::Error;
# use bdk::blockchain::*;
# use bdk::database::*;
# use bdk::wallet::*;
# use bdk::*;
fn sum_of_balances<B: BlockchainFactory>(blockchain_factory: B, wallets: &[Wallet<MemoryDatabase>]) -> Result<u64, Error> {
    Ok(wallets
        .iter()
        .map(|w| -> Result<_, Error> {
            blockchain_factory.sync_wallet(&w, None, SyncOptions::default())?;
            w.get_balance()
        })
        .collect::<Result<Vec<_>, _>>()?
        .into_iter()
        .sum())
}
```
"##
)]
pub trait BlockchainFactory {
    /// The type returned when building a blockchain from this factory
    type Inner: Blockchain;

    /// Build a new blockchain for the given descriptor wallet_name
    ///
    /// If `override_skip_blocks` is `None`, the returned blockchain will inherit the number of blocks
    /// from the factory. Since it's not possible to override the value to `None`, set it to
    /// `Some(0)` to rescan from the genesis.
    fn build(
        &self,
        wallet_name: &str,
        override_skip_blocks: Option<u32>,
    ) -> Result<Self::Inner, Error>;

    /// Build a new blockchain for a given wallet
    ///
    /// Internally uses [`wallet_name_from_descriptor`] to derive the name, and then calls
    /// [`BlockchainFactory::build`] to create the blockchain instance.
    fn build_for_wallet<D: BatchDatabase>(
        &self,
        wallet: &Wallet<D>,
        override_skip_blocks: Option<u32>,
    ) -> Result<Self::Inner, Error> {
        let wallet_name = wallet_name_from_descriptor(
            wallet.public_descriptor(KeychainKind::External)?.unwrap(),
            wallet.public_descriptor(KeychainKind::Internal)?,
            wallet.network(),
            wallet.secp_ctx(),
        )?;
        self.build(&wallet_name, override_skip_blocks)
    }

    /// Use [`BlockchainFactory::build_for_wallet`] to get a blockchain, then sync the wallet
    ///
    /// This can be used when a new blockchain would only be used to sync a wallet and then
    /// immediately dropped. Keep in mind that specific blockchain factories may perform slow
    /// operations to build a blockchain for a given wallet, so if a wallet needs to be synced
    /// often it's recommended to use [`BlockchainFactory::build_for_wallet`] to reuse the same
    /// blockchain multiple times.
    #[cfg(not(any(target_arch = "wasm32", feature = "async-interface")))]
    #[cfg_attr(
        docsrs,
        doc(cfg(not(any(target_arch = "wasm32", feature = "async-interface"))))
    )]
    fn sync_wallet<D: BatchDatabase>(
        &self,
        wallet: &Wallet<D>,
        override_skip_blocks: Option<u32>,
        sync_options: crate::wallet::SyncOptions,
    ) -> Result<(), Error> {
        let blockchain = self.build_for_wallet(wallet, override_skip_blocks)?;
        wallet.sync(&blockchain, sync_options)
    }
}

impl<T: StatelessBlockchain> BlockchainFactory for Arc<T> {
    type Inner = Self;

    fn build(&self, _wallet_name: &str, _override_skip_blocks: Option<u32>) -> Result<Self, Error> {
        Ok(Arc::clone(self))
    }
}
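The blanket impl above turns any `Arc<T>` of a stateless backend into its own factory: "building" is just cloning the pointer, so every wallet shares one connection. A sketch of the same shape with invented names (`Stateless`, `Factory`, `Backend`):

```rust
use std::sync::Arc;

// Marker trait: types that carry no per-wallet state.
trait Stateless {}

trait Factory {
    type Inner;
    fn build(&self) -> Self::Inner;
}

// Blanket impl: a shared, stateless backend hands out clones of itself
// instead of opening a new connection per wallet.
impl<T: Stateless> Factory for Arc<T> {
    type Inner = Arc<T>;

    fn build(&self) -> Arc<T> {
        Arc::clone(self) // cheap: only bumps a reference count
    }
}

struct Backend;
impl Stateless for Backend {}
```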

/// Data sent with a progress update over a [`channel`]
pub type ProgressData = (f32, Option<String>);

/// Trait for types that can receive and process progress updates during [`WalletSync::wallet_sync`] and
/// [`WalletSync::wallet_setup`]
pub trait Progress: Send + 'static + core::fmt::Debug {
/// Trait for types that can receive and process progress updates during [`Blockchain::sync`] and
/// [`Blockchain::setup`]
pub trait Progress: Send {
    /// Send a new progress update
    ///
    /// The `progress` value should be in the range 0.0 - 100.0, and the `message` value is an
@@ -302,7 +173,7 @@ impl Progress for Sender<ProgressData> {
}

/// Type that implements [`Progress`] and drops every update received
#[derive(Clone, Copy, Default, Debug)]
#[derive(Clone, Copy)]
pub struct NoopProgress;

/// Create a new instance of [`NoopProgress`]
@@ -317,10 +188,10 @@ impl Progress for NoopProgress {
}

/// Type that implements [`Progress`] and logs at level `INFO` every update received
#[derive(Clone, Copy, Default, Debug)]
#[derive(Clone, Copy)]
pub struct LogProgress;

/// Create a new instance of [`LogProgress`]
/// Create a new instance of [`LogProgress`]
pub fn log_progress() -> LogProgress {
    LogProgress
}
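`ProgressData` shows how progress reporting stays decoupled: an update is just a `(percentage, optional message)` tuple, and `impl Progress for Sender<ProgressData>` pushes it over an mpsc channel for any receiver thread to render. A std-only sketch of that flow (`report` is an illustrative free function, not the trait method):

```rust
use std::sync::mpsc::{channel, Sender};

// Same shape as the ProgressData alias above.
type ProgressData = (f32, Option<String>);

// Push a progress update into the channel; the receiving end decides how
// to display it.
fn report(sender: &Sender<ProgressData>, progress: f32, message: Option<String>) {
    // a real implementation would validate that progress is in 0.0..=100.0
    let _ = sender.send((progress, message));
}
```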
@@ -343,51 +214,33 @@ impl<T: Blockchain> Blockchain for Arc<T> {
        maybe_await!(self.deref().get_capabilities())
    }

    fn setup<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: P,
    ) -> Result<(), Error> {
        maybe_await!(self.deref().setup(database, progress_update))
    }

    fn sync<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: P,
    ) -> Result<(), Error> {
        maybe_await!(self.deref().sync(database, progress_update))
    }

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        maybe_await!(self.deref().get_tx(txid))
    }
    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        maybe_await!(self.deref().broadcast(tx))
    }

    fn get_height(&self) -> Result<u32, Error> {
        maybe_await!(self.deref().get_height())
    }
    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
        maybe_await!(self.deref().estimate_fee(target))
    }
}

#[maybe_async]
impl<T: GetTx> GetTx for Arc<T> {
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        maybe_await!(self.deref().get_tx(txid))
    }
}

#[maybe_async]
impl<T: GetHeight> GetHeight for Arc<T> {
    fn get_height(&self) -> Result<u32, Error> {
        maybe_await!(self.deref().get_height())
    }
}

#[maybe_async]
impl<T: GetBlockHash> GetBlockHash for Arc<T> {
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
        maybe_await!(self.deref().get_block_hash(height))
    }
}

#[maybe_async]
impl<T: WalletSync> WalletSync for Arc<T> {
    fn wallet_setup<D: BatchDatabase>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
    ) -> Result<(), Error> {
        maybe_await!(self.deref().wallet_setup(database, progress_update))
    }

    fn wallet_sync<D: BatchDatabase>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
    ) -> Result<(), Error> {
        maybe_await!(self.deref().wallet_sync(database, progress_update))
    }
}

@@ -18,12 +18,10 @@
//! ## Example
//!
//! ```no_run
//! # use bdk::blockchain::{RpcConfig, RpcBlockchain, ConfigurableBlockchain, rpc::Auth};
//! # use bdk::blockchain::{RpcConfig, RpcBlockchain, ConfigurableBlockchain};
//! let config = RpcConfig {
//!     url: "127.0.0.1:18332".to_string(),
//!     auth: Auth::Cookie {
//!         file: "/home/user/.bitcoin/.cookie".into(),
//!     },
//!     auth: bitcoincore_rpc::Auth::CookieFile("/home/user/.bitcoin/.cookie".into()),
//!     network: bdk::bitcoin::Network::Testnet,
//!     wallet_name: "wallet_name".to_string(),
//!     skip_blocks: None,
@@ -32,23 +30,21 @@
//! ```

use crate::bitcoin::consensus::deserialize;
use crate::bitcoin::hashes::hex::ToHex;
use crate::bitcoin::{Address, Network, OutPoint, Transaction, TxOut, Txid};
use crate::blockchain::*;
use crate::blockchain::{Blockchain, Capability, ConfigurableBlockchain, Progress};
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::descriptor::get_checksum;
use crate::{BlockTime, Error, FeeRate, KeychainKind, LocalUtxo, TransactionDetails};
use crate::descriptor::{get_checksum, IntoWalletDescriptor};
use crate::wallet::utils::SecpCtx;
use crate::{ConfirmationTime, Error, FeeRate, KeychainKind, LocalUtxo, TransactionDetails};
use bitcoincore_rpc::json::{
    GetAddressInfoResultLabel, ImportMultiOptions, ImportMultiRequest,
    ImportMultiRequestScriptPubkey, ImportMultiRescanSince,
};
use bitcoincore_rpc::jsonrpc::serde_json::{json, Value};
use bitcoincore_rpc::Auth as RpcAuth;
use bitcoincore_rpc::{Client, RpcApi};
use bitcoincore_rpc::jsonrpc::serde_json::Value;
use bitcoincore_rpc::{Auth, Client, RpcApi};
use log::debug;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
use std::str::FromStr;

/// The main struct for RPC backend implementing the [crate::blockchain::Blockchain] trait
@@ -56,8 +52,8 @@ use std::str::FromStr;
pub struct RpcBlockchain {
    /// Rpc client to the node, includes the wallet name
    client: Client,
    /// Whether the wallet is a "descriptor" or "legacy" wallet in Core
    is_descriptors: bool,
    /// Network used
    network: Network,
    /// Blockchain capabilities, cached here at startup
    capabilities: HashSet<Capability>,
    /// Skip this many blocks of the blockchain at the first rescan, if None the rescan is done from the genesis block
@@ -68,7 +64,7 @@ pub struct RpcBlockchain {
}

/// RpcBlockchain configuration options
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[derive(Debug)]
pub struct RpcConfig {
    /// The bitcoin node url
    pub url: String,
@@ -76,45 +72,12 @@ pub struct RpcConfig {
    pub auth: Auth,
    /// The network we are using (it will be checked the bitcoin node network matches this)
    pub network: Network,
    /// The wallet name in the bitcoin node, consider using [crate::wallet::wallet_name_from_descriptor] for this
    /// The wallet name in the bitcoin node, consider using [wallet_name_from_descriptor] for this
    pub wallet_name: String,
    /// Skip this many blocks of the blockchain at the first rescan, if None the rescan is done from the genesis block
    pub skip_blocks: Option<u32>,
}

/// This struct is equivalent to [bitcoincore_rpc::Auth] but it implements [serde::Serialize]
/// To be removed once the upstream equivalent implements Serialize (the JSON serialization format
/// should be the same), see [rust-bitcoincore-rpc/pull/181](https://github.com/rust-bitcoin/rust-bitcoincore-rpc/pull/181)
#[derive(Clone, Debug, Hash, Eq, PartialEq, Ord, PartialOrd, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(untagged)]
pub enum Auth {
    /// None authentication
    None,
    /// Authentication with username and password, usually [Auth::Cookie] should be preferred
    UserPass {
        /// Username
        username: String,
        /// Password
        password: String,
    },
    /// Authentication with a cookie file
    Cookie {
        /// Cookie file
        file: PathBuf,
    },
}

impl From<Auth> for RpcAuth {
    fn from(auth: Auth) -> Self {
        match auth {
            Auth::None => RpcAuth::None,
            Auth::UserPass { username, password } => RpcAuth::UserPass(username, password),
            Auth::Cookie { file } => RpcAuth::CookieFile(file),
        }
    }
}

impl RpcBlockchain {
    fn get_node_synced_height(&self) -> Result<u32, Error> {
        let info = self.client.get_address_info(&self._storage_address)?;
@@ -141,45 +104,10 @@ impl Blockchain for RpcBlockchain {
        self.capabilities.clone()
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        Ok(self.client.send_raw_transaction(tx).map(|_| ())?)
    }

    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
        let sat_per_kb = self
            .client
            .estimate_smart_fee(target as u16, None)?
            .fee_rate
            .ok_or(Error::FeeRateUnavailable)?
            .as_sat() as f64;

        Ok(FeeRate::from_sat_per_vb((sat_per_kb / 1000f64) as f32))
    }
}

impl GetTx for RpcBlockchain {
    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(Some(self.client.get_raw_transaction(txid, None)?))
    }
}

impl GetHeight for RpcBlockchain {
    fn get_height(&self) -> Result<u32, Error> {
        Ok(self.client.get_blockchain_info().map(|i| i.blocks as u32)?)
    }
}

impl GetBlockHash for RpcBlockchain {
    fn get_block_hash(&self, height: u64) -> Result<BlockHash, Error> {
        Ok(self.client.get_block_hash(height)?)
    }
}

impl WalletSync for RpcBlockchain {
    fn wallet_setup<D: BatchDatabase>(
    fn setup<D: BatchDatabase, P: 'static + Progress>(
        &self,
        database: &mut D,
        progress_update: Box<dyn Progress>,
        progress_update: P,
    ) -> Result<(), Error> {
        let mut scripts_pubkeys = database.iter_script_pubkeys(Some(KeychainKind::External))?;
        scripts_pubkeys.extend(database.iter_script_pubkeys(Some(KeychainKind::Internal))?);
@@ -187,81 +115,47 @@ impl WalletSync for RpcBlockchain {
            "importing {} script_pubkeys (some maybe already imported)",
            scripts_pubkeys.len()
        );
        let requests: Vec<_> = scripts_pubkeys
            .iter()
            .map(|s| ImportMultiRequest {
                timestamp: ImportMultiRescanSince::Timestamp(0),
                script_pubkey: Some(ImportMultiRequestScriptPubkey::Script(&s)),
                watchonly: Some(true),
                ..Default::default()
            })
            .collect();
        let options = ImportMultiOptions {
            rescan: Some(false),
        };
        // Note we use import_multi because as of bitcoin core 0.21.0 many descriptors are not supported
        // https://bitcoindevkit.org/descriptors/#compatibility-matrix
        //TODO maybe convenient using import_descriptor for compatible descriptor and import_multi as fallback
        self.client.import_multi(&requests, Some(&options))?;

        if self.is_descriptors {
            // Core still doesn't support complex descriptors like BDK, but when the wallet type is
            // "descriptors" we should import individual addresses using `importdescriptors` rather
            // than `importmulti`, using the `raw()` descriptor which allows us to specify an
            // arbitrary script
            let requests = Value::Array(
                scripts_pubkeys
                    .iter()
                    .map(|s| {
                        let desc = format!("raw({})", s.to_hex());
                        json!({
                            "timestamp": "now",
                            "desc": format!("{}#{}", desc, get_checksum(&desc).unwrap()),
                        })
                    })
                    .collect(),
            );
        let current_height = self.get_height()?;

            let res: Vec<Value> = self.client.call("importdescriptors", &[requests])?;
            res.into_iter()
                .map(|v| match v["success"].as_bool() {
                    Some(true) => Ok(()),
                    Some(false) => Err(Error::Generic(
                        v["error"]["message"]
                            .as_str()
                            .unwrap_or("Unknown error")
                            .to_string(),
                    )),
                    _ => Err(Error::Generic("Unexpected response from Core".to_string())),
                })
                .collect::<Result<Vec<_>, _>>()?;
        } else {
            let requests: Vec<_> = scripts_pubkeys
                .iter()
                .map(|s| ImportMultiRequest {
                    timestamp: ImportMultiRescanSince::Timestamp(0),
                    script_pubkey: Some(ImportMultiRequestScriptPubkey::Script(s)),
                    watchonly: Some(true),
                    ..Default::default()
                })
                .collect();
            let options = ImportMultiOptions {
                rescan: Some(false),
            };
            self.client.import_multi(&requests, Some(&options))?;
        }
        // min because block invalidate may cause height to go down
        let node_synced = self.get_node_synced_height()?.min(current_height);

        loop {
        let current_height = self.get_height()?;
            //TODO call rescan in chunks (updating node_synced_height) so that in case of
            // interruption work can be partially recovered
        debug!(
            "rescan_blockchain from:{} to:{}",
            node_synced, current_height
        );
        self.client
            .rescan_blockchain(Some(node_synced as usize), Some(current_height as usize))?;
        progress_update.update(1.0, None)?;

        // min because block invalidate may cause height to go down
        let node_synced = self.get_node_synced_height()?.min(current_height);
        self.set_node_synced_height(current_height)?;

            let sync_up_to = node_synced.saturating_add(10_000).min(current_height);

            debug!("rescan_blockchain from:{} to:{}", node_synced, sync_up_to);
            self.client
                .rescan_blockchain(Some(node_synced as usize), Some(sync_up_to as usize))?;
            progress_update.update((sync_up_to as f32) / (current_height as f32), None)?;

            self.set_node_synced_height(sync_up_to)?;

            if sync_up_to == current_height {
                break;
            }
        }

        self.wallet_sync(database, progress_update)
        self.sync(database, progress_update)
    }
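The rescan loop above advances in 10,000-block chunks clamped to the tip, persisting the synced height after each chunk so an interrupted rescan can resume. Its arithmetic can be checked in isolation; `rescan_spans` below is an illustrative helper (not from the diff) that returns the `(from, to)` span of each `rescan_blockchain` call the loop would issue:

```rust
// Reproduce the chunked-rescan bookkeeping: saturating_add avoids overflow,
// min clamps to the current tip, and the loop ends once a chunk reaches it.
fn rescan_spans(mut node_synced: u32, current_height: u32, chunk: u32) -> Vec<(u32, u32)> {
    let mut spans = Vec::new();
    loop {
        let sync_up_to = node_synced.saturating_add(chunk).min(current_height);
        spans.push((node_synced, sync_up_to));
        node_synced = sync_up_to; // the real code calls set_node_synced_height here
        if sync_up_to == current_height {
            break;
        }
    }
    spans
}
```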

    fn wallet_sync<D: BatchDatabase>(
    fn sync<D: BatchDatabase, P: 'static + Progress>(
        &self,
        db: &mut D,
        _progress_update: Box<dyn Progress>,
        _progress_update: P,
    ) -> Result<(), Error> {
        let mut indexes = HashMap::new();
        for keykind in &[KeychainKind::External, KeychainKind::Internal] {
@@ -288,7 +182,7 @@ impl WalletSync for RpcBlockchain {
        let mut list_txs_ids = HashSet::new();

        for tx_result in list_txs.iter().filter(|t| {
            // list_txs returns all conflicting txs, we want to
            // list_txs returns all conflicting tx we want to
            // filter out replaced tx => unconfirmed and not in the mempool
            t.info.confirmations > 0 || self.client.get_mempool_entry(&t.info.txid).is_ok()
        }) {
@@ -296,7 +190,7 @@ impl WalletSync for RpcBlockchain {
            list_txs_ids.insert(txid);
            if let Some(mut known_tx) = known_txs.get_mut(&txid) {
                let confirmation_time =
                    BlockTime::new(tx_result.info.blockheight, tx_result.info.blocktime);
                    ConfirmationTime::new(tx_result.info.blockheight, tx_result.info.blocktime);
                if confirmation_time != known_tx.confirmation_time {
                    // reorg may change tx height
                    debug!(
@@ -304,7 +198,7 @@ impl WalletSync for RpcBlockchain {
                        txid, confirmation_time
                    );
                    known_tx.confirmation_time = confirmation_time;
                    db.set_tx(known_tx)?;
                    db.set_tx(&known_tx)?;
                }
            } else {
                //TODO check there is already the raw tx in db?
@@ -325,22 +219,21 @@ impl WalletSync for RpcBlockchain {

                for input in tx.input.iter() {
                    if let Some(previous_output) = db.get_previous_output(&input.previous_output)? {
                        if db.is_mine(&previous_output.script_pubkey)? {
                            sent += previous_output.value;
                        }
                        sent += previous_output.value;
                    }
                }

                let td = TransactionDetails {
                    transaction: Some(tx),
                    txid: tx_result.info.txid,
                    confirmation_time: BlockTime::new(
                    confirmation_time: ConfirmationTime::new(
                        tx_result.info.blockheight,
                        tx_result.info.blocktime,
                    ),
                    received,
                    sent,
                    fee: tx_result.fee.map(|f| f.as_sat().unsigned_abs()),
                    fee: tx_result.fee.map(|f| f.as_sat().abs() as u64),
                    verified: true,
                };
                debug!(
                    "saving tx: {} tx_result.fee:{:?} td.fees:{:?}",
@@ -357,37 +250,32 @@ impl WalletSync for RpcBlockchain {
            }
        }

        // Filter out transactions that are for script pubkeys that aren't in this wallet.
        let current_utxos = current_utxo
        let current_utxos: HashSet<_> = current_utxo
            .into_iter()
            .filter_map(
                |u| match db.get_path_from_script_pubkey(&u.script_pub_key) {
                    Err(e) => Some(Err(e)),
                    Ok(None) => None,
                    Ok(Some(path)) => Some(Ok(LocalUtxo {
                        outpoint: OutPoint::new(u.txid, u.vout),
                        keychain: path.0,
                        txout: TxOut {
                            value: u.amount.as_sat(),
                            script_pubkey: u.script_pub_key,
                        },
                        is_spent: false,
                    })),
                },
            )
            .collect::<Result<HashSet<_>, Error>>()?;
            .map(|u| {
                Ok(LocalUtxo {
                    outpoint: OutPoint::new(u.txid, u.vout),
                    keychain: db
                        .get_path_from_script_pubkey(&u.script_pub_key)?
                        .ok_or(Error::TransactionNotFound)?
                        .0,
                    txout: TxOut {
                        value: u.amount.as_sat(),
                        script_pubkey: u.script_pub_key,
                    },
                })
            })
            .collect::<Result<_, Error>>()?;

        let spent: HashSet<_> = known_utxos.difference(&current_utxos).collect();
        for utxo in spent {
            debug!("setting as spent utxo: {:?}", utxo);
            let mut spent_utxo = utxo.clone();
            spent_utxo.is_spent = true;
            db.set_utxo(&spent_utxo)?;
        for s in spent {
            debug!("removing utxo: {:?}", s);
            db.del_utxo(&s.outpoint)?;
        }
        let received: HashSet<_> = current_utxos.difference(&known_utxos).collect();
        for utxo in received {
            debug!("adding utxo: {:?}", utxo);
            db.set_utxo(utxo)?;
        for s in received {
            debug!("adding utxo: {:?}", s);
            db.set_utxo(s)?;
        }
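The spent/received bookkeeping above is plain set arithmetic: diffing the node's current UTXO set against the wallet database yields exactly what to remove and what to add. A sketch with `u32` outpoints standing in for `LocalUtxo` (names and the sorting are illustrative):

```rust
use std::collections::HashSet;

// known \ current = spent (to delete); current \ known = received (to add).
fn diff_utxos(known: &HashSet<u32>, current: &HashSet<u32>) -> (Vec<u32>, Vec<u32>) {
    let mut spent: Vec<u32> = known.difference(current).copied().collect();
    let mut received: Vec<u32> = current.difference(known).copied().collect();
    // sort only so the result is deterministic for callers (and tests)
    spent.sort_unstable();
    received.sort_unstable();
    (spent, received)
}
```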

        for (keykind, index) in indexes {
@@ -397,6 +285,29 @@ impl WalletSync for RpcBlockchain {

        Ok(())
    }

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
        Ok(Some(self.client.get_raw_transaction(txid, None)?))
    }

    fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
        Ok(self.client.send_raw_transaction(tx).map(|_| ())?)
    }

    fn get_height(&self) -> Result<u32, Error> {
        Ok(self.client.get_blockchain_info().map(|i| i.blocks as u32)?)
    }

    fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
        let sat_per_kb = self
            .client
            .estimate_smart_fee(target as u16, None)?
            .fee_rate
            .ok_or(Error::FeeRateUnavailable)?
            .as_sat() as f64;

        Ok(FeeRate::from_sat_per_vb((sat_per_kb / 1000f64) as f32))
    }
}
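The `estimate_fee` body hinges on a unit conversion: Core's `estimatesmartfee` reports a fee per 1,000 virtual bytes, which the code reads in satoshis and divides by 1000 to get the sat/vB value `FeeRate::from_sat_per_vb` expects. The conversion in isolation (`sat_per_kvb_to_sat_per_vb` is an illustrative name):

```rust
// sat per 1000 vbytes -> sat per vbyte, narrowing to f32 as FeeRate does.
fn sat_per_kvb_to_sat_per_vb(sat_per_kvb: f64) -> f32 {
    (sat_per_kvb / 1000f64) as f32
}
```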

impl ConfigurableBlockchain for RpcBlockchain {
@@ -409,37 +320,21 @@ impl ConfigurableBlockchain for RpcBlockchain {
        let wallet_url = format!("{}/wallet/{}", config.url, &wallet_name);
        debug!("connecting to {} auth:{:?}", wallet_url, config.auth);

        let client = Client::new(wallet_url.as_str(), config.auth.clone().into())?;
        let rpc_version = client.version()?;

        let client = Client::new(wallet_url, config.auth.clone())?;
        let loaded_wallets = client.list_wallets()?;
        if loaded_wallets.contains(&wallet_name) {
            debug!("wallet already loaded {:?}", wallet_name);
        } else if list_wallet_dir(&client)?.contains(&wallet_name) {
            client.load_wallet(&wallet_name)?;
            debug!("wallet loaded {:?}", wallet_name);
        } else {
            // pre-0.21 use legacy wallets
            if rpc_version < 210_000 {
                client.create_wallet(&wallet_name, Some(true), None, None, None)?;
            let existing_wallets = list_wallet_dir(&client)?;
            if existing_wallets.contains(&wallet_name) {
                client.load_wallet(&wallet_name)?;
                debug!("wallet loaded {:?}", wallet_name);
            } else {
                // TODO: move back to api call when https://github.com/rust-bitcoin/rust-bitcoincore-rpc/issues/225 is closed
                let args = [
                    Value::String(wallet_name.clone()),
                    Value::Bool(true),
                    Value::Bool(false),
                    Value::Null,
                    Value::Bool(false),
                    Value::Bool(true),
                ];
                let _: Value = client.call("createwallet", &args)?;
                client.create_wallet(&wallet_name, Some(true), None, None, None)?;
                debug!("wallet created {:?}", wallet_name);
            }

            debug!("wallet created {:?}", wallet_name);
        }
||||
|
||||
let is_descriptors = is_wallet_descriptor(&client)?;
|
||||
|
||||
let blockchain_info = client.get_blockchain_info()?;
|
||||
let network = match blockchain_info.chain.as_str() {
|
||||
"main" => Network::Bitcoin,
|
||||
@@ -456,6 +351,7 @@ impl ConfigurableBlockchain for RpcBlockchain {
|
||||
}
|
||||
|
||||
let mut capabilities: HashSet<_> = vec![Capability::FullHistory].into_iter().collect();
|
||||
let rpc_version = client.version()?;
|
||||
if rpc_version >= 210_000 {
|
||||
let info: HashMap<String, Value> = client.call("getindexinfo", &[]).unwrap();
|
||||
if info.contains_key("txindex") {
|
||||
@@ -471,14 +367,43 @@ impl ConfigurableBlockchain for RpcBlockchain {
|
||||
|
||||
Ok(RpcBlockchain {
|
||||
client,
|
||||
network,
|
||||
capabilities,
|
||||
is_descriptors,
|
||||
_storage_address: storage_address,
|
||||
skip_blocks: config.skip_blocks,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
/// Deterministically generate a unique name given the descriptors defining the wallet
|
||||
pub fn wallet_name_from_descriptor<T>(
|
||||
descriptor: T,
|
||||
change_descriptor: Option<T>,
|
||||
network: Network,
|
||||
secp: &SecpCtx,
|
||||
) -> Result<String, Error>
|
||||
where
|
||||
T: IntoWalletDescriptor,
|
||||
{
|
||||
//TODO check descriptors contains only public keys
|
||||
let descriptor = descriptor
|
||||
.into_wallet_descriptor(&secp, network)?
|
||||
.0
|
||||
.to_string();
|
||||
let mut wallet_name = get_checksum(&descriptor[..descriptor.find('#').unwrap()])?;
|
||||
if let Some(change_descriptor) = change_descriptor {
|
||||
let change_descriptor = change_descriptor
|
||||
.into_wallet_descriptor(&secp, network)?
|
||||
.0
|
||||
.to_string();
|
||||
wallet_name.push_str(
|
||||
get_checksum(&change_descriptor[..change_descriptor.find('#').unwrap()])?.as_str(),
|
||||
);
|
||||
}
|
||||
|
||||
Ok(wallet_name)
|
||||
}
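`wallet_name_from_descriptor` derives a stable wallet name by concatenating the checksums of the descriptor strings (everything before the `#` suffix). A toy sketch of the same shape, using a made-up hash in place of BDK's `get_checksum` (names and hash are illustrative only):

```rust
// Made-up stand-in for BDK's descriptor checksum -- illustration only.
fn toy_checksum(s: &str) -> String {
    format!(
        "{:08x}",
        s.bytes()
            .fold(0u32, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u32))
    )
}

// Same shape as wallet_name_from_descriptor: hash the part before '#',
// then append the change descriptor's checksum if there is one.
fn toy_wallet_name(descriptor: &str, change_descriptor: Option<&str>) -> String {
    let base = &descriptor[..descriptor.find('#').unwrap_or(descriptor.len())];
    let mut name = toy_checksum(base);
    if let Some(change) = change_descriptor {
        let base = &change[..change.find('#').unwrap_or(change.len())];
        name.push_str(&toy_checksum(base));
    }
    name
}

fn main() {
    // deterministic: the '#checksum' suffix is ignored, so these two agree
    assert_eq!(
        toy_wallet_name("wpkh(key)#abcd", None),
        toy_wallet_name("wpkh(key)#efgh", None)
    );
    println!("{}", toy_wallet_name("wpkh(key)", Some("wpkh(change)")));
}
```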

/// return the wallets available in the default wallet directory
//TODO use bitcoincore_rpc method when PR #179 lands
fn list_wallet_dir(client: &Client) -> Result<Vec<String>, Error> {
@@ -495,141 +420,18 @@ fn list_wallet_dir(client: &Client) -> Result<Vec<String>, Error> {
Ok(result.wallets.into_iter().map(|n| n.name).collect())
}

/// Returns whether a wallet is legacy or descriptors by calling `getwalletinfo`.
///
/// This API is mapped by bitcoincore_rpc, but it doesn't have the fields we need (either
/// "descriptors" or "format") so we have to call the RPC manually
fn is_wallet_descriptor(client: &Client) -> Result<bool, Error> {
#[derive(Deserialize)]
struct CallResult {
descriptors: Option<bool>,
}

let result: CallResult = client.call("getwalletinfo", &[])?;
Ok(result.descriptors.unwrap_or(false))
}

/// Factory of [`RpcBlockchain`] instances, implements [`BlockchainFactory`]
///
/// Internally caches the node url and authentication params and allows getting many different [`RpcBlockchain`]
/// objects for different wallet names and with different rescan heights.
///
/// ## Example
///
/// ```no_run
/// # use bdk::bitcoin::Network;
/// # use bdk::blockchain::BlockchainFactory;
/// # use bdk::blockchain::rpc::{Auth, RpcBlockchainFactory};
/// # fn main() -> Result<(), Box<dyn std::error::Error>> {
/// let factory = RpcBlockchainFactory {
/// url: "http://127.0.0.1:18332".to_string(),
/// auth: Auth::Cookie {
/// file: "/home/user/.bitcoin/.cookie".into(),
/// },
/// network: Network::Testnet,
/// wallet_name_prefix: Some("prefix-".to_string()),
/// default_skip_blocks: 100_000,
/// };
/// let main_wallet_blockchain = factory.build("main_wallet", Some(200_000))?;
/// # Ok(())
/// # }
/// ```
#[derive(Debug, Clone)]
pub struct RpcBlockchainFactory {
/// The bitcoin node url
pub url: String,
/// The bitcoin node authentication mechanism
pub auth: Auth,
/// The network we are using (it will be checked that the bitcoin node network matches this)
pub network: Network,
/// The optional prefix used to build the full wallet name for blockchains
pub wallet_name_prefix: Option<String>,
/// Default number of blocks to skip which will be inherited by blockchains unless overridden
pub default_skip_blocks: u32,
}

impl BlockchainFactory for RpcBlockchainFactory {
type Inner = RpcBlockchain;

fn build(
&self,
checksum: &str,
override_skip_blocks: Option<u32>,
) -> Result<Self::Inner, Error> {
RpcBlockchain::from_config(&RpcConfig {
url: self.url.clone(),
auth: self.auth.clone(),
network: self.network,
wallet_name: format!(
"{}{}",
self.wallet_name_prefix.as_ref().unwrap_or(&String::new()),
checksum
),
skip_blocks: Some(override_skip_blocks.unwrap_or(self.default_skip_blocks)),
})
}
}

#[cfg(test)]
#[cfg(any(feature = "test-rpc", feature = "test-rpc-legacy"))]
mod test {
use super::*;
use crate::testutils::blockchain_tests::TestClient;
#[cfg(feature = "test-rpc")]
crate::bdk_blockchain_tests! {

use bitcoin::Network;
use bitcoincore_rpc::RpcApi;

crate::bdk_blockchain_tests! {
fn test_instance(test_client: &TestClient) -> RpcBlockchain {
let config = RpcConfig {
url: test_client.bitcoind.rpc_url(),
auth: Auth::Cookie { file: test_client.bitcoind.params.cookie_file.clone() },
network: Network::Regtest,
wallet_name: format!("client-wallet-test-{}", std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_nanos() ),
skip_blocks: None,
};
RpcBlockchain::from_config(&config).unwrap()
}
}

fn get_factory() -> (TestClient, RpcBlockchainFactory) {
let test_client = TestClient::default();

let factory = RpcBlockchainFactory {
fn test_instance(test_client: &TestClient) -> RpcBlockchain {
let config = RpcConfig {
url: test_client.bitcoind.rpc_url(),
auth: Auth::Cookie {
file: test_client.bitcoind.params.cookie_file.clone(),
},
auth: Auth::CookieFile(test_client.bitcoind.params.cookie_file.clone()),
network: Network::Regtest,
wallet_name_prefix: Some("prefix-".into()),
default_skip_blocks: 0,
wallet_name: format!("client-wallet-test-{:?}", std::time::SystemTime::now() ),
skip_blocks: None,
};

(test_client, factory)
}

#[test]
fn test_rpc_blockchain_factory() {
let (_test_client, factory) = get_factory();

let a = factory.build("aaaaaa", None).unwrap();
assert_eq!(a.skip_blocks, Some(0));
assert_eq!(
a.client
.get_wallet_info()
.expect("Node connection isn't working")
.wallet_name,
"prefix-aaaaaa"
);

let b = factory.build("bbbbbb", Some(100)).unwrap();
assert_eq!(b.skip_blocks, Some(100));
assert_eq!(
b.client
.get_wallet_info()
.expect("Node connection isn't working")
.wallet_name,
"prefix-bbbbbb"
);
RpcBlockchain::from_config(&config).unwrap()
}
}

@@ -1,433 +0,0 @@
/*!
This models how a sync happens when you have a server that you send your script pubkeys to and it
returns the associated transactions, i.e. electrum.
*/
#![allow(dead_code)]
use crate::{
database::{BatchDatabase, BatchOperations, DatabaseUtils},
wallet::time::Instant,
BlockTime, Error, KeychainKind, LocalUtxo, TransactionDetails,
};
use bitcoin::{OutPoint, Script, Transaction, TxOut, Txid};
use log::*;
use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet, VecDeque};

/// A request for on-chain information
pub enum Request<'a, D: BatchDatabase> {
/// A request for transactions related to script pubkeys.
Script(ScriptReq<'a, D>),
/// A request for confirmation times for some transactions.
Conftime(ConftimeReq<'a, D>),
/// A request for full transaction details of some transactions.
Tx(TxReq<'a, D>),
/// Requests are finished; here's a batch database update to reflect the data gathered.
Finish(D::Batch),
}

/// Starts a sync.
pub fn start<D: BatchDatabase>(db: &D, stop_gap: usize) -> Result<Request<'_, D>, Error> {
use rand::seq::SliceRandom;
let mut keychains = vec![KeychainKind::Internal, KeychainKind::External];
// shuffling improves privacy: the server doesn't know whether my first request is for my internal or external addresses
keychains.shuffle(&mut rand::thread_rng());
let keychain = keychains.pop().unwrap();
let scripts_needed = db
.iter_script_pubkeys(Some(keychain))?
.into_iter()
.collect();
let state = State::new(db);

Ok(Request::Script(ScriptReq {
state,
scripts_needed,
script_index: 0,
stop_gap,
keychain,
next_keychains: keychains,
}))
}
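The module doc above describes a pull-based sync: the caller keeps answering whatever `Request` comes back until `Finish` hands over the database batch. A stripped-down sketch of that driving loop, with toy types standing in for the real `Request`/`ScriptReq`/`TxReq`/`ConftimeReq` (the counters and transitions are illustrative, not BDK's):

```rust
// Toy mirror of the Request flow: Script -> Tx -> Conftime -> Finish.
enum Req {
    Script(usize),             // script queries left to answer
    Tx(usize),                 // full txs left to fetch
    Conftime(usize),           // confirmation times left to fetch
    Finish(Vec<&'static str>), // the final "batch" of db operations
}

// The caller "satisfies" each request in turn until the batch comes out.
fn drive(mut req: Req) -> Vec<&'static str> {
    loop {
        req = match req {
            Req::Script(0) => Req::Tx(2),
            Req::Script(n) => Req::Script(n - 1),
            Req::Tx(0) => Req::Conftime(1),
            Req::Tx(n) => Req::Tx(n - 1),
            Req::Conftime(0) => Req::Finish(vec!["set_tx", "set_utxo"]),
            Req::Conftime(n) => Req::Conftime(n - 1),
            Req::Finish(batch) => return batch,
        };
    }
}

fn main() {
    assert_eq!(drive(Req::Script(3)), vec!["set_tx", "set_utxo"]);
}
```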

pub struct ScriptReq<'a, D: BatchDatabase> {
state: State<'a, D>,
script_index: usize,
scripts_needed: VecDeque<Script>,
stop_gap: usize,
keychain: KeychainKind,
next_keychains: Vec<KeychainKind>,
}

/// The sync starts by returning script pubkeys we are interested in.
impl<'a, D: BatchDatabase> ScriptReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Script> + Clone {
self.scripts_needed.iter()
}

pub fn satisfy(
mut self,
// we want to know the txids associated with the script and their height
txids: Vec<Vec<(Txid, Option<u32>)>>,
) -> Result<Request<'a, D>, Error> {
for (txid_list, script) in txids.iter().zip(self.scripts_needed.iter()) {
debug!(
"found {} transactions for script pubkey {}",
txid_list.len(),
script
);
if !txid_list.is_empty() {
// the address is active
self.state
.last_active_index
.insert(self.keychain, self.script_index);
}

for (txid, height) in txid_list {
// have we seen this txid already?
match self.state.db.get_tx(txid, true)? {
Some(mut details) => {
let old_height = details.confirmation_time.as_ref().map(|x| x.height);
match (old_height, height) {
(None, Some(_)) => {
// It looks like the tx has confirmed since we last saw it -- we
// need to know the confirmation time.
self.state.tx_missing_conftime.insert(*txid, details);
}
(Some(old_height), Some(new_height)) if old_height != *new_height => {
// The height of the tx has changed!? -- It's a reorg; get the new confirmation time.
self.state.tx_missing_conftime.insert(*txid, details);
}
(Some(_), None) => {
// A re-org where the tx is not in the chain anymore.
details.confirmation_time = None;
self.state.finished_txs.push(details);
}
_ => self.state.finished_txs.push(details),
}
}
None => {
// we've never seen it; let's get the whole thing
self.state.tx_needed.insert(*txid);
}
};
}

self.script_index += 1;
}

for _ in txids {
self.scripts_needed.pop_front();
}

let last_active_index = self
.state
.last_active_index
.get(&self.keychain)
.map(|x| x + 1)
.unwrap_or(0); // so no active addresses maps to 0

Ok(
if self.script_index > last_active_index + self.stop_gap
|| self.scripts_needed.is_empty()
{
debug!(
"finished scanning for transactions for keychain {:?} at index {}",
self.keychain, last_active_index
);
// we're done here -- check if we need to do the next keychain
if let Some(keychain) = self.next_keychains.pop() {
self.keychain = keychain;
self.script_index = 0;
self.scripts_needed = self
.state
.db
.iter_script_pubkeys(Some(keychain))?
.into_iter()
.collect();
Request::Script(self)
} else {
Request::Tx(TxReq { state: self.state })
}
} else {
Request::Script(self)
},
)
}
}
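The termination check at the end of `satisfy` is the usual gap-limit rule: stop once `stop_gap` scripts past the last active index have been scanned, or once there is nothing left to scan. Isolated with the same variable meanings (a hypothetical free function, not BDK API):

```rust
/// Stop scanning once `stop_gap` scripts past the last active index have
/// been checked, or when no scripts remain.
fn should_stop(
    script_index: usize,
    last_active_index: usize,
    stop_gap: usize,
    scripts_left: usize,
) -> bool {
    script_index > last_active_index + stop_gap || scripts_left == 0
}

fn main() {
    assert!(!should_stop(20, 0, 20, 5)); // still inside the gap
    assert!(should_stop(21, 0, 20, 5)); // gap exceeded
    assert!(should_stop(3, 0, 20, 0)); // ran out of scripts
}
```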

/// Then we get full transactions
pub struct TxReq<'a, D> {
state: State<'a, D>,
}

impl<'a, D: BatchDatabase> TxReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
self.state.tx_needed.iter()
}

pub fn satisfy(
mut self,
tx_details: Vec<(Vec<Option<TxOut>>, Transaction)>,
) -> Result<Request<'a, D>, Error> {
let tx_details: Vec<TransactionDetails> = tx_details
.into_iter()
.zip(self.state.tx_needed.iter())
.map(|((vout, tx), txid)| {
debug!("found tx_details for {}", txid);
assert_eq!(tx.txid(), *txid);
let mut sent: u64 = 0;
let mut received: u64 = 0;
let mut inputs_sum: u64 = 0;
let mut outputs_sum: u64 = 0;

for (txout, (_input_index, input)) in
vout.into_iter().zip(tx.input.iter().enumerate())
{
let txout = match txout {
Some(txout) => txout,
None => {
// skip coinbase inputs
debug_assert!(
input.previous_output.is_null(),
"prevout should only be missing for coinbase"
);
continue;
}
};
// Verify this input if requested via feature flag
#[cfg(feature = "verify")]
{
use crate::wallet::verify::VerifyError;
let serialized_tx = bitcoin::consensus::serialize(&tx);
bitcoinconsensus::verify(
txout.script_pubkey.to_bytes().as_ref(),
txout.value,
&serialized_tx,
_input_index,
)
.map_err(VerifyError::from)?;
}
inputs_sum += txout.value;
if self.state.db.is_mine(&txout.script_pubkey)? {
sent += txout.value;
}
}

for out in &tx.output {
outputs_sum += out.value;
if self.state.db.is_mine(&out.script_pubkey)? {
received += out.value;
}
}
// we need a saturating sub since we want coinbase txs to map to a fee of 0, and
// this subtraction would be negative for coinbase txs.
let fee = inputs_sum.saturating_sub(outputs_sum);
Result::<_, Error>::Ok(TransactionDetails {
txid: *txid,
transaction: Some(tx),
received,
sent,
// we're going to fill this in later
confirmation_time: None,
fee: Some(fee),
})
})
.collect::<Result<Vec<_>, _>>()?;

for tx_detail in tx_details {
self.state.tx_needed.remove(&tx_detail.txid);
self.state
.tx_missing_conftime
.insert(tx_detail.txid, tx_detail);
}

if !self.state.tx_needed.is_empty() {
Ok(Request::Tx(self))
} else {
Ok(Request::Conftime(ConftimeReq { state: self.state }))
}
}
}
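As the comment in `satisfy` notes, a coinbase transaction has no real inputs, so `inputs_sum - outputs_sum` would underflow; `saturating_sub` maps that case to a fee of 0. The fee computation in isolation:

```rust
// Fee as computed in TxReq::satisfy: underflow (coinbase) clamps to 0.
fn fee(inputs_sum: u64, outputs_sum: u64) -> u64 {
    inputs_sum.saturating_sub(outputs_sum)
}

fn main() {
    assert_eq!(fee(10_000, 9_400), 600); // normal tx pays the difference
    assert_eq!(fee(0, 5_000_000_000), 0); // coinbase: no inputs, fee is 0
}
```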

/// Final step is to get confirmation times
pub struct ConftimeReq<'a, D> {
state: State<'a, D>,
}

impl<'a, D: BatchDatabase> ConftimeReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
self.state.tx_missing_conftime.keys()
}

pub fn satisfy(
mut self,
confirmation_times: Vec<Option<BlockTime>>,
) -> Result<Request<'a, D>, Error> {
let conftime_needed = self
.request()
.cloned()
.take(confirmation_times.len())
.collect::<Vec<_>>();
for (confirmation_time, txid) in confirmation_times.into_iter().zip(conftime_needed.iter())
{
debug!("confirmation time for {} was {:?}", txid, confirmation_time);
if let Some(mut tx_details) = self.state.tx_missing_conftime.remove(txid) {
tx_details.confirmation_time = confirmation_time;
self.state.finished_txs.push(tx_details);
}
}

if self.state.tx_missing_conftime.is_empty() {
Ok(Request::Finish(self.state.into_db_update()?))
} else {
Ok(Request::Conftime(self))
}
}
}

struct State<'a, D> {
db: &'a D,
last_active_index: HashMap<KeychainKind, usize>,
/// Transactions where we need to get the full details
tx_needed: BTreeSet<Txid>,
/// Transactions that we know everything about
finished_txs: Vec<TransactionDetails>,
/// Transactions that discovered conftimes should be inserted into
tx_missing_conftime: BTreeMap<Txid, TransactionDetails>,
/// The start of the sync
start_time: Instant,
}

impl<'a, D: BatchDatabase> State<'a, D> {
fn new(db: &'a D) -> Self {
State {
db,
last_active_index: HashMap::default(),
finished_txs: vec![],
tx_needed: BTreeSet::default(),
tx_missing_conftime: BTreeMap::default(),
start_time: Instant::new(),
}
}
fn into_db_update(self) -> Result<D::Batch, Error> {
debug_assert!(self.tx_needed.is_empty() && self.tx_missing_conftime.is_empty());
let existing_txs = self.db.iter_txs(false)?;
let existing_txids: HashSet<Txid> = existing_txs.iter().map(|tx| tx.txid).collect();
let finished_txs = make_txs_consistent(&self.finished_txs);
let observed_txids: HashSet<Txid> = finished_txs.iter().map(|tx| tx.txid).collect();
let txids_to_delete = existing_txids.difference(&observed_txids);

// Ensure `last_active_index` does not decrement the database's current state.
let index_updates = self
.last_active_index
.iter()
.map(|(keychain, sync_index)| {
let sync_index = *sync_index as u32;
let index_res = match self.db.get_last_index(*keychain) {
Ok(Some(db_index)) => Ok(std::cmp::max(db_index, sync_index)),
Ok(None) => Ok(sync_index),
Err(err) => Err(err),
};
index_res.map(|index| (*keychain, index))
})
.collect::<Result<Vec<(KeychainKind, u32)>, _>>()?;

let mut batch = self.db.begin_batch();

// Delete old txs that no longer exist
for txid in txids_to_delete {
if let Some(raw_tx) = self.db.get_raw_tx(txid)? {
for i in 0..raw_tx.output.len() {
// Also delete any utxos from the txs that no longer exist.
let _ = batch.del_utxo(&OutPoint {
txid: *txid,
vout: i as u32,
})?;
}
} else {
unreachable!("we should always have the raw tx");
}
batch.del_tx(txid, true)?;
}

let mut spent_utxos = HashSet::new();

// track all the spent utxos
for finished_tx in &finished_txs {
let tx = finished_tx
.transaction
.as_ref()
.expect("transaction will always be present here");
for input in &tx.input {
spent_utxos.insert(&input.previous_output);
}
}

// set every utxo we observed, unless it's already spent
// we don't do this in the loop above as we want to know all the spent outputs before
// adding the non-spent to the batch in case there are new transactions
// that spend from each other.
for finished_tx in &finished_txs {
let tx = finished_tx
.transaction
.as_ref()
.expect("transaction will always be present here");
for (i, output) in tx.output.iter().enumerate() {
if let Some((keychain, _)) =
self.db.get_path_from_script_pubkey(&output.script_pubkey)?
{
// add utxos we own from the new transactions we've seen.
let outpoint = OutPoint {
txid: finished_tx.txid,
vout: i as u32,
};

batch.set_utxo(&LocalUtxo {
outpoint,
txout: output.clone(),
keychain,
// Is this UTXO in the spent_utxos set?
is_spent: spent_utxos.get(&outpoint).is_some(),
})?;
}
}

batch.set_tx(finished_tx)?;
}

// apply index updates
for (keychain, new_index) in index_updates {
debug!("updating index ({}, {})", keychain.as_byte(), new_index);
batch.set_last_index(keychain, new_index)?;
}

info!(
"finished setup, elapsed {:?}ms",
self.start_time.elapsed().as_millis()
);
Ok(batch)
}
}

/// Remove conflicting transactions -- tie breaking them by fee.
fn make_txs_consistent(txs: &[TransactionDetails]) -> Vec<&TransactionDetails> {
let mut utxo_index: HashMap<OutPoint, &TransactionDetails> = HashMap::default();
for tx in txs {
for input in &tx.transaction.as_ref().unwrap().input {
utxo_index
.entry(input.previous_output)
.and_modify(|existing| match (tx.fee, existing.fee) {
(Some(fee), Some(existing_fee)) if fee > existing_fee => *existing = tx,
(Some(_), None) => *existing = tx,
_ => { /* leave it the same */ }
})
.or_insert(tx);
}
}

utxo_index
.into_iter()
.map(|(_, tx)| (tx.txid, tx))
.collect::<HashMap<_, _>>()
.into_iter()
.map(|(_, tx)| tx)
.collect()
}
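`make_txs_consistent` keeps, for each contested outpoint, only the conflicting transaction with the highest known fee (a known fee beats an unknown one). A self-contained sketch of that tie-break, with toy types in place of `TransactionDetails` and `OutPoint`:

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct ToyTx {
    txid: u32,
    spends: Vec<u32>, // outpoints this tx consumes
    fee: Option<u64>,
}

// For each spent outpoint, keep the conflicting tx with the highest fee,
// preferring a known fee over an unknown one -- as make_txs_consistent does.
fn keep_highest_fee(txs: &[ToyTx]) -> Vec<&ToyTx> {
    let mut index: HashMap<u32, &ToyTx> = HashMap::new();
    for tx in txs {
        for op in &tx.spends {
            index
                .entry(*op)
                .and_modify(|existing| match (tx.fee, existing.fee) {
                    (Some(fee), Some(existing_fee)) if fee > existing_fee => *existing = tx,
                    (Some(_), None) => *existing = tx,
                    _ => {}
                })
                .or_insert(tx);
        }
    }
    // dedupe winners by txid (a tx can win several outpoints)
    let mut winners: Vec<&ToyTx> = index.into_values().collect();
    winners.sort_by_key(|t| t.txid);
    winners.dedup_by_key(|t| t.txid);
    winners
}

fn main() {
    let txs = vec![
        ToyTx { txid: 1, spends: vec![7], fee: Some(100) },
        ToyTx { txid: 2, spends: vec![7], fee: Some(200) }, // conflicts, pays more
    ];
    let kept = keep_highest_fee(&txs);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].txid, 2);
}
```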

388 src/blockchain/utils.rs Normal file
@@ -0,0 +1,388 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.

use std::collections::{HashMap, HashSet};

#[allow(unused_imports)]
use log::{debug, error, info, trace};
use rand::seq::SliceRandom;
use rand::thread_rng;

use bitcoin::{BlockHeader, OutPoint, Script, Transaction, Txid};

use super::*;
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{ConfirmationTime, KeychainKind, LocalUtxo, TransactionDetails};
use crate::wallet::time::Instant;
use crate::wallet::utils::ChunksIterator;

#[derive(Debug)]
pub struct ElsGetHistoryRes {
pub height: i32,
pub tx_hash: Txid,
}

/// Implements the synchronization logic for an Electrum-like client.
#[maybe_async]
pub trait ElectrumLikeSync {
fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script> + Clone>(
&self,
scripts: I,
) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error>;

fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid> + Clone>(
&self,
txids: I,
) -> Result<Vec<Transaction>, Error>;

fn els_batch_block_header<I: IntoIterator<Item = u32> + Clone>(
&self,
heights: I,
) -> Result<Vec<BlockHeader>, Error>;

// Provided methods down here...

fn electrum_like_setup<D: BatchDatabase, P: Progress>(
&self,
stop_gap: usize,
db: &mut D,
_progress_update: P,
) -> Result<(), Error> {
// TODO: progress
let start = Instant::new();
debug!("start setup");

let chunk_size = stop_gap;

let mut history_txs_id = HashSet::new();
let mut txid_height = HashMap::new();
let mut max_indexes = HashMap::new();

let mut wallet_chains = vec![KeychainKind::Internal, KeychainKind::External];
// shuffling improves privacy: the server doesn't know whether my first request is for my internal or external addresses
wallet_chains.shuffle(&mut thread_rng());
// download history of our internal and external script_pubkeys
for keychain in wallet_chains.iter() {
let script_iter = db.iter_script_pubkeys(Some(*keychain))?.into_iter();

for (i, chunk) in ChunksIterator::new(script_iter, stop_gap).enumerate() {
// TODO if i == last, should create another chunk of addresses in db
let call_result: Vec<Vec<ElsGetHistoryRes>> =
maybe_await!(self.els_batch_script_get_history(chunk.iter()))?;
let max_index = call_result
.iter()
.enumerate()
.filter_map(|(i, v)| v.first().map(|_| i as u32))
.max();
if let Some(max) = max_index {
max_indexes.insert(keychain, max + (i * chunk_size) as u32);
}
let flattened: Vec<ElsGetHistoryRes> = call_result.into_iter().flatten().collect();
debug!("#{} of {:?} results:{}", i, keychain, flattened.len());
if flattened.is_empty() {
// Didn't find anything in the last `stop_gap` script_pubkeys, breaking
break;
}

for el in flattened {
// el.height = -1 means unconfirmed with unconfirmed parents
// el.height = 0 means unconfirmed with confirmed parents
// but we treat those tx the same
if el.height <= 0 {
txid_height.insert(el.tx_hash, None);
} else {
txid_height.insert(el.tx_hash, Some(el.height as u32));
}
history_txs_id.insert(el.tx_hash);
}
}
}

// saving max indexes
info!("max indexes are: {:?}", max_indexes);
for keychain in wallet_chains.iter() {
if let Some(index) = max_indexes.get(keychain) {
db.set_last_index(*keychain, *index)?;
}
}

// get db status
let txs_details_in_db: HashMap<Txid, TransactionDetails> = db
.iter_txs(false)?
.into_iter()
.map(|tx| (tx.txid, tx))
.collect();
let txs_raw_in_db: HashMap<Txid, Transaction> = db
.iter_raw_txs()?
.into_iter()
.map(|tx| (tx.txid(), tx))
.collect();
let utxos_deps = utxos_deps(db, &txs_raw_in_db)?;

// download new txs and headers
let new_txs = maybe_await!(self.download_and_save_needed_raw_txs(
&history_txs_id,
&txs_raw_in_db,
chunk_size,
db
))?;
let new_timestamps = maybe_await!(self.download_needed_headers(
&txid_height,
&txs_details_in_db,
chunk_size
))?;

let mut batch = db.begin_batch();

// save any tx details not in db but in history_txs_id or with different height/timestamp
for txid in history_txs_id.iter() {
let height = txid_height.get(txid).cloned().flatten();
let timestamp = new_timestamps.get(txid).cloned();
if let Some(tx_details) = txs_details_in_db.get(txid) {
// check if the tx height matches, otherwise update it. timestamp is not in the if clause
// because we are not asking for headers for confirmed txs we already know about
if tx_details.confirmation_time.as_ref().map(|c| c.height) != height {
let confirmation_time = ConfirmationTime::new(height, timestamp);
let mut new_tx_details = tx_details.clone();
new_tx_details.confirmation_time = confirmation_time;
batch.set_tx(&new_tx_details)?;
}
} else {
save_transaction_details_and_utxos(
txid,
db,
timestamp,
height,
&mut batch,
&utxos_deps,
)?;
}
}

// remove any tx details in db but not in history_txs_id
for txid in txs_details_in_db.keys() {
if !history_txs_id.contains(txid) {
batch.del_tx(txid, false)?;
}
}

// remove any spent utxo
for new_tx in new_txs.iter() {
for input in new_tx.input.iter() {
batch.del_utxo(&input.previous_output)?;
}
}

db.commit_batch(batch)?;
info!("finish setup, elapsed {:?}ms", start.elapsed().as_millis());

Ok(())
}
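Electrum history entries use `height <= 0` for unconfirmed transactions (`-1`: unconfirmed parents, `0`: confirmed parents); the history loop above collapses both into `None`. That mapping, isolated:

```rust
// Same mapping as in electrum_like_setup's history loop.
fn height_to_confirmation(height: i32) -> Option<u32> {
    if height <= 0 {
        None
    } else {
        Some(height as u32)
    }
}

fn main() {
    assert_eq!(height_to_confirmation(-1), None); // unconfirmed, unconfirmed parents
    assert_eq!(height_to_confirmation(0), None); // unconfirmed, confirmed parents
    assert_eq!(height_to_confirmation(700_000), Some(700_000));
}
```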
|
||||
|
||||
/// download txs identified by `history_txs_id` and theirs previous outputs if not already present in db
|
||||
    fn download_and_save_needed_raw_txs<D: BatchDatabase>(
        &self,
        history_txs_id: &HashSet<Txid>,
        txs_raw_in_db: &HashMap<Txid, Transaction>,
        chunk_size: usize,
        db: &mut D,
    ) -> Result<Vec<Transaction>, Error> {
        let mut txs_downloaded = vec![];
        let txids_raw_in_db: HashSet<Txid> = txs_raw_in_db.keys().cloned().collect();
        let txids_to_download: Vec<&Txid> = history_txs_id.difference(&txids_raw_in_db).collect();
        if !txids_to_download.is_empty() {
            info!("got {} txs to download", txids_to_download.len());
            txs_downloaded.extend(maybe_await!(self.download_and_save_in_chunks(
                txids_to_download,
                chunk_size,
                db,
            ))?);
            let mut prev_txids = HashSet::new();
            let mut txids_downloaded = HashSet::new();
            for tx in txs_downloaded.iter() {
                txids_downloaded.insert(tx.txid());
                // add every previous input tx, but skip coinbase
                for input in tx.input.iter().filter(|i| !i.previous_output.is_null()) {
                    prev_txids.insert(input.previous_output.txid);
                }
            }
            let already_present: HashSet<Txid> =
                txids_downloaded.union(&txids_raw_in_db).cloned().collect();
            let prev_txs_to_download: Vec<&Txid> =
                prev_txids.difference(&already_present).collect();
            info!("{} previous txs to download", prev_txs_to_download.len());
            txs_downloaded.extend(maybe_await!(self.download_and_save_in_chunks(
                prev_txs_to_download,
                chunk_size,
                db,
            ))?);
        }

        Ok(txs_downloaded)
    }

    /// download headers at heights in `txid_height` if tx details not already present, returns a map Txid -> timestamp
    fn download_needed_headers(
        &self,
        txid_height: &HashMap<Txid, Option<u32>>,
        txs_details_in_db: &HashMap<Txid, TransactionDetails>,
        chunk_size: usize,
    ) -> Result<HashMap<Txid, u64>, Error> {
        let mut txid_timestamp = HashMap::new();
        let txid_in_db_with_conf: HashSet<_> = txs_details_in_db
            .values()
            .filter_map(|details| details.confirmation_time.as_ref().map(|_| details.txid))
            .collect();
        let needed_txid_height: HashMap<&Txid, u32> = txid_height
            .iter()
            .filter(|(t, _)| !txid_in_db_with_conf.contains(*t))
            .filter_map(|(t, o)| o.map(|h| (t, h)))
            .collect();
        let needed_heights: HashSet<u32> = needed_txid_height.values().cloned().collect();
        if !needed_heights.is_empty() {
            info!("{} headers to download for timestamp", needed_heights.len());
            let mut height_timestamp: HashMap<u32, u64> = HashMap::new();
            for chunk in ChunksIterator::new(needed_heights.into_iter(), chunk_size) {
                let call_result: Vec<BlockHeader> =
                    maybe_await!(self.els_batch_block_header(chunk.clone()))?;
                height_timestamp.extend(
                    chunk
                        .into_iter()
                        .zip(call_result.iter().map(|h| h.time as u64)),
                );
            }
            for (txid, height) in needed_txid_height {
                let timestamp = height_timestamp
                    .get(&height)
                    .ok_or_else(|| Error::Generic("timestamp missing".to_string()))?;
                txid_timestamp.insert(*txid, *timestamp);
            }
        }

        Ok(txid_timestamp)
    }

    fn download_and_save_in_chunks<D: BatchDatabase>(
        &self,
        to_download: Vec<&Txid>,
        chunk_size: usize,
        db: &mut D,
    ) -> Result<Vec<Transaction>, Error> {
        let mut txs_downloaded = vec![];
        for chunk in ChunksIterator::new(to_download.into_iter(), chunk_size) {
            let call_result: Vec<Transaction> =
                maybe_await!(self.els_batch_transaction_get(chunk))?;
            let mut batch = db.begin_batch();
            for new_tx in call_result.iter() {
                batch.set_raw_tx(new_tx)?;
            }
            db.commit_batch(batch)?;
            txs_downloaded.extend(call_result);
        }

        Ok(txs_downloaded)
    }
}
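The download-what's-missing-then-batch pattern used by `download_and_save_needed_raw_txs` and `download_and_save_in_chunks` can be sketched with std-only types. This is a hedged illustration, not BDK code: `chunks` is a hypothetical stand-in for `ChunksIterator`, and plain `u32`s stand in for `Txid`s.

```rust
use std::collections::HashSet;

// Hypothetical stand-in for `ChunksIterator`: split ids into fixed-size batches.
fn chunks<T: Clone>(items: &[T], chunk_size: usize) -> Vec<Vec<T>> {
    items.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

fn main() {
    // like `history_txs_id.difference(&txids_raw_in_db)`: only fetch what's missing
    let in_db: HashSet<u32> = [1, 2].into_iter().collect();
    let history: HashSet<u32> = [1, 2, 3, 4, 5].into_iter().collect();
    let mut to_download: Vec<u32> = history.difference(&in_db).cloned().collect();
    to_download.sort_unstable();

    // each inner vec would become one batched electrum call plus one db batch commit
    let batches = chunks(&to_download, 2);
    assert_eq!(batches, vec![vec![3, 4], vec![5]]);
}
```

Committing one database batch per chunk keeps memory bounded and persists partial progress, so an interrupted sync does not have to re-download everything.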

fn save_transaction_details_and_utxos<D: BatchDatabase>(
    txid: &Txid,
    db: &mut D,
    timestamp: Option<u64>,
    height: Option<u32>,
    updates: &mut dyn BatchOperations,
    utxo_deps: &HashMap<OutPoint, OutPoint>,
) -> Result<(), Error> {
    let tx = db.get_raw_tx(txid)?.ok_or(Error::TransactionNotFound)?;

    let mut incoming: u64 = 0;
    let mut outgoing: u64 = 0;

    let mut inputs_sum: u64 = 0;
    let mut outputs_sum: u64 = 0;

    // look for our own inputs
    for input in tx.input.iter() {
        // skip coinbase inputs
        if input.previous_output.is_null() {
            continue;
        }

        // We already downloaded all previous output txs in the previous step
        if let Some(previous_output) = db.get_previous_output(&input.previous_output)? {
            inputs_sum += previous_output.value;

            if db.is_mine(&previous_output.script_pubkey)? {
                outgoing += previous_output.value;
            }
        } else {
            // The input is not ours, but we still need to count it for the fees
            let tx = db
                .get_raw_tx(&input.previous_output.txid)?
                .ok_or(Error::TransactionNotFound)?;
            inputs_sum += tx.output[input.previous_output.vout as usize].value;
        }

        // removes conflicting UTXO if any (generated from same inputs, like for example RBF)
        if let Some(outpoint) = utxo_deps.get(&input.previous_output) {
            updates.del_utxo(outpoint)?;
        }
    }

    for (i, output) in tx.output.iter().enumerate() {
        // to compute the fees later
        outputs_sum += output.value;

        // this output is ours, we have a path to derive it
        if let Some((keychain, _child)) = db.get_path_from_script_pubkey(&output.script_pubkey)? {
            debug!("{} output #{} is mine, adding utxo", txid, i);
            updates.set_utxo(&LocalUtxo {
                outpoint: OutPoint::new(tx.txid(), i as u32),
                txout: output.clone(),
                keychain,
            })?;

            incoming += output.value;
        }
    }

    let tx_details = TransactionDetails {
        txid: tx.txid(),
        transaction: Some(tx),
        received: incoming,
        sent: outgoing,
        confirmation_time: ConfirmationTime::new(height, timestamp),
        fee: Some(inputs_sum.saturating_sub(outputs_sum)), /* if the tx is a coinbase, fees would be negative */
        verified: height.is_some(),
    };
    updates.set_tx(&tx_details)?;

    Ok(())
}
/// returns the utxo dependencies, i.e. the inputs needed for each utxo to exist
/// `tx_raw_in_db` must contain the utxos' generating txs, or this errors with [crate::Error::TransactionNotFound]
fn utxos_deps<D: BatchDatabase>(
    db: &mut D,
    tx_raw_in_db: &HashMap<Txid, Transaction>,
) -> Result<HashMap<OutPoint, OutPoint>, Error> {
    let utxos = db.iter_utxos()?;
    let mut utxos_deps = HashMap::new();
    for utxo in utxos {
        let from_tx = tx_raw_in_db
            .get(&utxo.outpoint.txid)
            .ok_or(Error::TransactionNotFound)?;
        for input in from_tx.input.iter() {
            utxos_deps.insert(input.previous_output, utxo.outpoint);
        }
    }
    Ok(utxos_deps)
}
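`utxos_deps` maps each input spent by a utxo's creating transaction to that utxo's outpoint; this is what lets `save_transaction_details_and_utxos` evict a stale utxo when a conflicting spend of the same previous output (e.g. an RBF replacement) appears. A std-only sketch under stated assumptions: `(txid, vout)` tuples of `u32`s stand in for `OutPoint`, and the database iteration is replaced by a plain slice.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for `OutPoint`: (txid, vout).
type Op = (u32, u32);

/// Mirror of `utxos_deps`: map every previous output spent by the tx that
/// created a utxo to that utxo's outpoint.
fn utxo_deps(utxos: &[(Op, Vec<Op>)]) -> HashMap<Op, Op> {
    let mut deps = HashMap::new();
    for (utxo_outpoint, creating_tx_inputs) in utxos {
        for input in creating_tx_inputs {
            deps.insert(*input, *utxo_outpoint);
        }
    }
    deps
}

fn main() {
    // utxo (10, 0) was created by a tx that spent (1, 0) and (2, 1)
    let deps = utxo_deps(&[((10, 0), vec![(1, 0), (2, 1)])]);
    // a replacement tx spending (1, 0) conflicts with that utxo, so delete it
    assert_eq!(deps.get(&(1, 0)), Some(&(10, 0)));
    assert_eq!(deps.get(&(3, 0)), None);
}
```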

@@ -23,12 +23,12 @@
//! # use bdk::database::{AnyDatabase, MemoryDatabase};
//! # use bdk::{Wallet};
//! let memory = MemoryDatabase::default();
//! let wallet_memory = Wallet::new("...", None, Network::Testnet, memory)?;
//! let wallet_memory = Wallet::new_offline("...", None, Network::Testnet, memory)?;
//!
//! # #[cfg(feature = "key-value-db")]
//! # {
//! let sled = sled::open("my-database")?.open_tree("default_tree")?;
//! let wallet_sled = Wallet::new("...", None, Network::Testnet, sled)?;
//! let wallet_sled = Wallet::new_offline("...", None, Network::Testnet, sled)?;
//! # }
//! # Ok::<(), bdk::Error>(())
//! ```
@@ -42,7 +42,7 @@
//! # use bdk::{Wallet};
//! let config = serde_json::from_str("...")?;
//! let database = AnyDatabase::from_config(&config)?;
//! let wallet = Wallet::new("...", None, Network::Testnet, database)?;
//! let wallet = Wallet::new_offline("...", None, Network::Testnet, database)?;
//! # Ok::<(), bdk::Error>(())
//! ```

@@ -61,13 +61,10 @@ macro_rules! impl_from {

macro_rules! impl_inner_method {
    ( $enum_name:ident, $self:expr, $name:ident $(, $args:expr)* ) => {
        #[allow(deprecated)]
        match $self {
            $enum_name::Memory(inner) => inner.$name( $($args, )* ),
            #[cfg(feature = "key-value-db")]
            $enum_name::Sled(inner) => inner.$name( $($args, )* ),
            #[cfg(feature = "sqlite")]
            $enum_name::Sqlite(inner) => inner.$name( $($args, )* ),
        }
    }
}
@@ -85,15 +82,10 @@ pub enum AnyDatabase {
    #[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
    /// Simple key-value embedded database based on [`sled`]
    Sled(sled::Tree),
    #[cfg(feature = "sqlite")]
    #[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
    /// Sqlite embedded database using [`rusqlite`]
    Sqlite(sqlite::SqliteDatabase),
}

impl_from!(memory::MemoryDatabase, AnyDatabase, Memory,);
impl_from!(sled::Tree, AnyDatabase, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(sqlite::SqliteDatabase, AnyDatabase, Sqlite, #[cfg(feature = "sqlite")]);

/// Type that contains any of the [`BatchDatabase::Batch`] types defined by the library
pub enum AnyBatch {
@@ -103,10 +95,6 @@ pub enum AnyBatch {
    #[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
    /// Simple key-value embedded database based on [`sled`]
    Sled(<sled::Tree as BatchDatabase>::Batch),
    #[cfg(feature = "sqlite")]
    #[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
    /// Sqlite embedded database using [`rusqlite`]
    Sqlite(<sqlite::SqliteDatabase as BatchDatabase>::Batch),
}

impl_from!(
@@ -115,7 +103,6 @@ impl_from!(
    Memory,
);
impl_from!(<sled::Tree as BatchDatabase>::Batch, AnyBatch, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(<sqlite::SqliteDatabase as BatchDatabase>::Batch, AnyBatch, Sqlite, #[cfg(feature = "sqlite")]);

impl BatchOperations for AnyDatabase {
    fn set_script_pubkey(
@@ -145,9 +132,6 @@ impl BatchOperations for AnyDatabase {
    fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
        impl_inner_method!(AnyDatabase, self, set_last_index, keychain, value)
    }
    fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error> {
        impl_inner_method!(AnyDatabase, self, set_sync_time, sync_time)
    }

    fn del_script_pubkey_from_path(
        &mut self,
@@ -184,9 +168,6 @@ impl BatchOperations for AnyDatabase {
    fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
        impl_inner_method!(AnyDatabase, self, del_last_index, keychain)
    }
    fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
        impl_inner_method!(AnyDatabase, self, del_sync_time)
    }
}

impl Database for AnyDatabase {
@@ -248,9 +229,6 @@ impl Database for AnyDatabase {
    fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
        impl_inner_method!(AnyDatabase, self, get_last_index, keychain)
    }
    fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
        impl_inner_method!(AnyDatabase, self, get_sync_time)
    }

    fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error> {
        impl_inner_method!(AnyDatabase, self, increment_last_index, keychain)
@@ -278,9 +256,6 @@ impl BatchOperations for AnyBatch {
    fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
        impl_inner_method!(AnyBatch, self, set_last_index, keychain, value)
    }
    fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error> {
        impl_inner_method!(AnyBatch, self, set_sync_time, sync_time)
    }

    fn del_script_pubkey_from_path(
        &mut self,
@@ -311,9 +286,6 @@ impl BatchOperations for AnyBatch {
    fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
        impl_inner_method!(AnyBatch, self, del_last_index, keychain)
    }
    fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
        impl_inner_method!(AnyBatch, self, del_sync_time)
    }
}

impl BatchDatabase for AnyDatabase {
@@ -324,26 +296,19 @@ impl BatchDatabase for AnyDatabase {
            AnyDatabase::Memory(inner) => inner.begin_batch().into(),
            #[cfg(feature = "key-value-db")]
            AnyDatabase::Sled(inner) => inner.begin_batch().into(),
            #[cfg(feature = "sqlite")]
            AnyDatabase::Sqlite(inner) => inner.begin_batch().into(),
        }
    }
    fn commit_batch(&mut self, batch: Self::Batch) -> Result<(), Error> {
        match self {
            AnyDatabase::Memory(db) => match batch {
                AnyBatch::Memory(batch) => db.commit_batch(batch),
                #[cfg(any(feature = "key-value-db", feature = "sqlite"))]
                _ => unimplemented!("Other batch shouldn't be used with Memory db."),
                #[cfg(feature = "key-value-db")]
                _ => unimplemented!("Sled batch shouldn't be used with Memory db."),
            },
            #[cfg(feature = "key-value-db")]
            AnyDatabase::Sled(db) => match batch {
                AnyBatch::Sled(batch) => db.commit_batch(batch),
                _ => unimplemented!("Other batch shouldn't be used with Sled db."),
            },
            #[cfg(feature = "sqlite")]
            AnyDatabase::Sqlite(db) => match batch {
                AnyBatch::Sqlite(batch) => db.commit_batch(batch),
                _ => unimplemented!("Other batch shouldn't be used with Sqlite db."),
                _ => unimplemented!("Memory batch shouldn't be used with Sled db."),
            },
        }
    }
@@ -368,23 +333,6 @@ impl ConfigurableDatabase for sled::Tree {
    }
}

/// Configuration type for a [`sqlite::SqliteDatabase`] database
#[cfg(feature = "sqlite")]
#[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct SqliteDbConfiguration {
    /// Main directory of the db
    pub path: String,
}

#[cfg(feature = "sqlite")]
impl ConfigurableDatabase for sqlite::SqliteDatabase {
    type Config = SqliteDbConfiguration;

    fn from_config(config: &Self::Config) -> Result<Self, Error> {
        Ok(sqlite::SqliteDatabase::new(config.path.clone()))
    }
}

/// Type that can contain any of the database configurations defined by the library
///
/// This allows storing a single configuration that can be loaded into an [`AnyDatabase`]
@@ -398,10 +346,6 @@ pub enum AnyDatabaseConfig {
    #[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
    /// Simple key-value embedded database based on [`sled`]
    Sled(SledDbConfiguration),
    #[cfg(feature = "sqlite")]
    #[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
    /// Sqlite embedded database using [`rusqlite`]
    Sqlite(SqliteDbConfiguration),
}

impl ConfigurableDatabase for AnyDatabase {
@@ -414,14 +358,9 @@ impl ConfigurableDatabase for AnyDatabase {
            }
            #[cfg(feature = "key-value-db")]
            AnyDatabaseConfig::Sled(inner) => AnyDatabase::Sled(sled::Tree::from_config(inner)?),
            #[cfg(feature = "sqlite")]
            AnyDatabaseConfig::Sqlite(inner) => {
                AnyDatabase::Sqlite(sqlite::SqliteDatabase::from_config(inner)?)
            }
        })
    }
}

impl_from!((), AnyDatabaseConfig, Memory,);
impl_from!(SledDbConfiguration, AnyDatabaseConfig, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(SqliteDbConfiguration, AnyDatabaseConfig, Sqlite, #[cfg(feature = "sqlite")]);

@@ -18,7 +18,7 @@ use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction};

use crate::database::memory::MapKey;
use crate::database::{BatchDatabase, BatchOperations, Database, SyncTime};
use crate::database::{BatchDatabase, BatchOperations, Database};
use crate::error::Error;
use crate::types::*;

@@ -43,7 +43,6 @@ macro_rules! impl_batch_operations {
            let value = json!({
                "t": utxo.txout,
                "i": utxo.keychain,
                "s": utxo.is_spent,
            });
            self.insert(key, serde_json::to_vec(&value)?)$($after_insert)*;

@@ -83,13 +82,6 @@ macro_rules! impl_batch_operations {
            Ok(())
        }

        fn set_sync_time(&mut self, data: SyncTime) -> Result<(), Error> {
            let key = MapKey::SyncTime.as_map_key();
            self.insert(key, serde_json::to_vec(&data)?)$($after_insert)*;

            Ok(())
        }

        fn del_script_pubkey_from_path(&mut self, keychain: KeychainKind, path: u32) -> Result<Option<Script>, Error> {
            let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
            let res = self.remove(key);
@@ -126,9 +118,8 @@ macro_rules! impl_batch_operations {
                    let mut val: serde_json::Value = serde_json::from_slice(&b)?;
                    let txout = serde_json::from_value(val["t"].take())?;
                    let keychain = serde_json::from_value(val["i"].take())?;
                    let is_spent = val.get_mut("s").and_then(|s| s.take().as_bool()).unwrap_or(false);

                    Ok(Some(LocalUtxo { outpoint: outpoint.clone(), txout, keychain, is_spent, }))
                    Ok(Some(LocalUtxo { outpoint: outpoint.clone(), txout, keychain }))
                }
            }
        }
@@ -166,17 +157,16 @@ macro_rules! impl_batch_operations {
        fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
            let key = MapKey::LastIndex(keychain).as_map_key();
            let res = self.remove(key);
            $process_delete!(res)
                .map(ivec_to_u32)
                .transpose()
        }

        fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
            let key = MapKey::SyncTime.as_map_key();
            let res = self.remove(key);
            let res = $process_delete!(res);

            Ok(res.map(|b| serde_json::from_slice(&b)).transpose()?)
            match res {
                None => Ok(None),
                Some(b) => {
                    let array: [u8; 4] = b.as_ref().try_into().map_err(|_| Error::InvalidU32Bytes(b.to_vec()))?;
                    let val = u32::from_be_bytes(array);
                    Ok(Some(val))
                }
            }
        }
    }
}
@@ -241,16 +231,11 @@ impl Database for Tree {
                let mut val: serde_json::Value = serde_json::from_slice(&v)?;
                let txout = serde_json::from_value(val["t"].take())?;
                let keychain = serde_json::from_value(val["i"].take())?;
                let is_spent = val
                    .get_mut("s")
                    .and_then(|s| s.take().as_bool())
                    .unwrap_or(false);

                Ok(LocalUtxo {
                    outpoint,
                    txout,
                    keychain,
                    is_spent,
                })
            })
            .collect()
@@ -314,16 +299,11 @@ impl Database for Tree {
                let mut val: serde_json::Value = serde_json::from_slice(&b)?;
                let txout = serde_json::from_value(val["t"].take())?;
                let keychain = serde_json::from_value(val["i"].take())?;
                let is_spent = val
                    .get_mut("s")
                    .and_then(|s| s.take().as_bool())
                    .unwrap_or(false);

                Ok(LocalUtxo {
                    outpoint: *outpoint,
                    txout,
                    keychain,
                    is_spent,
                })
            })
            .transpose()
@@ -350,15 +330,16 @@ impl Database for Tree {

    fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
        let key = MapKey::LastIndex(keychain).as_map_key();
        self.get(key)?.map(ivec_to_u32).transpose()
    }

    fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
        let key = MapKey::SyncTime.as_map_key();
        Ok(self
            .get(key)?
            .map(|b| serde_json::from_slice(&b))
            .transpose()?)
        self.get(key)?
            .map(|b| -> Result<_, Error> {
                let array: [u8; 4] = b
                    .as_ref()
                    .try_into()
                    .map_err(|_| Error::InvalidU32Bytes(b.to_vec()))?;
                let val = u32::from_be_bytes(array);
                Ok(val)
            })
            .transpose()
    }

    // inserts 0 if not present
@@ -377,19 +358,17 @@ impl Database for Tree {

                Some(new.to_be_bytes().to_vec())
            })?
            .map_or(Ok(0), ivec_to_u32)
            .map_or(Ok(0), |b| -> Result<_, Error> {
                let array: [u8; 4] = b
                    .as_ref()
                    .try_into()
                    .map_err(|_| Error::InvalidU32Bytes(b.to_vec()))?;
                let val = u32::from_be_bytes(array);
                Ok(val)
            })
    }
}

fn ivec_to_u32(b: sled::IVec) -> Result<u32, Error> {
    let array: [u8; 4] = b
        .as_ref()
        .try_into()
        .map_err(|_| Error::InvalidU32Bytes(b.to_vec()))?;
    let val = u32::from_be_bytes(array);
    Ok(val)
}
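The `ivec_to_u32` helper introduced here inverts the `to_be_bytes` encoding used when last-index values are written. The round trip can be checked with std-only types; `bytes_to_u32` below is a hypothetical stand-in that takes a plain byte slice instead of a `sled::IVec` and returns the raw bytes on failure instead of `Error::InvalidU32Bytes`.

```rust
// Hypothetical stand-in for `ivec_to_u32`, minus the sled and Error types:
// a stored value must be exactly 4 big-endian bytes.
fn bytes_to_u32(b: &[u8]) -> Result<u32, Vec<u8>> {
    let array: [u8; 4] = b.try_into().map_err(|_| b.to_vec())?;
    Ok(u32::from_be_bytes(array))
}

fn main() {
    let stored = 42u32.to_be_bytes(); // what the database writes for an index
    assert_eq!(bytes_to_u32(&stored), Ok(42));
    assert!(bytes_to_u32(&[1, 2, 3]).is_err()); // wrong length is rejected
}
```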

impl BatchDatabase for Tree {
    type Batch = sled::Batch;

@@ -487,9 +466,4 @@ mod test {
    fn test_last_index() {
        crate::database::test::test_last_index(get_tree());
    }

    #[test]
    fn test_sync_time() {
        crate::database::test::test_sync_time(get_tree());
    }
}

@@ -14,7 +14,6 @@
//! This module defines an in-memory database type called [`MemoryDatabase`] that is based on a
//! [`BTreeMap`].

use std::any::Any;
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included};

@@ -22,7 +21,7 @@ use bitcoin::consensus::encode::{deserialize, serialize};
use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction};

use crate::database::{BatchDatabase, BatchOperations, ConfigurableDatabase, Database, SyncTime};
use crate::database::{BatchDatabase, BatchOperations, ConfigurableDatabase, Database};
use crate::error::Error;
use crate::types::*;

@@ -33,7 +32,6 @@ use crate::types::*;
// transactions t<txid> -> tx details
// deriv indexes c{i,e} -> u32
// descriptor checksum d{i,e} -> vec<u8>
// last sync time l -> { height, timestamp }

pub(crate) enum MapKey<'a> {
    Path((Option<KeychainKind>, Option<u32>)),
@@ -42,7 +40,6 @@ pub(crate) enum MapKey<'a> {
    RawTx(Option<&'a Txid>),
    Transaction(Option<&'a Txid>),
    LastIndex(KeychainKind),
    SyncTime,
    DescriptorChecksum(KeychainKind),
}

@@ -61,7 +58,6 @@ impl MapKey<'_> {
            MapKey::RawTx(_) => b"r".to_vec(),
            MapKey::Transaction(_) => b"t".to_vec(),
            MapKey::LastIndex(st) => [b"c", st.as_ref()].concat(),
            MapKey::SyncTime => b"l".to_vec(),
            MapKey::DescriptorChecksum(st) => [b"d", st.as_ref()].concat(),
        }
    }
@@ -109,12 +105,12 @@ fn after(key: &[u8]) -> Vec<u8> {
/// Once it's dropped its content will be lost.
///
/// If you are looking for a permanent storage solution, you can try with the default key-value
/// database called [`sled`]. See the [`database`] module documentation for more details.
/// database called [`sled`]. See the [`database`] module documentation for more defailts.
///
/// [`database`]: crate::database
#[derive(Debug, Default)]
pub struct MemoryDatabase {
    map: BTreeMap<Vec<u8>, Box<dyn Any + Send + Sync>>,
    map: BTreeMap<Vec<u8>, Box<dyn std::any::Any>>,
    deleted_keys: Vec<Vec<u8>>,
}

@@ -150,10 +146,8 @@ impl BatchOperations for MemoryDatabase {

    fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error> {
        let key = MapKey::Utxo(Some(&utxo.outpoint)).as_map_key();
        self.map.insert(
            key,
            Box::new((utxo.txout.clone(), utxo.keychain, utxo.is_spent)),
        );
        self.map
            .insert(key, Box::new((utxo.txout.clone(), utxo.keychain)));

        Ok(())
    }
@@ -185,12 +179,6 @@ impl BatchOperations for MemoryDatabase {

        Ok(())
    }
    fn set_sync_time(&mut self, data: SyncTime) -> Result<(), Error> {
        let key = MapKey::SyncTime.as_map_key();
        self.map.insert(key, Box::new(data));

        Ok(())
    }

    fn del_script_pubkey_from_path(
        &mut self,
@@ -230,12 +218,11 @@ impl BatchOperations for MemoryDatabase {
        match res {
            None => Ok(None),
            Some(b) => {
                let (txout, keychain, is_spent) = b.downcast_ref().cloned().unwrap();
                let (txout, keychain) = b.downcast_ref().cloned().unwrap();
                Ok(Some(LocalUtxo {
                    outpoint: *outpoint,
                    txout,
                    keychain,
                    is_spent,
                }))
            }
        }
@@ -282,13 +269,6 @@ impl BatchOperations for MemoryDatabase {
            Some(b) => Ok(Some(*b.downcast_ref().unwrap())),
        }
    }
    fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
        let key = MapKey::SyncTime.as_map_key();
        let res = self.map.remove(&key);
        self.deleted_keys.push(key);

        Ok(res.map(|b| b.downcast_ref().cloned().unwrap()))
    }
}

impl Database for MemoryDatabase {
@@ -329,12 +309,11 @@ impl Database for MemoryDatabase {
            .range::<Vec<u8>, _>((Included(&key), Excluded(&after(&key))))
            .map(|(k, v)| {
                let outpoint = deserialize(&k[1..]).unwrap();
                let (txout, keychain, is_spent) = v.downcast_ref().cloned().unwrap();
                let (txout, keychain) = v.downcast_ref().cloned().unwrap();
                Ok(LocalUtxo {
                    outpoint,
                    txout,
                    keychain,
                    is_spent,
                })
            })
            .collect()
@@ -393,12 +372,11 @@ impl Database for MemoryDatabase {
    fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
        let key = MapKey::Utxo(Some(outpoint)).as_map_key();
        Ok(self.map.get(&key).map(|b| {
            let (txout, keychain, is_spent) = b.downcast_ref().cloned().unwrap();
            let (txout, keychain) = b.downcast_ref().cloned().unwrap();
            LocalUtxo {
                outpoint: *outpoint,
                txout,
                keychain,
                is_spent,
            }
        }))
    }
@@ -428,14 +406,6 @@ impl Database for MemoryDatabase {
        Ok(self.map.get(&key).map(|b| *b.downcast_ref().unwrap()))
    }

    fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
        let key = MapKey::SyncTime.as_map_key();
        Ok(self
            .map
            .get(&key)
            .map(|b| b.downcast_ref().cloned().unwrap()))
    }

    // inserts 0 if not present
    fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error> {
        let key = MapKey::LastIndex(keychain).as_map_key();
@@ -482,29 +452,20 @@ impl ConfigurableDatabase for MemoryDatabase {
/// don't have `test` set.
macro_rules! populate_test_db {
    ($db:expr, $tx_meta:expr, $current_height:expr$(,)?) => {{
        $crate::populate_test_db!($db, $tx_meta, $current_height, (@coinbase false))
    }};
    ($db:expr, $tx_meta:expr, $current_height:expr, (@coinbase $is_coinbase:expr)$(,)?) => {{
        use std::str::FromStr;
        use $crate::database::BatchOperations;
        let mut db = $db;
        let tx_meta = $tx_meta;
        let current_height: Option<u32> = $current_height;
        let input = if $is_coinbase {
            vec![$crate::bitcoin::TxIn::default()]
        } else {
            vec![]
        };
        let tx = $crate::bitcoin::Transaction {
        let tx = Transaction {
            version: 1,
            lock_time: 0,
            input,
            input: vec![],
            output: tx_meta
                .output
                .iter()
                .map(|out_meta| $crate::bitcoin::TxOut {
                .map(|out_meta| bitcoin::TxOut {
                    value: out_meta.value,
                    script_pubkey: $crate::bitcoin::Address::from_str(&out_meta.to_address)
                    script_pubkey: bitcoin::Address::from_str(&out_meta.to_address)
                        .unwrap()
                        .script_pubkey(),
                })
@@ -512,33 +473,30 @@ macro_rules! populate_test_db {
        };

        let txid = tx.txid();
        let confirmation_time = tx_meta
            .min_confirmations
            .and_then(|v| if v == 0 { None } else { Some(v) })
            .map(|conf| $crate::BlockTime {
                height: current_height.unwrap().checked_sub(conf as u32).unwrap() + 1,
                timestamp: 0,
            });
        let confirmation_time = tx_meta.min_confirmations.map(|conf| ConfirmationTime {
            height: current_height.unwrap().checked_sub(conf as u32).unwrap(),
            timestamp: 0,
        });

        let tx_details = $crate::TransactionDetails {
        let tx_details = TransactionDetails {
            transaction: Some(tx.clone()),
            txid,
            fee: Some(0),
            received: 0,
            sent: 0,
            confirmation_time,
            verified: current_height.is_some(),
        };

        db.set_tx(&tx_details).unwrap();
        for (vout, out) in tx.output.iter().enumerate() {
            db.set_utxo(&$crate::LocalUtxo {
            db.set_utxo(&LocalUtxo {
                txout: out.clone(),
                outpoint: $crate::bitcoin::OutPoint {
                outpoint: OutPoint {
                    txid,
                    vout: vout as u32,
                },
                keychain: $crate::KeychainKind::External,
                is_spent: false,
                keychain: KeychainKind::External,
            })
            .unwrap();
        }
@@ -567,7 +525,7 @@ macro_rules! doctest_wallet {
            Some(100),
        );

        $crate::Wallet::new(
        $crate::Wallet::new_offline(
            &descriptors.0,
            descriptors.1.as_ref(),
            Network::Regtest,
@@ -624,9 +582,4 @@ mod test {
    fn test_last_index() {
        crate::database::test::test_last_index(get_tree());
    }

    #[test]
    fn test_sync_time() {
        crate::database::test::test_sync_time(get_tree());
    }
}

@@ -24,8 +24,6 @@
//!
//! [`Wallet`]: crate::wallet::Wallet

use serde::{Deserialize, Serialize};

use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction, TxOut};

@@ -38,23 +36,9 @@ pub use any::{AnyDatabase, AnyDatabaseConfig};
#[cfg(feature = "key-value-db")]
pub(crate) mod keyvalue;

#[cfg(feature = "sqlite")]
pub(crate) mod sqlite;
#[cfg(feature = "sqlite")]
pub use sqlite::SqliteDatabase;

pub mod memory;
pub use memory::MemoryDatabase;

/// Blockchain state at the time of syncing
///
/// Contains only the block time and height at the moment
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct SyncTime {
    /// Block timestamp and height at the time of sync
    pub block_time: BlockTime,
}

/// Trait for operations that can be batched
///
/// This trait defines the list of operations that must be implemented on the [`Database`] type and
@@ -75,8 +59,6 @@ pub trait BatchOperations {
    fn set_tx(&mut self, transaction: &TransactionDetails) -> Result<(), Error>;
    /// Store the last derivation index for a given keychain.
    fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error>;
    /// Store the sync time
    fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error>;

    /// Delete a script_pubkey given the keychain and its child number.
    fn del_script_pubkey_from_path(
@@ -102,10 +84,6 @@ pub trait BatchOperations {
    ) -> Result<Option<TransactionDetails>, Error>;
    /// Delete the last derivation index for a keychain.
    fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error>;
    /// Reset the sync time to `None`
    ///
    /// Returns the removed value
    fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error>;
}

/// Trait for reading data from a database
@@ -149,10 +127,8 @@ pub trait Database: BatchOperations {
    fn get_raw_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error>;
    /// Fetch the transaction metadata and optionally also the raw transaction
    fn get_tx(&self, txid: &Txid, include_raw: bool) -> Result<Option<TransactionDetails>, Error>;
    /// Return the last derivation index for a keychain.
    /// Return the last defivation index for a keychain.
    fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error>;
    /// Return the sync time, if present
    fn get_sync_time(&self) -> Result<Option<SyncTime>, Error>;

    /// Increment the last derivation index for a keychain and return it
    ///
@@ -193,7 +169,8 @@ pub(crate) trait DatabaseUtils: Database {
        D: FnOnce() -> Result<Option<Transaction>, Error>,
    {
        self.get_tx(txid, true)?
            .and_then(|t| t.transaction)
            .map(|t| t.transaction)
            .flatten()
            .map_or_else(default, |t| Ok(Some(t)))
    }

@@ -312,12 +289,10 @@ pub mod test {
            txout,
            outpoint,
            keychain: KeychainKind::External,
            is_spent: true,
        };

        tree.set_utxo(&utxo).unwrap();
        tree.set_utxo(&utxo).unwrap();
        assert_eq!(tree.iter_utxos().unwrap().len(), 1);

        assert_eq!(tree.get_utxo(&outpoint).unwrap(), Some(utxo));
    }

@@ -342,10 +317,11 @@ pub mod test {
            received: 1337,
            sent: 420420,
            fee: Some(140),
            confirmation_time: Some(BlockTime {
            confirmation_time: Some(ConfirmationTime {
                timestamp: 123456,
                height: 1000,
            }),
            verified: true,
        };

        tree.set_tx(&tx_details).unwrap();
@@ -369,34 +345,6 @@ pub mod test {
        );
    }

    pub fn test_list_transaction<D: Database>(mut tree: D) {
        let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
        let tx: Transaction = deserialize(&hex_tx).unwrap();
        let txid = tx.txid();
        let mut tx_details = TransactionDetails {
            transaction: Some(tx),
            txid,
            received: 1337,
            sent: 420420,
            fee: Some(140),
            confirmation_time: Some(BlockTime {
                timestamp: 123456,
                height: 1000,
            }),
        };

        tree.set_tx(&tx_details).unwrap();

        // get raw tx
        assert_eq!(tree.iter_txs(true).unwrap(), vec![tx_details.clone()]);

        // now get without raw tx
        tx_details.transaction = None;

        // get not raw tx
        assert_eq!(tree.iter_txs(false).unwrap(), vec![tx_details.clone()]);
    }

    pub fn test_last_index<D: Database>(mut tree: D) {
|
||||
tree.set_last_index(KeychainKind::External, 1337).unwrap();
|
||||
|
||||
@@ -421,25 +369,5 @@ pub mod test {
);
}

pub fn test_sync_time<D: Database>(mut tree: D) {
assert!(tree.get_sync_time().unwrap().is_none());

tree.set_sync_time(SyncTime {
block_time: BlockTime {
height: 100,
timestamp: 1000,
},
})
.unwrap();

let extracted = tree.get_sync_time().unwrap();
assert!(extracted.is_some());
assert_eq!(extracted.as_ref().unwrap().block_time.height, 100);
assert_eq!(extracted.as_ref().unwrap().block_time.timestamp, 1000);

tree.del_sync_time().unwrap();
assert!(tree.get_sync_time().unwrap().is_none());
}

// TODO: more tests...
}
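The `test_sync_time` round-trip above (set, read back, delete, read `None`) boils down to an `Option` field with take-on-delete semantics. A minimal in-memory sketch, with stand-in types rather than BDK's `Database`:

```rust
// Stand-in for the block time stored alongside a sync checkpoint.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct BlockTime {
    height: u32,
    timestamp: u64,
}

// Minimal store mirroring set_sync_time / get_sync_time / del_sync_time.
#[derive(Default)]
struct Store {
    sync_time: Option<BlockTime>,
}

impl Store {
    fn set_sync_time(&mut self, t: BlockTime) {
        self.sync_time = Some(t);
    }
    fn get_sync_time(&self) -> Option<BlockTime> {
        self.sync_time
    }
    // `take()` returns the removed value and leaves `None` behind,
    // matching the documented "returns the removed value" contract.
    fn del_sync_time(&mut self) -> Option<BlockTime> {
        self.sync_time.take()
    }
}
```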
|
||||
|
||||
File diff suppressed because it is too large
@@ -10,41 +10,6 @@
// licenses.

//! Derived descriptor keys
//!
//! The [`DerivedDescriptorKey`] type is a wrapper over the standard [`DescriptorPublicKey`] which
//! guarantees that all the extended keys have a fixed derivation path, i.e. all the wildcards have
//! been replaced by actual derivation indexes.
//!
//! The [`AsDerived`] trait provides a quick way to derive descriptors to obtain a
//! `Descriptor<DerivedDescriptorKey>` type. This, in turn, can be used to derive public
//! keys for arbitrary derivation indexes.
//!
//! Combining this with [`Wallet::get_signers`], secret keys can also be derived.
//!
//! # Example
//!
//! ```
//! # use std::str::FromStr;
//! # use bitcoin::secp256k1::Secp256k1;
//! use bdk::descriptor::{AsDerived, DescriptorPublicKey};
//! use bdk::miniscript::{ToPublicKey, TranslatePk, MiniscriptKey};
//!
//! let secp = Secp256k1::gen_new();
//!
//! let key = DescriptorPublicKey::from_str("[aa600a45/84'/0'/0']tpubDCbDXFKoLTQp44wQuC12JgSn5g9CWGjZdpBHeTqyypZ4VvgYjTJmK9CkyR5bFvG9f4PutvwmvpYCLkFx2rpx25hiMs4sUgxJveW8ZzSAVAc/0/*")?;
//! let (descriptor, _, _) = bdk::descriptor!(wpkh(key))?;
//!
//! // derived: wpkh([aa600a45/84'/0'/0']tpubDCbDXFKoLTQp44wQuC12JgSn5g9CWGjZdpBHeTqyypZ4VvgYjTJmK9CkyR5bFvG9f4PutvwmvpYCLkFx2rpx25hiMs4sUgxJveW8ZzSAVAc/0/42)#3ladd0t2
//! let derived = descriptor.as_derived(42, &secp);
//! println!("derived: {}", derived);
//!
//! // with_pks: wpkh(02373ecb54c5e83bd7e0d40adf78b65efaf12fafb13571f0261fc90364eee22e1e)#p4jjgvll
//! let with_pks = derived.translate_pk_infallible(|pk| pk.to_public_key(), |pkh| pkh.to_public_key().to_pubkeyhash());
//! println!("with_pks: {}", with_pks);
//! # Ok::<(), Box<dyn std::error::Error>>(())
//! ```
//!
//! [`Wallet::get_signers`]: crate::wallet::Wallet::get_signers

use std::cmp::Ordering;
use std::fmt;
@@ -52,10 +17,13 @@ use std::hash::{Hash, Hasher};
|
||||
use std::ops::Deref;
|
||||
|
||||
use bitcoin::hashes::hash160;
|
||||
use bitcoin::{PublicKey, XOnlyPublicKey};
|
||||
use bitcoin::PublicKey;
|
||||
|
||||
use miniscript::descriptor::{DescriptorSinglePub, SinglePubKey, Wildcard};
|
||||
use miniscript::{Descriptor, DescriptorPublicKey, MiniscriptKey, ToPublicKey, TranslatePk};
|
||||
pub use miniscript::{
|
||||
descriptor::KeyMap, descriptor::Wildcard, Descriptor, DescriptorPublicKey, Legacy, Miniscript,
|
||||
ScriptContext, Segwitv0,
|
||||
};
|
||||
use miniscript::{MiniscriptKey, ToPublicKey, TranslatePk};
|
||||
|
||||
use crate::wallet::utils::SecpCtx;
|
||||
|
||||
@@ -128,44 +96,21 @@ impl<'s> MiniscriptKey for DerivedDescriptorKey<'s> {
|
||||
fn is_uncompressed(&self) -> bool {
|
||||
self.0.is_uncompressed()
|
||||
}
|
||||
fn serialized_len(&self) -> usize {
|
||||
self.0.serialized_len()
|
||||
}
|
||||
}
|
||||
|
||||
impl<'s> ToPublicKey for DerivedDescriptorKey<'s> {
|
||||
fn to_public_key(&self) -> PublicKey {
|
||||
match &self.0 {
|
||||
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
|
||||
key: SinglePubKey::XOnly(_),
|
||||
..
|
||||
}) => panic!("Found x-only public key in non-tr descriptor"),
|
||||
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
|
||||
key: SinglePubKey::FullKey(ref pk),
|
||||
..
|
||||
}) => *pk,
|
||||
DescriptorPublicKey::XPub(ref xpub) => PublicKey::new(
|
||||
DescriptorPublicKey::SinglePub(ref spub) => spub.key.to_public_key(),
|
||||
DescriptorPublicKey::XPub(ref xpub) => {
|
||||
xpub.xkey
|
||||
.derive_pub(self.1, &xpub.derivation_path)
|
||||
.expect("Shouldn't fail, only normal derivations")
|
||||
.public_key,
|
||||
),
|
||||
}
|
||||
}
|
||||
|
||||
fn to_x_only_pubkey(&self) -> XOnlyPublicKey {
|
||||
match &self.0 {
|
||||
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
|
||||
key: SinglePubKey::XOnly(ref pk),
|
||||
..
|
||||
}) => *pk,
|
||||
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
|
||||
key: SinglePubKey::FullKey(ref pk),
|
||||
..
|
||||
}) => XOnlyPublicKey::from(pk.inner),
|
||||
DescriptorPublicKey::XPub(ref xpub) => XOnlyPublicKey::from(
|
||||
xpub.xkey
|
||||
.derive_pub(self.1, &xpub.derivation_path)
|
||||
.expect("Shouldn't fail, only normal derivations")
|
||||
.public_key,
|
||||
),
|
||||
.public_key
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -174,19 +119,14 @@ impl<'s> ToPublicKey for DerivedDescriptorKey<'s> {
|
||||
}
|
||||
}
|
||||
|
||||
/// Utilities to derive descriptors
|
||||
///
|
||||
/// Check out the [module level] documentation for more.
|
||||
///
|
||||
/// [module level]: crate::descriptor::derived
|
||||
pub trait AsDerived {
|
||||
/// Derive a descriptor and transform all of its keys to `DerivedDescriptorKey`
|
||||
pub(crate) trait AsDerived {
|
||||
// Derive a descriptor and transform all of its keys to `DerivedDescriptorKey`
|
||||
fn as_derived<'s>(&self, index: u32, secp: &'s SecpCtx)
|
||||
-> Descriptor<DerivedDescriptorKey<'s>>;
|
||||
|
||||
/// Transform the keys into `DerivedDescriptorKey`.
|
||||
///
|
||||
/// Panics if the descriptor is not "fixed", i.e. if it's derivable
|
||||
// Transform the keys into `DerivedDescriptorKey`.
|
||||
//
|
||||
// Panics if the descriptor is not "fixed", i.e. if it's derivable
|
||||
fn as_derived_fixed<'s>(&self, secp: &'s SecpCtx) -> Descriptor<DerivedDescriptorKey<'s>>;
|
||||
}
|
||||
|
||||
|
||||
@@ -73,48 +73,6 @@ macro_rules! impl_top_level_pk {
|
||||
}};
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! impl_top_level_tr {
|
||||
( $internal_key:expr, $tap_tree:expr ) => {{
|
||||
use $crate::miniscript::descriptor::{
|
||||
Descriptor, DescriptorPublicKey, KeyMap, TapTree, Tr,
|
||||
};
|
||||
use $crate::miniscript::Tap;
|
||||
|
||||
#[allow(unused_imports)]
|
||||
use $crate::keys::{DescriptorKey, IntoDescriptorKey, ValidNetworks};
|
||||
|
||||
let secp = $crate::bitcoin::secp256k1::Secp256k1::new();
|
||||
|
||||
$internal_key
|
||||
.into_descriptor_key()
|
||||
.and_then(|key: DescriptorKey<Tap>| key.extract(&secp))
|
||||
.map_err($crate::descriptor::DescriptorError::Key)
|
||||
.and_then(|(pk, mut key_map, mut valid_networks)| {
|
||||
let tap_tree = $tap_tree.map(
|
||||
|(tap_tree, tree_keymap, tree_networks): (
|
||||
TapTree<DescriptorPublicKey>,
|
||||
KeyMap,
|
||||
ValidNetworks,
|
||||
)| {
|
||||
key_map.extend(tree_keymap.into_iter());
|
||||
valid_networks =
|
||||
$crate::keys::merge_networks(&valid_networks, &tree_networks);
|
||||
|
||||
tap_tree
|
||||
},
|
||||
);
|
||||
|
||||
Ok((
|
||||
Descriptor::<DescriptorPublicKey>::Tr(Tr::new(pk, tap_tree)?),
|
||||
key_map,
|
||||
valid_networks,
|
||||
))
|
||||
})
|
||||
}};
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! impl_leaf_opcode {
|
||||
@@ -126,7 +84,7 @@ macro_rules! impl_leaf_opcode {
|
||||
)
|
||||
.map_err($crate::descriptor::DescriptorError::Miniscript)
|
||||
.and_then(|minisc| {
|
||||
minisc.check_miniscript()?;
|
||||
Ok(minisc)
|
||||
})
|
||||
.map(|minisc| {
|
||||
@@ -150,7 +108,7 @@ macro_rules! impl_leaf_opcode_value {
|
||||
)
|
||||
.map_err($crate::descriptor::DescriptorError::Miniscript)
|
||||
.and_then(|minisc| {
|
||||
minisc.check_miniscript()?;
|
||||
Ok(minisc)
|
||||
})
|
||||
.map(|minisc| {
|
||||
@@ -174,7 +132,7 @@ macro_rules! impl_leaf_opcode_value_two {
|
||||
)
|
||||
.map_err($crate::descriptor::DescriptorError::Miniscript)
|
||||
.and_then(|minisc| {
|
||||
minisc.check_miniscript()?;
|
||||
Ok(minisc)
|
||||
})
|
||||
.map(|minisc| {
|
||||
@@ -207,7 +165,7 @@ macro_rules! impl_node_opcode_two {
|
||||
std::sync::Arc::new(b_minisc),
|
||||
))?;
|
||||
|
||||
minisc.check_miniscript()?;
|
||||
|
||||
Ok((minisc, a_keymap, $crate::keys::merge_networks(&a_networks, &b_networks)))
|
||||
})
|
||||
@@ -239,7 +197,7 @@ macro_rules! impl_node_opcode_three {
|
||||
std::sync::Arc::new(c_minisc),
|
||||
))?;
|
||||
|
||||
minisc.check_miniscript()?;
|
||||
|
||||
Ok((minisc, a_keymap, networks))
|
||||
})
|
||||
@@ -270,62 +228,6 @@ macro_rules! impl_sortedmulti {
|
||||
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! parse_tap_tree {
|
||||
( @merge $tree_a:expr, $tree_b:expr) => {{
|
||||
use std::sync::Arc;
|
||||
use $crate::miniscript::descriptor::TapTree;
|
||||
|
||||
$tree_a
|
||||
.and_then(|tree_a| Ok((tree_a, $tree_b?)))
|
||||
.and_then(|((a_tree, mut a_keymap, a_networks), (b_tree, b_keymap, b_networks))| {
|
||||
a_keymap.extend(b_keymap.into_iter());
|
||||
Ok((TapTree::Tree(Arc::new(a_tree), Arc::new(b_tree)), a_keymap, $crate::keys::merge_networks(&a_networks, &b_networks)))
|
||||
})
|
||||
|
||||
}};
|
||||
|
||||
// Two sub-trees
|
||||
( { { $( $tree_a:tt )* }, { $( $tree_b:tt )* } } ) => {{
|
||||
let tree_a = $crate::parse_tap_tree!( { $( $tree_a )* } );
|
||||
let tree_b = $crate::parse_tap_tree!( { $( $tree_b )* } );
|
||||
|
||||
$crate::parse_tap_tree!(@merge tree_a, tree_b)
|
||||
}};
|
||||
|
||||
// One leaf and a sub-tree
|
||||
( { $op_a:ident ( $( $minisc_a:tt )* ), { $( $tree_b:tt )* } } ) => {{
|
||||
let tree_a = $crate::parse_tap_tree!( $op_a ( $( $minisc_a )* ) );
|
||||
let tree_b = $crate::parse_tap_tree!( { $( $tree_b )* } );
|
||||
|
||||
$crate::parse_tap_tree!(@merge tree_a, tree_b)
|
||||
}};
|
||||
( { { $( $tree_a:tt )* }, $op_b:ident ( $( $minisc_b:tt )* ) } ) => {{
|
||||
let tree_a = $crate::parse_tap_tree!( { $( $tree_a )* } );
|
||||
let tree_b = $crate::parse_tap_tree!( $op_b ( $( $minisc_b )* ) );
|
||||
|
||||
$crate::parse_tap_tree!(@merge tree_a, tree_b)
|
||||
}};
|
||||
|
||||
// Two leaves
|
||||
( { $op_a:ident ( $( $minisc_a:tt )* ), $op_b:ident ( $( $minisc_b:tt )* ) } ) => {{
|
||||
let tree_a = $crate::parse_tap_tree!( $op_a ( $( $minisc_a )* ) );
|
||||
let tree_b = $crate::parse_tap_tree!( $op_b ( $( $minisc_b )* ) );
|
||||
|
||||
$crate::parse_tap_tree!(@merge tree_a, tree_b)
|
||||
}};
|
||||
|
||||
// Single leaf
|
||||
( $op:ident ( $( $minisc:tt )* ) ) => {{
|
||||
use std::sync::Arc;
|
||||
use $crate::miniscript::descriptor::TapTree;
|
||||
|
||||
$crate::fragment!( $op ( $( $minisc )* ) )
|
||||
.map(|(a_minisc, a_keymap, a_networks)| (TapTree::Leaf(Arc::new(a_minisc)), a_keymap, a_networks))
|
||||
}};
|
||||
}
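The recursive descent that `parse_tap_tree!` performs, dispatching on "two sub-trees", "leaf plus sub-tree", and "single leaf" shapes, can be illustrated with a much smaller declarative macro over a toy tree type. Everything here (`Tree`, `tree!`, `leaf`) is illustrative, not BDK's API:

```rust
// Toy binary tree standing in for `TapTree`.
#[derive(Debug, PartialEq)]
enum Tree {
    Leaf(u32),
    Node(Box<Tree>, Box<Tree>),
}

// Minimal recursive parser: either `leaf(n)` or `{ <tree> }, { <tree> }`,
// mirroring how parse_tap_tree! recurses into each braced sub-tree.
macro_rules! tree {
    ( { $( $a:tt )* } , { $( $b:tt )* } ) => {
        Tree::Node(
            Box::new(tree!( $( $a )* )),
            Box::new(tree!( $( $b )* )),
        )
    };
    ( leaf ( $n:expr ) ) => {
        Tree::Leaf($n)
    };
}
```

The real macro does the same shape-dispatch, but each merge step also combines key maps and intersects valid networks.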
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! apply_modifier {
|
||||
@@ -341,7 +243,7 @@ macro_rules! apply_modifier {
|
||||
),
|
||||
)?;
|
||||
|
||||
minisc.check_miniscript()?;
|
||||
|
||||
Ok((minisc, keymap, networks))
|
||||
})
|
||||
@@ -434,7 +336,7 @@ macro_rules! apply_modifier {
|
||||
/// syntax is more suitable for a fixed number of items known at compile time, while the other accepts a
|
||||
/// [`Vec`] of items, which makes it more suitable for writing dynamic descriptors.
|
||||
///
|
||||
/// They both produce the descriptor: `wsh(thresh(2,pk(...),s:pk(...),sndv:older(...)))`
|
||||
/// They both produce the descriptor: `wsh(thresh(2,pk(...),s:pk(...),sdv:older(...)))`
|
||||
///
|
||||
/// ```
|
||||
/// # use std::str::FromStr;
|
||||
@@ -447,7 +349,7 @@ macro_rules! apply_modifier {
|
||||
///
|
||||
/// let (descriptor_a, key_map_a, networks) = bdk::descriptor! {
|
||||
/// wsh (
|
||||
/// thresh(2, pk(my_key_1), s:pk(my_key_2), s:n:d:v:older(my_timelock))
|
||||
/// thresh(2, pk(my_key_1), s:pk(my_key_2), s:d:v:older(my_timelock))
|
||||
/// )
|
||||
/// }?;
|
||||
///
|
||||
@@ -455,7 +357,7 @@ macro_rules! apply_modifier {
|
||||
/// let b_items = vec![
|
||||
/// bdk::fragment!(pk(my_key_1))?,
|
||||
/// bdk::fragment!(s:pk(my_key_2))?,
|
||||
/// bdk::fragment!(s:n:d:v:older(my_timelock))?,
|
||||
/// bdk::fragment!(s:d:v:older(my_timelock))?,
|
||||
/// ];
|
||||
/// let (descriptor_b, mut key_map_b, networks) = bdk::descriptor!(wsh(thresh_vec(2, b_items)))?;
|
||||
///
|
||||
@@ -539,15 +441,6 @@ macro_rules! descriptor {
|
||||
( wsh ( $( $minisc:tt )* ) ) => ({
|
||||
$crate::impl_top_level_sh!(Wsh, new, new_sortedmulti, Segwitv0, $( $minisc )*)
|
||||
});
|
||||
|
||||
( tr ( $internal_key:expr ) ) => ({
|
||||
$crate::impl_top_level_tr!($internal_key, None)
|
||||
});
|
||||
( tr ( $internal_key:expr, $( $taptree:tt )* ) ) => ({
|
||||
let tap_tree = $crate::parse_tap_tree!( $( $taptree )* );
|
||||
tap_tree
|
||||
.and_then(|tap_tree| $crate::impl_top_level_tr!($internal_key, Some(tap_tree)))
|
||||
});
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
@@ -587,23 +480,6 @@ impl<A, B, C> From<(A, (B, (C, ())))> for TupleThree<A, B, C> {
|
||||
}
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! group_multi_keys {
|
||||
( $( $key:expr ),+ ) => {{
|
||||
use $crate::keys::IntoDescriptorKey;
|
||||
|
||||
let keys = vec![
|
||||
$(
|
||||
$key.into_descriptor_key(),
|
||||
)*
|
||||
];
|
||||
|
||||
keys.into_iter().collect::<Result<Vec<_>, _>>()
|
||||
.map_err($crate::descriptor::DescriptorError::Key)
|
||||
}};
|
||||
}
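`group_multi_keys!` leans on a standard-library idiom: collecting an iterator of `Result`s into `Result<Vec<_>, _>` short-circuits on the first `Err`, so one bad key fails the whole group. A standalone illustration with toy types:

```rust
// Collecting Vec<Result<K, E>> into Result<Vec<K>, E> stops at the first
// Err, which is how the macro surfaces the first key that failed to convert.
fn collect_keys(keys: Vec<Result<u32, String>>) -> Result<Vec<u32>, String> {
    keys.into_iter().collect()
}
```

In the macro, the resulting error is then wrapped via `.map_err(DescriptorError::Key)`.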
|
||||
|
||||
#[doc(hidden)]
|
||||
#[macro_export]
|
||||
macro_rules! fragment_internal {
|
||||
@@ -645,7 +521,7 @@ macro_rules! fragment_internal {
|
||||
// three operands it's (X, (X, (X, ()))), etc.
|
||||
//
|
||||
// To check that the right number of arguments has been passed we can "cast" those tuples to
|
||||
// more convenient structures like `TupleTwo`. If the conversion succeeds, the right number of
|
||||
// args was passed. Otherwise the compilation fails entirely.
|
||||
( @t $op:ident ( $( $args:tt )* ) $( $tail:tt )* ) => ({
|
||||
($crate::fragment!( $op ( $( $args )* ) ), $crate::fragment_internal!( @t $( $tail )* ))
|
||||
@@ -695,9 +571,8 @@ macro_rules! fragment {
|
||||
( pk ( $key:expr ) ) => ({
|
||||
$crate::fragment!(c:pk_k ( $key ))
|
||||
});
|
||||
( pk_h ( $key:expr ) ) => ({
|
||||
let secp = $crate::bitcoin::secp256k1::Secp256k1::new();
|
||||
$crate::keys::make_pkh($key, &secp)
|
||||
( pk_h ( $key_hash:expr ) ) => ({
|
||||
$crate::impl_leaf_opcode_value!(PkH, $key_hash)
|
||||
});
|
||||
( after ( $value:expr ) ) => ({
|
||||
$crate::impl_leaf_opcode_value!(After, $value)
|
||||
@@ -726,9 +601,6 @@ macro_rules! fragment {
|
||||
( and_or ( $( $inner:tt )* ) ) => ({
|
||||
$crate::impl_node_opcode_three!(AndOr, $( $inner )*)
|
||||
});
|
||||
( andor ( $( $inner:tt )* ) ) => ({
|
||||
$crate::impl_node_opcode_three!(AndOr, $( $inner )*)
|
||||
});
|
||||
( or_b ( $( $inner:tt )* ) ) => ({
|
||||
$crate::impl_node_opcode_two!(OrB, $( $inner )*)
|
||||
});
|
||||
@@ -764,22 +636,21 @@ macro_rules! fragment {
|
||||
.and_then(|items| $crate::fragment!(thresh_vec($thresh, items)))
|
||||
});
|
||||
( multi_vec ( $thresh:expr, $keys:expr ) ) => ({
|
||||
let secp = $crate::bitcoin::secp256k1::Secp256k1::new();
|
||||
|
||||
$crate::keys::make_multi($thresh, $crate::miniscript::Terminal::Multi, $keys, &secp)
|
||||
$crate::keys::make_multi($thresh, $keys)
|
||||
});
|
||||
( multi ( $thresh:expr $(, $key:expr )+ ) ) => ({
|
||||
$crate::group_multi_keys!( $( $key ),* )
|
||||
.and_then(|keys| $crate::fragment!( multi_vec ( $thresh, keys ) ))
|
||||
});
|
||||
( multi_a_vec ( $thresh:expr, $keys:expr ) ) => ({
|
||||
use $crate::keys::IntoDescriptorKey;
|
||||
let secp = $crate::bitcoin::secp256k1::Secp256k1::new();
|
||||
|
||||
$crate::keys::make_multi($thresh, $crate::miniscript::Terminal::MultiA, $keys, &secp)
|
||||
});
|
||||
( multi_a ( $thresh:expr $(, $key:expr )+ ) ) => ({
|
||||
$crate::group_multi_keys!( $( $key ),* )
|
||||
.and_then(|keys| $crate::fragment!( multi_a_vec ( $thresh, keys ) ))
|
||||
let keys = vec![
|
||||
$(
|
||||
$key.into_descriptor_key(),
|
||||
)*
|
||||
];
|
||||
|
||||
keys.into_iter().collect::<Result<Vec<_>, _>>()
|
||||
.map_err($crate::descriptor::DescriptorError::Key)
|
||||
.and_then(|keys| $crate::keys::make_multi($thresh, keys, &secp))
|
||||
});
|
||||
|
||||
// `sortedmulti()` is handled separately
|
||||
@@ -1173,15 +1044,13 @@ mod test {
|
||||
let private_key =
|
||||
PrivateKey::from_wif("cSQPHDBwXGjVzWRqAHm6zfvQhaTuj1f2bFH58h55ghbjtFwvmeXR").unwrap();
|
||||
let (descriptor, _, _) =
|
||||
descriptor!(wsh(thresh(2,n:d:v:older(1),s:pk(private_key),s:pk(private_key)))).unwrap();
|
||||
descriptor!(wsh(thresh(2,d:v:older(1),s:pk(private_key),s:pk(private_key)))).unwrap();
|
||||
|
||||
assert_eq!(descriptor.to_string(), "wsh(thresh(2,ndv:older(1),s:pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c),s:pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c)))#zzk3ux8g")
|
||||
assert_eq!(descriptor.to_string(), "wsh(thresh(2,dv:older(1),s:pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c),s:pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c)))#cfdcqs3s")
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[should_panic(
|
||||
expected = "Miniscript(ContextError(CompressedOnly(\"04b4632d08485ff1df2db55b9dafd23347d1c47a457072a1e87be26896549a87378ec38ff91d43e8c2092ebda601780485263da089465619e0358a5c1be7ac91f4\")))"
|
||||
)]
|
||||
#[should_panic(expected = "Miniscript(ContextError(CompressedOnly))")]
|
||||
fn test_dsl_miniscript_checks() {
|
||||
let mut uncompressed_pk =
|
||||
PrivateKey::from_wif("L5EZftvrYaSudiozVRzTqLcHLNDoVn7H5HSfM9BAN6tMJX8oTWz6").unwrap();
|
||||
@@ -1189,35 +1058,4 @@ mod test {
|
||||
|
||||
descriptor!(wsh(v: pk(uncompressed_pk))).unwrap();
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_dsl_tr_only_key() {
|
||||
let private_key =
|
||||
PrivateKey::from_wif("cSQPHDBwXGjVzWRqAHm6zfvQhaTuj1f2bFH58h55ghbjtFwvmeXR").unwrap();
|
||||
let (descriptor, _, _) = descriptor!(tr(private_key)).unwrap();
|
||||
|
||||
assert_eq!(
|
||||
descriptor.to_string(),
|
||||
"tr(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c)#heq9m95v"
|
||||
)
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_dsl_tr_simple_tree() {
|
||||
let private_key =
|
||||
PrivateKey::from_wif("cSQPHDBwXGjVzWRqAHm6zfvQhaTuj1f2bFH58h55ghbjtFwvmeXR").unwrap();
|
||||
let (descriptor, _, _) =
|
||||
descriptor!(tr(private_key, { pk(private_key), pk(private_key) })).unwrap();
|
||||
|
||||
assert_eq!(descriptor.to_string(), "tr(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c,{pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c),pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c)})#xy5fjw6d")
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_dsl_tr_single_leaf() {
|
||||
let private_key =
|
||||
PrivateKey::from_wif("cSQPHDBwXGjVzWRqAHm6zfvQhaTuj1f2bFH58h55ghbjtFwvmeXR").unwrap();
|
||||
let (descriptor, _, _) = descriptor!(tr(private_key, pk(private_key))).unwrap();
|
||||
|
||||
assert_eq!(descriptor.to_string(), "tr(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c,pk(02e96fe52ef0e22d2f131dd425ce1893073a3c6ad20e8cac36726393dfb4856a4c))#lzl2vmc7")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -14,25 +14,23 @@
|
||||
//! This module contains generic utilities to work with descriptors, plus some re-exported types
|
||||
//! from [`miniscript`].
|
||||
|
||||
use std::collections::{BTreeMap, HashSet};
|
||||
use std::collections::{BTreeMap, HashMap, HashSet};
|
||||
use std::ops::Deref;
|
||||
|
||||
use bitcoin::util::bip32::{ChildNumber, DerivationPath, ExtendedPubKey, Fingerprint, KeySource};
|
||||
use bitcoin::util::{psbt, taproot};
|
||||
use bitcoin::{secp256k1, PublicKey, XOnlyPublicKey};
|
||||
use bitcoin::{Network, Script, TxOut};
|
||||
|
||||
use miniscript::descriptor::{DescriptorType, InnerXKey, SinglePubKey};
|
||||
pub use miniscript::{
|
||||
descriptor::DescriptorXKey, descriptor::KeyMap, descriptor::Wildcard, Descriptor,
|
||||
DescriptorPublicKey, Legacy, Miniscript, ScriptContext, Segwitv0,
|
||||
use bitcoin::util::bip32::{
|
||||
ChildNumber, DerivationPath, ExtendedPrivKey, ExtendedPubKey, Fingerprint, KeySource,
|
||||
};
|
||||
use bitcoin::util::psbt;
|
||||
use bitcoin::{Network, PublicKey, Script, TxOut};
|
||||
|
||||
use miniscript::descriptor::{DescriptorPublicKey, DescriptorType, DescriptorXKey, Wildcard};
|
||||
pub use miniscript::{descriptor::KeyMap, Descriptor, Legacy, Miniscript, ScriptContext, Segwitv0};
|
||||
use miniscript::{DescriptorTrait, ForEachKey, TranslatePk};
|
||||
|
||||
use crate::descriptor::policy::BuildSatisfaction;
|
||||
|
||||
pub mod checksum;
|
||||
pub mod derived;
|
||||
pub(crate) mod derived;
|
||||
#[doc(hidden)]
|
||||
pub mod dsl;
|
||||
pub mod error;
|
||||
@@ -40,7 +38,8 @@ pub mod policy;
|
||||
pub mod template;
|
||||
|
||||
pub use self::checksum::get_checksum;
|
||||
pub use self::derived::{AsDerived, DerivedDescriptorKey};
|
||||
use self::derived::AsDerived;
|
||||
pub use self::derived::DerivedDescriptorKey;
|
||||
pub use self::error::Error as DescriptorError;
|
||||
pub use self::policy::Policy;
|
||||
use self::template::DescriptorTemplateOut;
|
||||
@@ -59,14 +58,7 @@ pub type DerivedDescriptor<'s> = Descriptor<DerivedDescriptorKey<'s>>;
|
||||
///
|
||||
/// [`psbt::Input`]: bitcoin::util::psbt::Input
|
||||
/// [`psbt::Output`]: bitcoin::util::psbt::Output
|
||||
pub type HdKeyPaths = BTreeMap<secp256k1::PublicKey, KeySource>;
|
||||
|
||||
/// Alias for the type of maps that represent taproot key origins in a [`psbt::Input`] or
|
||||
/// [`psbt::Output`]
|
||||
///
|
||||
/// [`psbt::Input`]: bitcoin::util::psbt::Input
|
||||
/// [`psbt::Output`]: bitcoin::util::psbt::Output
|
||||
pub type TapKeyOrigins = BTreeMap<bitcoin::XOnlyPublicKey, (Vec<taproot::TapLeafHash>, KeySource)>;
|
||||
pub type HdKeyPaths = BTreeMap<PublicKey, KeySource>;
|
||||
|
||||
/// Trait for types which can be converted into an [`ExtendedDescriptor`] and a [`KeyMap`] usable by a wallet in a specific [`Network`]
|
||||
pub trait IntoWalletDescriptor {
|
||||
@@ -134,13 +126,13 @@ impl IntoWalletDescriptor for (ExtendedDescriptor, KeyMap) {
|
||||
|
||||
let check_key = |pk: &DescriptorPublicKey| {
|
||||
let (pk, _, networks) = if self.0.is_witness() {
|
||||
let descriptor_key: DescriptorKey<miniscript::Segwitv0> =
|
||||
pk.clone().into_descriptor_key()?;
|
||||
descriptor_key.extract(secp)?
|
||||
} else {
|
||||
let descriptor_key: DescriptorKey<miniscript::Legacy> =
|
||||
pk.clone().into_descriptor_key()?;
|
||||
descriptor_key.extract(secp)?
|
||||
};
|
||||
|
||||
if networks.contains(&network) {
|
||||
@@ -246,13 +238,13 @@ pub(crate) fn into_wallet_descriptor_checked<T: IntoWalletDescriptor>(
|
||||
#[doc(hidden)]
|
||||
/// Used internally mainly by the `descriptor!()` and `fragment!()` macros
|
||||
pub trait CheckMiniscript<Ctx: miniscript::ScriptContext> {
|
||||
fn check_miniscript(&self) -> Result<(), miniscript::Error>;
|
||||
}
|
||||
|
||||
impl<Ctx: miniscript::ScriptContext, Pk: miniscript::MiniscriptKey> CheckMiniscript<Ctx>
|
||||
for miniscript::Miniscript<Pk, Ctx>
|
||||
{
|
||||
fn check_miniscript(&self) -> Result<(), miniscript::Error> {
|
||||
Ctx::check_global_validity(self)?;
|
||||
|
||||
Ok(())
|
||||
@@ -275,10 +267,41 @@ pub(crate) trait XKeyUtils {
|
||||
fn root_fingerprint(&self, secp: &SecpCtx) -> Fingerprint;
|
||||
}
|
||||
|
||||
impl<T> XKeyUtils for DescriptorXKey<T>
|
||||
where
|
||||
T: InnerXKey,
|
||||
{
|
||||
// FIXME: `InnerXKey` was made private in rust-miniscript, so we have to implement this manually on
|
||||
// both `ExtendedPubKey` and `ExtendedPrivKey`.
|
||||
//
|
||||
// Revert back to using the trait once https://github.com/rust-bitcoin/rust-miniscript/pull/230 is
|
||||
// released
|
||||
impl XKeyUtils for DescriptorXKey<ExtendedPubKey> {
|
||||
fn full_path(&self, append: &[ChildNumber]) -> DerivationPath {
|
||||
let full_path = match self.origin {
|
||||
Some((_, ref path)) => path
|
||||
.into_iter()
|
||||
.chain(self.derivation_path.into_iter())
|
||||
.cloned()
|
||||
.collect(),
|
||||
None => self.derivation_path.clone(),
|
||||
};
|
||||
|
||||
if self.wildcard != Wildcard::None {
|
||||
full_path
|
||||
.into_iter()
|
||||
.chain(append.iter())
|
||||
.cloned()
|
||||
.collect()
|
||||
} else {
|
||||
full_path
|
||||
}
|
||||
}
|
||||
|
||||
fn root_fingerprint(&self, _: &SecpCtx) -> Fingerprint {
|
||||
match self.origin {
|
||||
Some((fingerprint, _)) => fingerprint,
|
||||
None => self.xkey.fingerprint(),
|
||||
}
|
||||
}
|
||||
}
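The `full_path` logic above concatenates the key's origin path with its own derivation path, and appends the extra child steps only when the key has a wildcard left to fill in. A sketch over plain index vectors, with `ChildNumber` replaced by `u32` purely for illustration:

```rust
// Sketch of `full_path`: origin path (if any) + derivation path, then the
// appended steps only when the key is derivable (has a wildcard).
fn full_path(
    origin: Option<&[u32]>,
    derivation_path: &[u32],
    has_wildcard: bool,
    append: &[u32],
) -> Vec<u32> {
    let mut path: Vec<u32> = origin.unwrap_or(&[]).to_vec();
    path.extend_from_slice(derivation_path);
    if has_wildcard {
        path.extend_from_slice(append);
    }
    path
}
```

When there is no wildcard the key is already fully derived, so the appended steps are deliberately dropped rather than producing an over-long path.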
|
||||
impl XKeyUtils for DescriptorXKey<ExtendedPrivKey> {
|
||||
fn full_path(&self, append: &[ChildNumber]) -> DerivationPath {
|
||||
let full_path = match self.origin {
|
||||
Some((_, ref path)) => path
|
||||
@@ -303,35 +326,23 @@ where
|
||||
fn root_fingerprint(&self, secp: &SecpCtx) -> Fingerprint {
|
||||
match self.origin {
|
||||
Some((fingerprint, _)) => fingerprint,
|
||||
None => self.xkey.xkey_fingerprint(secp),
|
||||
None => self.xkey.fingerprint(secp),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) trait DerivedDescriptorMeta {
|
||||
fn get_hd_keypaths(&self, secp: &SecpCtx) -> HdKeyPaths;
|
||||
fn get_tap_key_origins(&self, secp: &SecpCtx) -> TapKeyOrigins;
|
||||
fn get_hd_keypaths(&self, secp: &SecpCtx) -> Result<HdKeyPaths, DescriptorError>;
|
||||
}
|
||||
|
||||
pub(crate) trait DescriptorMeta {
|
||||
fn is_witness(&self) -> bool;
|
||||
fn is_taproot(&self) -> bool;
|
||||
fn get_extended_keys(&self) -> Result<Vec<DescriptorXKey<ExtendedPubKey>>, DescriptorError>;
|
||||
fn derive_from_hd_keypaths<'s>(
|
||||
&self,
|
||||
hd_keypaths: &HdKeyPaths,
|
||||
secp: &'s SecpCtx,
|
||||
) -> Option<DerivedDescriptor<'s>>;
|
||||
fn derive_from_tap_key_origins<'s>(
|
||||
&self,
|
||||
tap_key_origins: &TapKeyOrigins,
|
||||
secp: &'s SecpCtx,
|
||||
) -> Option<DerivedDescriptor<'s>>;
|
||||
fn derive_from_psbt_key_origins<'s>(
|
||||
&self,
|
||||
key_origins: BTreeMap<Fingerprint, (&DerivationPath, SinglePubKey)>,
|
||||
secp: &'s SecpCtx,
|
||||
) -> Option<DerivedDescriptor<'s>>;
|
||||
fn derive_from_psbt_input<'s>(
|
||||
&self,
|
||||
psbt_input: &psbt::Input,
|
||||
@@ -348,34 +359,23 @@ pub(crate) trait DescriptorScripts {
|
||||
impl<'s> DescriptorScripts for DerivedDescriptor<'s> {
|
||||
fn psbt_redeem_script(&self) -> Option<Script> {
|
||||
match self.desc_type() {
|
||||
DescriptorType::ShWpkh => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::ShWsh => Some(self.explicit_script().unwrap().to_v0_p2wsh()),
|
||||
DescriptorType::Sh => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::Bare => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::ShSortedMulti => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::ShWshSortedMulti => Some(self.explicit_script().unwrap().to_v0_p2wsh()),
|
||||
DescriptorType::Pkh
|
||||
| DescriptorType::Wpkh
|
||||
| DescriptorType::Tr
|
||||
| DescriptorType::Wsh
|
||||
| DescriptorType::WshSortedMulti => None,
|
||||
DescriptorType::ShWpkh => Some(self.explicit_script()),
|
||||
DescriptorType::ShWsh => Some(self.explicit_script().to_v0_p2wsh()),
|
||||
DescriptorType::Sh => Some(self.explicit_script()),
|
||||
DescriptorType::Bare => Some(self.explicit_script()),
|
||||
DescriptorType::ShSortedMulti => Some(self.explicit_script()),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
fn psbt_witness_script(&self) -> Option<Script> {
|
||||
match self.desc_type() {
|
||||
DescriptorType::Wsh => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::ShWsh => Some(self.explicit_script().unwrap()),
|
||||
DescriptorType::Wsh => Some(self.explicit_script()),
|
||||
DescriptorType::ShWsh => Some(self.explicit_script()),
|
||||
DescriptorType::WshSortedMulti | DescriptorType::ShWshSortedMulti => {
|
||||
Some(self.explicit_script().unwrap())
|
||||
Some(self.explicit_script())
|
||||
}
|
||||
DescriptorType::Bare
|
||||
| DescriptorType::Sh
|
||||
| DescriptorType::Pkh
|
||||
| DescriptorType::Wpkh
|
||||
| DescriptorType::ShSortedMulti
|
||||
| DescriptorType::Tr
|
||||
| DescriptorType::ShWpkh => None,
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
}
|
@@ -393,10 +393,6 @@ impl DescriptorMeta for ExtendedDescriptor {
         )
     }
 
-    fn is_taproot(&self) -> bool {
-        self.desc_type() == DescriptorType::Tr
-    }
-
     fn get_extended_keys(&self) -> Result<Vec<DescriptorXKey<ExtendedPubKey>>, DescriptorError> {
         let mut answer = Vec::new();
 
@@ -411,122 +407,59 @@ impl DescriptorMeta for ExtendedDescriptor {
         Ok(answer)
     }
 
-    fn derive_from_psbt_key_origins<'s>(
-        &self,
-        key_origins: BTreeMap<Fingerprint, (&DerivationPath, SinglePubKey)>,
-        secp: &'s SecpCtx,
-    ) -> Option<DerivedDescriptor<'s>> {
-        // Ensure that deriving `xpub` with `path` yields `expected`
-        let verify_key = |xpub: &DescriptorXKey<ExtendedPubKey>,
-                          path: &DerivationPath,
-                          expected: &SinglePubKey| {
-            let derived = xpub
-                .xkey
-                .derive_pub(secp, path)
-                .expect("The path should never contain hardened derivation steps")
-                .public_key;
-
-            match expected {
-                SinglePubKey::FullKey(pk) if &PublicKey::new(derived) == pk => true,
-                SinglePubKey::XOnly(pk) if &XOnlyPublicKey::from(derived) == pk => true,
-                _ => false,
-            }
-        };
-
-        let mut path_found = None;
-
-        // using `for_any_key` should make this stop as soon as we return `true`
-        self.for_any_key(|key| {
-            if let DescriptorPublicKey::XPub(xpub) = key.as_key().deref() {
-                // Check if the key matches one entry in our `key_origins`. If it does, `matches()` will
-                // return the "prefix" that matched, so we remove that prefix from the full path
-                // found in `key_origins` and save it in `derive_path`. We expect this to be a derivation
-                // path of length 1 if the key is `wildcard` and an empty path otherwise.
-                let root_fingerprint = xpub.root_fingerprint(secp);
-                let derive_path = key_origins
-                    .get_key_value(&root_fingerprint)
-                    .and_then(|(fingerprint, (path, expected))| {
-                        xpub.matches(&(*fingerprint, (*path).clone()), secp)
-                            .zip(Some((path, expected)))
-                    })
-                    .and_then(|(prefix, (full_path, expected))| {
-                        let derive_path = full_path
-                            .into_iter()
-                            .skip(prefix.into_iter().count())
-                            .cloned()
-                            .collect::<DerivationPath>();
-
-                        // `derive_path` only contains the replacement index for the wildcard, if present, or
-                        // an empty path for fixed descriptors. To verify the key we also need the normal steps
-                        // that come before the wildcard, so we take them directly from `xpub` and then append
-                        // the final index
-                        if verify_key(
-                            xpub,
-                            &xpub.derivation_path.extend(derive_path.clone()),
-                            expected,
-                        ) {
-                            Some(derive_path)
-                        } else {
-                            log::debug!(
-                                "Key `{}` derived with {} yields an unexpected key",
-                                root_fingerprint,
-                                derive_path
-                            );
-                            None
-                        }
-                    });
-
-                match derive_path {
-                    Some(path) if xpub.wildcard != Wildcard::None && path.len() == 1 => {
-                        // Ignore hardened wildcards
-                        if let ChildNumber::Normal { index } = path[0] {
-                            path_found = Some(index);
-                            return true;
-                        }
-                    }
-                    Some(path) if xpub.wildcard == Wildcard::None && path.is_empty() => {
-                        path_found = Some(0);
-                        return true;
-                    }
-                    _ => {}
-                }
-            }
-
-            false
-        });
-
-        path_found.map(|path| self.as_derived(path, secp))
-    }
-
     fn derive_from_hd_keypaths<'s>(
         &self,
         hd_keypaths: &HdKeyPaths,
         secp: &'s SecpCtx,
     ) -> Option<DerivedDescriptor<'s>> {
-        // "Convert" an hd_keypaths map to the format required by `derive_from_psbt_key_origins`
-        let key_origins = hd_keypaths
-            .iter()
-            .map(|(pk, (fingerprint, path))| {
-                (
-                    *fingerprint,
-                    (path, SinglePubKey::FullKey(PublicKey::new(*pk))),
-                )
-            })
-            .collect();
-        self.derive_from_psbt_key_origins(key_origins, secp)
-    }
-
-    fn derive_from_tap_key_origins<'s>(
-        &self,
-        tap_key_origins: &TapKeyOrigins,
-        secp: &'s SecpCtx,
-    ) -> Option<DerivedDescriptor<'s>> {
-        // "Convert" a tap_key_origins map to the format required by `derive_from_psbt_key_origins`
-        let key_origins = tap_key_origins
-            .iter()
-            .map(|(pk, (_, (fingerprint, path)))| (*fingerprint, (path, SinglePubKey::XOnly(*pk))))
-            .collect();
-        self.derive_from_psbt_key_origins(key_origins, secp)
+        let index: HashMap<_, _> = hd_keypaths.values().map(|(a, b)| (a, b)).collect();
+
+        let mut path_found = None;
+        self.for_each_key(|key| {
+            if path_found.is_some() {
+                // already found a matching path, we are done
+                return true;
+            }
+
+            if let DescriptorPublicKey::XPub(xpub) = key.as_key().deref() {
+                // Check if the key matches one entry in our `index`. If it does, `matches()` will
+                // return the "prefix" that matched, so we remove that prefix from the full path
+                // found in `index` and save it in `derive_path`. We expect this to be a derivation
+                // path of length 1 if the key is `wildcard` and an empty path otherwise.
+                let root_fingerprint = xpub.root_fingerprint(secp);
+                let derivation_path: Option<Vec<ChildNumber>> = index
+                    .get_key_value(&root_fingerprint)
+                    .and_then(|(fingerprint, path)| {
+                        xpub.matches(&(**fingerprint, (*path).clone()), secp)
+                    })
+                    .map(|prefix| {
+                        index
+                            .get(&xpub.root_fingerprint(secp))
+                            .unwrap()
+                            .into_iter()
+                            .skip(prefix.into_iter().count())
+                            .cloned()
+                            .collect()
+                    });
+
+                match derivation_path {
+                    Some(path) if xpub.wildcard != Wildcard::None && path.len() == 1 => {
+                        // Ignore hardened wildcards
+                        if let ChildNumber::Normal { index } = path[0] {
+                            path_found = Some(index)
+                        }
+                    }
+                    Some(path) if xpub.wildcard == Wildcard::None && path.is_empty() => {
+                        path_found = Some(0)
+                    }
+                    _ => {}
+                }
+            }
+
+            true
+        });
+
+        path_found.map(|path| self.as_derived(path, secp))
     }
 
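The comments in `derive_from_hd_keypaths` describe stripping the matched origin "prefix" from the full derivation path, leaving either a single wildcard index or an empty path for a fixed descriptor. A standalone sketch of just that step, using plain `u32` indices instead of `ChildNumber` (the `remaining_path` helper is hypothetical, not part of BDK):

```rust
// Hypothetical sketch of the prefix-stripping step: drop the matched prefix
// from the full path found in the PSBT key origins.
fn remaining_path(full_path: &[u32], prefix: &[u32]) -> Vec<u32> {
    full_path.iter().skip(prefix.len()).cloned().collect()
}

fn main() {
    // wildcard descriptor: origin path plus one derived index left over
    let full = [84, 1, 0, 0, 5];
    let prefix = [84, 1, 0, 0];
    assert_eq!(remaining_path(&full, &prefix), vec![5]);

    // fixed descriptor: the whole path is the prefix, nothing remains
    assert_eq!(remaining_path(&prefix, &prefix), Vec::<u32>::new());
}
```

The length-1 result is the candidate index substituted for the wildcard; an empty result means the descriptor derives at a fixed index (treated as index 0 above).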
     fn derive_from_psbt_input<'s>(
@@ -538,9 +471,6 @@ impl DescriptorMeta for ExtendedDescriptor {
         if let Some(derived) = self.derive_from_hd_keypaths(&psbt_input.bip32_derivation, secp) {
             return Some(derived);
         }
-        if let Some(derived) = self.derive_from_tap_key_origins(&psbt_input.tap_key_origins, secp) {
-            return Some(derived);
-        }
         if self.is_deriveable() {
             // We can't try to bruteforce the derivation index, exit here
             return None;
@@ -549,10 +479,7 @@ impl DescriptorMeta for ExtendedDescriptor {
         let descriptor = self.as_derived_fixed(secp);
         match descriptor.desc_type() {
             // TODO: add pk() here
-            DescriptorType::Pkh
-            | DescriptorType::Wpkh
-            | DescriptorType::ShWpkh
-            | DescriptorType::Tr
+            DescriptorType::Pkh | DescriptorType::Wpkh | DescriptorType::ShWpkh
                 if utxo.is_some()
                     && descriptor.script_pubkey() == utxo.as_ref().unwrap().script_pubkey =>
             {
@@ -560,7 +487,7 @@ impl DescriptorMeta for ExtendedDescriptor {
             }
             DescriptorType::Bare | DescriptorType::Sh | DescriptorType::ShSortedMulti
                 if psbt_input.redeem_script.is_some()
-                    && &descriptor.explicit_script().unwrap()
+                    && &descriptor.explicit_script()
                         == psbt_input.redeem_script.as_ref().unwrap() =>
             {
                 Some(descriptor)
@@ -570,7 +497,7 @@ impl DescriptorMeta for ExtendedDescriptor {
             | DescriptorType::ShWshSortedMulti
             | DescriptorType::WshSortedMulti
                 if psbt_input.witness_script.is_some()
-                    && &descriptor.explicit_script().unwrap()
+                    && &descriptor.explicit_script()
                         == psbt_input.witness_script.as_ref().unwrap() =>
             {
                 Some(descriptor)
@@ -581,7 +508,7 @@ impl DescriptorMeta for ExtendedDescriptor {
 }
 
 impl<'s> DerivedDescriptorMeta for DerivedDescriptor<'s> {
-    fn get_hd_keypaths(&self, secp: &SecpCtx) -> HdKeyPaths {
+    fn get_hd_keypaths(&self, secp: &SecpCtx) -> Result<HdKeyPaths, DescriptorError> {
         let mut answer = BTreeMap::new();
         self.for_each_key(|key| {
             if let DescriptorPublicKey::XPub(xpub) = key.as_key().deref() {
@@ -599,64 +526,7 @@ impl<'s> DerivedDescriptorMeta for DerivedDescriptor<'s> {
             true
         });
 
-        answer
-    }
-
-    fn get_tap_key_origins(&self, secp: &SecpCtx) -> TapKeyOrigins {
-        use miniscript::ToPublicKey;
-
-        let mut answer = BTreeMap::new();
-        let mut insert_path = |pk: &DerivedDescriptorKey<'_>, lh| {
-            let key_origin = match pk.deref() {
-                DescriptorPublicKey::XPub(xpub) => {
-                    Some((xpub.root_fingerprint(secp), xpub.full_path(&[])))
-                }
-                DescriptorPublicKey::SinglePub(_) => None,
-            };
-
-            // If this is the internal key, we only insert the key origin if it's not None.
-            // For keys found in the tap tree we always insert a key origin (because the signer
-            // looks for it to know which leaves to sign for), even though it may be None
-            match (lh, key_origin) {
-                (None, Some(ko)) => {
-                    answer
-                        .entry(pk.to_x_only_pubkey())
-                        .or_insert_with(|| (vec![], ko));
-                }
-                (Some(lh), origin) => {
-                    answer
-                        .entry(pk.to_x_only_pubkey())
-                        .or_insert_with(|| (vec![], origin.unwrap_or_default()))
-                        .0
-                        .push(lh);
-                }
-                _ => {}
-            }
-        };
-
-        if let Descriptor::Tr(tr) = &self {
-            // Internal key first, then iterate the scripts
-            insert_path(tr.internal_key(), None);
-
-            for (_, ms) in tr.iter_scripts() {
-                // Assume always the same leaf version
-                let leaf_hash = taproot::TapLeafHash::from_script(
-                    &ms.encode(),
-                    taproot::LeafVersion::TapScript,
-                );
-
-                for key in ms.iter_pk_pkh() {
-                    let key = match key {
-                        miniscript::miniscript::iter::PkPkh::PlainPubkey(pk) => pk,
-                        miniscript::miniscript::iter::PkPkh::HashedPubkey(pk) => pk,
-                    };
-
-                    insert_path(&key, Some(leaf_hash));
-                }
-            }
-        }
-
-        answer
+        Ok(answer)
     }
 }
 
@@ -668,7 +538,6 @@ mod test {
     use bitcoin::hashes::hex::FromHex;
     use bitcoin::secp256k1::Secp256k1;
     use bitcoin::util::{bip32, psbt};
-    use bitcoin::Script;
 
     use super::*;
     use crate::psbt::PsbtUtils;
@@ -798,7 +667,7 @@ mod test {
 
         // make a descriptor out of it
         let desc = crate::descriptor!(wpkh(key)).unwrap();
-        // this should convert the key that supports "any_network" to the right network (testnet)
+        // this should conver the key that supports "any_network" to the right network (testnet)
         let (wallet_desc, _) = desc
             .into_wallet_descriptor(&secp, Network::Testnet)
             .unwrap();
@@ -932,22 +801,4 @@ mod test {
             DescriptorError::DuplicatedKeys
         ));
     }
-
-    #[test]
-    fn test_sh_wsh_sortedmulti_redeemscript() {
-        use super::{AsDerived, DescriptorScripts};
-
-        let secp = Secp256k1::new();
-
-        let descriptor = "sh(wsh(sortedmulti(3,tpubDEsqS36T4DVsKJd9UH8pAKzrkGBYPLEt9jZMwpKtzh1G6mgYehfHt9WCgk7MJG5QGSFWf176KaBNoXbcuFcuadAFKxDpUdMDKGBha7bY3QM/0/*,tpubDF3cpwfs7fMvXXuoQbohXtLjNM6ehwYT287LWtmLsd4r77YLg6MZg4vTETx5MSJ2zkfigbYWu31VA2Z2Vc1cZugCYXgS7FQu6pE8V6TriEH/0/*,tpubDE1SKfcW76Tb2AASv5bQWMuScYNAdoqLHoexw13sNDXwmUhQDBbCD3QAedKGLhxMrWQdMDKENzYtnXPDRvexQPNuDrLj52wAjHhNEm8sJ4p/0/*,tpubDFLc6oXwJmhm3FGGzXkfJNTh2KitoY3WhmmQvuAjMhD8YbyWn5mAqckbxXfm2etM3p5J6JoTpSrMqRSTfMLtNW46poDaEZJ1kjd3csRSjwH/0/*,tpubDEWD9NBeWP59xXmdqSNt4VYdtTGwbpyP8WS962BuqpQeMZmX9Pur14dhXdZT5a7wR1pK6dPtZ9fP5WR493hPzemnBvkfLLYxnUjAKj1JCQV/0/*,tpubDEHyZkkwd7gZWCTgQuYQ9C4myF2hMEmyHsBCCmLssGqoqUxeT3gzohF5uEVURkf9TtmeepJgkSUmteac38FwZqirjApzNX59XSHLcwaTZCH/0/*,tpubDEqLouCekwnMUWN486kxGzD44qVgeyuqHyxUypNEiQt5RnUZNJe386TKPK99fqRV1vRkZjYAjtXGTECz98MCsdLcnkM67U6KdYRzVubeCgZ/0/*)))";
-        let (descriptor, _) =
-            into_wallet_descriptor_checked(descriptor, &secp, Network::Testnet).unwrap();
-
-        let descriptor = descriptor.as_derived(0, &secp);
-
-        let script = Script::from_str("5321022f533b667e2ea3b36e21961c9fe9dca340fbe0af5210173a83ae0337ab20a57621026bb53a98e810bd0ee61a0ed1164ba6c024786d76554e793e202dc6ce9c78c4ea2102d5b8a7d66a41ffdb6f4c53d61994022e886b4f45001fb158b95c9164d45f8ca3210324b75eead2c1f9c60e8adeb5e7009fec7a29afcdb30d829d82d09562fe8bae8521032d34f8932200833487bd294aa219dcbe000b9f9b3d824799541430009f0fa55121037468f8ea99b6c64788398b5ad25480cad08f4b0d65be54ce3a55fd206b5ae4722103f72d3d96663b0ea99b0aeb0d7f273cab11a8de37885f1dddc8d9112adb87169357ae").unwrap();
-
-        assert_eq!(descriptor.psbt_redeem_script(), Some(script.to_v0_p2wsh()));
-        assert_eq!(descriptor.psbt_witness_script(), Some(script));
-    }
 }
@@ -21,7 +21,6 @@
 //! ```
 //! # use std::sync::Arc;
 //! # use bdk::descriptor::*;
 //! # use bdk::wallet::signer::*;
-//! # use bdk::bitcoin::secp256k1::Secp256k1;
 //! use bdk::descriptor::policy::BuildSatisfaction;
 //! let secp = Secp256k1::new();
@@ -30,7 +29,7 @@
 //! let (extended_desc, key_map) = ExtendedDescriptor::parse_descriptor(&secp, desc)?;
 //! println!("{:?}", extended_desc);
 //!
-//! let signers = Arc::new(SignersContainer::build(key_map, &extended_desc, &secp));
+//! let signers = Arc::new(key_map.into());
 //! let policy = extended_desc.extract_policy(&signers, BuildSatisfaction::None, &secp)?;
 //! println!("policy: {}", serde_json::to_string(&policy)?);
 //! # Ok::<(), bdk::Error>(())
@@ -45,64 +44,68 @@ use serde::{Serialize, Serializer};
 
 use bitcoin::hashes::*;
 use bitcoin::util::bip32::Fingerprint;
-use bitcoin::{PublicKey, XOnlyPublicKey};
+use bitcoin::PublicKey;
 
-use miniscript::descriptor::{
-    DescriptorPublicKey, DescriptorSinglePub, ShInner, SinglePubKey, SortedMultiVec, WshInner,
-};
-use miniscript::{
-    Descriptor, Miniscript, MiniscriptKey, Satisfier, ScriptContext, Terminal, ToPublicKey,
-};
+use miniscript::descriptor::{DescriptorPublicKey, ShInner, SortedMultiVec, WshInner};
+use miniscript::{Descriptor, Miniscript, MiniscriptKey, Satisfier, ScriptContext, Terminal};
 
 #[allow(unused_imports)]
 use log::{debug, error, info, trace};
 
-use crate::descriptor::ExtractPolicy;
-use crate::keys::ExtScriptContext;
+use crate::descriptor::{DerivedDescriptorKey, ExtractPolicy};
 use crate::wallet::signer::{SignerId, SignersContainer};
 use crate::wallet::utils::{self, After, Older, SecpCtx};
 
 use super::checksum::get_checksum;
 use super::error::Error;
 use super::XKeyUtils;
-use bitcoin::util::psbt::{Input as PsbtInput, PartiallySignedTransaction as Psbt};
+use bitcoin::util::psbt::PartiallySignedTransaction as Psbt;
 use miniscript::psbt::PsbtInputSatisfier;
 
-/// A unique identifier for a key
-#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub enum PkOrF {
-    /// A legacy public key
-    Pubkey(PublicKey),
-    /// A x-only public key
-    XOnlyPubkey(XOnlyPublicKey),
-    /// An extended key fingerprint
-    Fingerprint(Fingerprint),
-}
+/// Raw public key or extended key fingerprint
+#[derive(Debug, Clone, Default, Serialize)]
+pub struct PkOrF {
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pubkey: Option<PublicKey>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pubkey_hash: Option<hash160::Hash>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    fingerprint: Option<Fingerprint>,
+}
 
 impl PkOrF {
     fn from_key(k: &DescriptorPublicKey, secp: &SecpCtx) -> Self {
         match k {
-            DescriptorPublicKey::SinglePub(DescriptorSinglePub {
-                key: SinglePubKey::FullKey(pk),
-                ..
-            }) => PkOrF::Pubkey(*pk),
-            DescriptorPublicKey::SinglePub(DescriptorSinglePub {
-                key: SinglePubKey::XOnly(pk),
-                ..
-            }) => PkOrF::XOnlyPubkey(*pk),
-            DescriptorPublicKey::XPub(xpub) => PkOrF::Fingerprint(xpub.root_fingerprint(secp)),
+            DescriptorPublicKey::SinglePub(pubkey) => PkOrF {
+                pubkey: Some(pubkey.key),
+                ..Default::default()
+            },
+            DescriptorPublicKey::XPub(xpub) => PkOrF {
+                fingerprint: Some(xpub.root_fingerprint(secp)),
+                ..Default::default()
+            },
         }
     }
+
+    fn from_key_hash(k: hash160::Hash) -> Self {
+        PkOrF {
+            pubkey_hash: Some(k),
+            ..Default::default()
+        }
+    }
 }
 
 /// An item that needs to be satisfied
-#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
+#[derive(Debug, Clone, Serialize)]
 #[serde(tag = "type", rename_all = "UPPERCASE")]
 pub enum SatisfiableItem {
     // Leaves
-    /// ECDSA Signature for a raw public key
-    EcdsaSignature(PkOrF),
-    /// Schnorr Signature for a raw public key
-    SchnorrSignature(PkOrF),
+    /// Signature for a raw public key
+    Signature(PkOrF),
+    /// Signature for an extended key fingerprint
+    SignatureKey(PkOrF),
     /// SHA256 preimage hash
     Sha256Preimage {
         /// The digest value
@@ -249,7 +252,7 @@ where
 }
 
 /// Represent if and how much a policy item is satisfied by the wallet's descriptor
-#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
+#[derive(Debug, Clone, Serialize)]
 #[serde(tag = "type", rename_all = "UPPERCASE")]
 pub enum Satisfaction {
     /// Only a partial satisfaction of some kind of threshold policy
@@ -360,11 +363,10 @@ impl Satisfaction {
         indexes
             .into_iter()
             // .inspect(|x| println!("--- orig --- {:?}", x))
-            // we map each of the combinations of elements into a tuple of ([chosen items], [conditions]). unfortunately, those items have potentially more than one
+            // we map each of the combinations of elements into a tuple of ([choosen items], [conditions]). unfortunately, those items have potentially more than one
             // condition (think about ORs), so we also use `mix` to expand those, i.e. [[0], [1, 2]] becomes [[0, 1], [0, 2]]. This is necessary to make sure that we
-            // consider every possible options and check whether or not they are compatible.
-            // since this step can turn one item of the iterator into multiple ones, we use `flat_map()` to expand them out
-            .flat_map(|i_vec| {
+            // consider every possibile options and check whether or not they are compatible.
+            .map(|i_vec| {
                 mix(i_vec
                     .iter()
                     .map(|i| {
@@ -378,6 +380,9 @@ impl Satisfaction {
                     .map(|x| (i_vec.clone(), x))
                     .collect::<Vec<(Vec<usize>, Vec<Condition>)>>()
             })
+            // .inspect(|x: &Vec<(Vec<usize>, Vec<Condition>)>| println!("fetch {:?}", x))
+            // since the previous step can turn one item of the iterator into multiple ones, we call flatten to expand them out
+            .flatten()
             // .inspect(|x| println!("flat {:?}", x))
             // try to fold all the conditions for this specific combination of indexes/options. if they are not compatible, try_fold will be Err
             .map(|(key, val)| {
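The comment in the hunk above gives the `mix` expansion by example: `[[0], [1, 2]]` becomes `[[0, 1], [0, 2]]`, i.e. the cartesian product of the inner lists. A minimal standalone sketch of such a function (not BDK's actual `mix`, just an illustration of the behavior the comment describes):

```rust
// Cartesian-product expansion: one output vector per way of picking a single
// element from each inner vector, matching the [[0], [1, 2]] -> [[0, 1], [0, 2]]
// example in the comment.
fn mix<T: Clone>(vec_of_vec: Vec<Vec<T>>) -> Vec<Vec<T>> {
    vec_of_vec.into_iter().fold(vec![Vec::new()], |acc, set| {
        acc.into_iter()
            .flat_map(|item| {
                set.iter().map(move |elem| {
                    let mut item = item.clone();
                    item.push(elem.clone());
                    item
                })
            })
            .collect()
    })
}

fn main() {
    assert_eq!(mix(vec![vec![0], vec![1, 2]]), vec![vec![0, 1], vec![0, 2]]);
}
```

This is why the surrounding iterator chain needs a flattening step: each combination of indexes can expand into several (items, conditions) tuples when an item has multiple conditions.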
@@ -423,7 +428,7 @@ impl From<bool> for Satisfaction {
 }
 
 /// Descriptor spending policy
-#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
+#[derive(Debug, Clone, Serialize)]
 pub struct Policy {
     /// Identifier for this policy node
     pub id: String,
@@ -571,7 +576,7 @@ impl Policy {
         Ok(Some(policy))
     }
 
-    fn make_multisig<Ctx: ScriptContext + 'static>(
+    fn make_multisig(
         keys: &[DescriptorPublicKey],
         signers: &SignersContainer,
         build_sat: BuildSatisfaction,
@@ -605,7 +610,7 @@ impl Policy {
         }
 
         if let Some(psbt) = build_sat.psbt() {
-            if Ctx::find_signature(psbt, key, secp) {
+            if signature_in_psbt(psbt, key, secp) {
                 satisfaction.add(
                     &Satisfaction::Complete {
                         condition: Default::default(),
@@ -721,27 +726,18 @@ impl From<SatisfiableItem> for Policy {
 
 fn signer_id(key: &DescriptorPublicKey, secp: &SecpCtx) -> SignerId {
     match key {
-        DescriptorPublicKey::SinglePub(DescriptorSinglePub {
-            key: SinglePubKey::FullKey(pk),
-            ..
-        }) => pk.to_pubkeyhash().into(),
-        DescriptorPublicKey::SinglePub(DescriptorSinglePub {
-            key: SinglePubKey::XOnly(pk),
-            ..
-        }) => pk.to_pubkeyhash().into(),
+        DescriptorPublicKey::SinglePub(pubkey) => pubkey.key.to_pubkeyhash().into(),
         DescriptorPublicKey::XPub(xpub) => xpub.root_fingerprint(secp).into(),
     }
 }
 
-fn make_generic_signature<M: Fn() -> SatisfiableItem, F: Fn(&Psbt) -> bool>(
+fn signature(
     key: &DescriptorPublicKey,
     signers: &SignersContainer,
     build_sat: BuildSatisfaction,
     secp: &SecpCtx,
-    make_policy: M,
-    find_sig: F,
 ) -> Policy {
-    let mut policy: Policy = make_policy().into();
+    let mut policy: Policy = SatisfiableItem::Signature(PkOrF::from_key(key, secp)).into();
 
     policy.contribution = if signers.find(signer_id(key, secp)).is_some() {
         Satisfaction::Complete {
@@ -752,7 +748,7 @@ fn make_generic_signature<M: Fn() -> SatisfiableItem, F: Fn(&Psbt) -> bool>(
     };
 
     if let Some(psbt) = build_sat.psbt() {
-        policy.satisfaction = if find_sig(psbt) {
+        policy.satisfaction = if signature_in_psbt(psbt, key, secp) {
             Satisfaction::Complete {
                 condition: Default::default(),
             }
@@ -764,119 +760,45 @@ fn make_generic_signature<M: Fn() -> SatisfiableItem, F: Fn(&Psbt) -> bool>(
     policy
 }
 
-fn generic_sig_in_psbt<
-    // C is for "check", it's a closure we use to *check* if a psbt input contains the signature
-    // for a specific key
-    C: Fn(&PsbtInput, &SinglePubKey) -> bool,
-    // E is for "extract", it extracts a key from the bip32 derivations found in the psbt input
-    E: Fn(&PsbtInput, Fingerprint) -> Option<SinglePubKey>,
->(
-    psbt: &Psbt,
-    key: &DescriptorPublicKey,
-    secp: &SecpCtx,
-    check: C,
-    extract: E,
-) -> bool {
+fn signature_in_psbt(psbt: &Psbt, key: &DescriptorPublicKey, secp: &SecpCtx) -> bool {
     //TODO check signature validity
     psbt.inputs.iter().all(|input| match key {
-        DescriptorPublicKey::SinglePub(DescriptorSinglePub { key, .. }) => check(input, key),
+        DescriptorPublicKey::SinglePub(key) => input.partial_sigs.contains_key(&key.key),
         DescriptorPublicKey::XPub(xpub) => {
+            let pubkey = input
+                .bip32_derivation
+                .iter()
+                .find(|(_, (f, _))| *f == xpub.root_fingerprint(secp))
+                .map(|(p, _)| p);
             //TODO check actual derivation matches
-            match extract(input, xpub.root_fingerprint(secp)) {
-                Some(pubkey) => check(input, &pubkey),
+            match pubkey {
+                Some(pubkey) => input.partial_sigs.contains_key(pubkey),
                 None => false,
             }
         }
     })
 }
 
-trait SigExt: ScriptContext {
-    fn make_signature(
-        key: &DescriptorPublicKey,
-        signers: &SignersContainer,
-        build_sat: BuildSatisfaction,
-        secp: &SecpCtx,
-    ) -> Policy;
-
-    fn find_signature(psbt: &Psbt, key: &DescriptorPublicKey, secp: &SecpCtx) -> bool;
-}
-
-impl<T: ScriptContext + 'static> SigExt for T {
-    fn make_signature(
-        key: &DescriptorPublicKey,
-        signers: &SignersContainer,
-        build_sat: BuildSatisfaction,
-        secp: &SecpCtx,
-    ) -> Policy {
-        if T::as_enum().is_taproot() {
-            make_generic_signature(
-                key,
-                signers,
-                build_sat,
-                secp,
-                || SatisfiableItem::SchnorrSignature(PkOrF::from_key(key, secp)),
-                |psbt| Self::find_signature(psbt, key, secp),
-            )
-        } else {
-            make_generic_signature(
-                key,
-                signers,
-                build_sat,
-                secp,
-                || SatisfiableItem::EcdsaSignature(PkOrF::from_key(key, secp)),
-                |psbt| Self::find_signature(psbt, key, secp),
-            )
-        }
-    }
-
-    fn find_signature(psbt: &Psbt, key: &DescriptorPublicKey, secp: &SecpCtx) -> bool {
-        if T::as_enum().is_taproot() {
-            generic_sig_in_psbt(
-                psbt,
-                key,
-                secp,
-                |input, pk| {
-                    let pk = match pk {
-                        SinglePubKey::XOnly(pk) => pk,
-                        _ => return false,
-                    };
-
-                    if input.tap_internal_key == Some(*pk) && input.tap_key_sig.is_some() {
-                        true
-                    } else {
-                        input.tap_script_sigs.keys().any(|(sk, _)| sk == pk)
-                    }
-                },
-                |input, fing| {
-                    input
-                        .tap_key_origins
-                        .iter()
-                        .find(|(_, (_, (f, _)))| f == &fing)
-                        .map(|(pk, _)| SinglePubKey::XOnly(*pk))
-                },
-            )
-        } else {
-            generic_sig_in_psbt(
-                psbt,
-                key,
-                secp,
-                |input, pk| match pk {
-                    SinglePubKey::FullKey(pk) => input.partial_sigs.contains_key(pk),
-                    _ => false,
-                },
-                |input, fing| {
-                    input
-                        .bip32_derivation
-                        .iter()
-                        .find(|(_, (f, _))| f == &fing)
-                        .map(|(pk, _)| SinglePubKey::FullKey(PublicKey::new(*pk)))
-                },
-            )
-        }
-    }
-}
+fn signature_key(
+    key: &<DescriptorPublicKey as MiniscriptKey>::Hash,
+    signers: &SignersContainer,
+    secp: &SecpCtx,
+) -> Policy {
+    let key_hash = DerivedDescriptorKey::new(key.clone(), secp)
+        .to_public_key()
+        .to_pubkeyhash();
+    let mut policy: Policy = SatisfiableItem::Signature(PkOrF::from_key_hash(key_hash)).into();
+
+    if signers.find(SignerId::PkHash(key_hash)).is_some() {
+        policy.contribution = Satisfaction::Complete {
+            condition: Default::default(),
+        }
+    }
+
+    policy
+}
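The `signature_in_psbt` check above requires a matching partial signature in *every* PSBT input before the policy item counts as satisfied. A toy model of that all-inputs rule, with string keys standing in for public keys (the `Input` struct and helper are illustrative, not BDK types):

```rust
use std::collections::HashSet;

// Toy PSBT input: just the set of keys that have provided a partial signature.
struct Input {
    partial_sigs: HashSet<&'static str>,
}

// A key only satisfies the policy if every input carries its signature.
fn signature_in_all_inputs(inputs: &[Input], key: &str) -> bool {
    inputs.iter().all(|input| input.partial_sigs.contains(key))
}

fn main() {
    let inputs = vec![
        Input { partial_sigs: ["alice"].into_iter().collect() },
        Input { partial_sigs: ["alice", "bob"].into_iter().collect() },
    ];
    // "alice" signed both inputs, "bob" only the second one
    assert!(signature_in_all_inputs(&inputs, "alice"));
    assert!(!signature_in_all_inputs(&inputs, "bob"));
}
```

The `iter().all(...)` short-circuits on the first input missing the signature, mirroring the shape of the real check.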
||||
impl<Ctx: ScriptContext + 'static> ExtractPolicy for Miniscript<DescriptorPublicKey, Ctx> {
|
||||
impl<Ctx: ScriptContext> ExtractPolicy for Miniscript<DescriptorPublicKey, Ctx> {
|
||||
fn extract_policy(
|
||||
&self,
|
||||
signers: &SignersContainer,
|
||||
@@ -886,10 +808,8 @@ impl<Ctx: ScriptContext + 'static> ExtractPolicy for Miniscript<DescriptorPublic
|
||||
Ok(match &self.node {
|
||||
// Leaves
|
||||
Terminal::True | Terminal::False => None,
|
||||
Terminal::PkK(pubkey) => Some(Ctx::make_signature(pubkey, signers, build_sat, secp)),
|
||||
Terminal::PkH(pubkey_hash) => {
|
||||
Some(Ctx::make_signature(pubkey_hash, signers, build_sat, secp))
|
||||
}
|
||||
Terminal::PkK(pubkey) => Some(signature(pubkey, signers, build_sat, secp)),
|
||||
Terminal::PkH(pubkey_hash) => Some(signature_key(pubkey_hash, signers, secp)),
|
||||
Terminal::After(value) => {
|
||||
let mut policy: Policy = SatisfiableItem::AbsoluteTimelock { value: *value }.into();
|
||||
policy.contribution = Satisfaction::Complete {
|
||||
@@ -950,8 +870,8 @@ impl<Ctx: ScriptContext + 'static> ExtractPolicy for Miniscript<DescriptorPublic
|
||||
Terminal::Hash160(hash) => {
|
||||
Some(SatisfiableItem::Hash160Preimage { hash: *hash }.into())
|
||||
}
|
||||
Terminal::Multi(k, pks) | Terminal::MultiA(k, pks) => {
|
||||
Policy::make_multisig::<Ctx>(pks, signers, build_sat, *k, false, secp)?
|
||||
Terminal::Multi(k, pks) => {
|
||||
Policy::make_multisig(pks, signers, build_sat, *k, false, secp)?
|
||||
}
|
||||
// Identities
|
||||
Terminal::Alt(inner)
|
||||
@@ -1042,13 +962,13 @@ impl ExtractPolicy for Descriptor<DescriptorPublicKey> {
|
||||
build_sat: BuildSatisfaction,
|
||||
secp: &SecpCtx,
|
||||
) -> Result<Option<Policy>, Error> {
|
||||
fn make_sortedmulti<Ctx: ScriptContext + 'static>(
|
||||
fn make_sortedmulti<Ctx: ScriptContext>(
|
||||
keys: &SortedMultiVec<DescriptorPublicKey, Ctx>,
|
||||
signers: &SignersContainer,
|
||||
build_sat: BuildSatisfaction,
|
||||
secp: &SecpCtx,
|
||||
) -> Result<Option<Policy>, Error> {
|
||||
Ok(Policy::make_multisig::<Ctx>(
|
||||
Ok(Policy::make_multisig(
|
||||
keys.pks.as_ref(),
|
||||
signers,
|
||||
build_sat,
|
||||
@@ -1059,25 +979,10 @@ impl ExtractPolicy for Descriptor<DescriptorPublicKey> {
|
||||
}
|
||||
|
||||
match self {
|
||||
Descriptor::Pkh(pk) => Ok(Some(miniscript::Legacy::make_signature(
|
||||
pk.as_inner(),
|
||||
signers,
|
||||
build_sat,
|
||||
secp,
|
||||
))),
|
||||
Descriptor::Wpkh(pk) => Ok(Some(miniscript::Segwitv0::make_signature(
|
||||
pk.as_inner(),
|
||||
signers,
|
||||
build_sat,
|
||||
secp,
|
||||
))),
|
||||
Descriptor::Pkh(pk) => Ok(Some(signature(pk.as_inner(), signers, build_sat, secp))),
|
||||
Descriptor::Wpkh(pk) => Ok(Some(signature(pk.as_inner(), signers, build_sat, secp))),
|
||||
Descriptor::Sh(sh) => match sh.as_inner() {
|
||||
ShInner::Wpkh(pk) => Ok(Some(miniscript::Segwitv0::make_signature(
|
||||
pk.as_inner(),
|
||||
signers,
|
||||
build_sat,
|
||||
secp,
|
||||
))),
|
||||
ShInner::Wpkh(pk) => Ok(Some(signature(pk.as_inner(), signers, build_sat, secp))),
|
||||
ShInner::Ms(ms) => Ok(ms.extract_policy(signers, build_sat, secp)?),
|
||||
ShInner::SortedMulti(ref keys) => make_sortedmulti(keys, signers, build_sat, secp),
|
||||
ShInner::Wsh(wsh) => match wsh.as_inner() {
|
||||
@@ -1092,28 +997,6 @@ impl ExtractPolicy for Descriptor<DescriptorPublicKey> {
|
||||
WshInner::SortedMulti(ref keys) => make_sortedmulti(keys, signers, build_sat, secp),
|
||||
},
|
||||
Descriptor::Bare(ms) => Ok(ms.as_inner().extract_policy(signers, build_sat, secp)?),
|
||||
Descriptor::Tr(tr) => {
|
||||
// If there's no tap tree, treat this as a single sig, otherwise build a `Thresh`
|
||||
// node with threshold = 1 and the key spend signature plus all the tree leaves
|
||||
let key_spend_sig =
|
||||
miniscript::Tap::make_signature(tr.internal_key(), signers, build_sat, secp);
|
||||
|
||||
if tr.taptree().is_none() {
|
||||
Ok(Some(key_spend_sig))
|
||||
} else {
|
||||
let mut items = vec![key_spend_sig];
|
||||
items.append(
|
||||
&mut tr
|
||||
.iter_scripts()
|
||||
.filter_map(|(_, ms)| {
|
||||
ms.extract_policy(signers, build_sat, secp).transpose()
|
||||
})
|
||||
.collect::<Result<Vec<_>, _>>()?,
|
||||
);
|
||||
|
||||
Ok(Policy::make_thresh(items, 1)?)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1125,7 +1008,7 @@ mod test {
|
||||
|
||||
use super::*;
|
||||
use crate::descriptor::derived::AsDerived;
|
||||
use crate::descriptor::policy::SatisfiableItem::{EcdsaSignature, Multisig, Thresh};
|
||||
use crate::descriptor::policy::SatisfiableItem::{Multisig, Signature, Thresh};
|
||||
use crate::keys::{DescriptorKey, IntoDescriptorKey};
|
||||
use crate::wallet::signer::SignersContainer;
|
||||
use bitcoin::secp256k1::Secp256k1;
|
||||
@@ -1147,7 +1030,7 @@ mod test {
|
||||
) -> (DescriptorKey<Ctx>, DescriptorKey<Ctx>, Fingerprint) {
|
||||
let path = bip32::DerivationPath::from_str(path).unwrap();
|
||||
let tprv = bip32::ExtendedPrivKey::from_str(tprv).unwrap();
|
||||
let tpub = bip32::ExtendedPubKey::from_priv(secp, &tprv);
|
||||
let tpub = bip32::ExtendedPubKey::from_private(secp, &tprv);
|
||||
let fingerprint = tprv.fingerprint(secp);
|
||||
let prvkey = (tprv, path.clone()).into_descriptor_key().unwrap();
|
||||
let pubkey = (tpub, path).into_descriptor_key().unwrap();
|
||||
@@ -1166,26 +1049,30 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();

-        assert!(matches!(&policy.item, EcdsaSignature(PkOrF::Fingerprint(f)) if f == &fingerprint));
+        assert!(
+            matches!(&policy.item, Signature(pk_or_f) if pk_or_f.fingerprint.unwrap() == fingerprint)
+        );
        assert!(matches!(&policy.contribution, Satisfaction::None));

        let desc = descriptor!(wpkh(prvkey)).unwrap();
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();

-        assert!(matches!(&policy.item, EcdsaSignature(PkOrF::Fingerprint(f)) if f == &fingerprint));
+        assert!(
+            matches!(&policy.item, Signature(pk_or_f) if pk_or_f.fingerprint.unwrap() == fingerprint)
+        );
        assert!(
            matches!(&policy.contribution, Satisfaction::Complete {condition} if condition.csv == None && condition.timelock == None)
        );
@@ -1209,8 +1096,8 @@ mod test {

        assert!(
            matches!(&policy.item, Multisig { keys, threshold } if threshold == &2usize
-                && keys[0] == PkOrF::Fingerprint(fingerprint0)
-                && keys[1] == PkOrF::Fingerprint(fingerprint1))
+                && keys[0].fingerprint.unwrap() == fingerprint0
+                && keys[1].fingerprint.unwrap() == fingerprint1)
        );
        // TODO should this be "Satisfaction::None" since we have no prv keys?
        // TODO should items and conditions not be empty?
@@ -1233,15 +1120,15 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();
        assert!(
            matches!(&policy.item, Multisig { keys, threshold } if threshold == &2usize
-                && keys[0] == PkOrF::Fingerprint(fingerprint0)
-                && keys[1] == PkOrF::Fingerprint(fingerprint1))
+                && keys[0].fingerprint.unwrap() == fingerprint0
+                && keys[1].fingerprint.unwrap() == fingerprint1)
        );

        assert!(
@@ -1265,7 +1152,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1273,8 +1160,8 @@ mod test {

        assert!(
            matches!(&policy.item, Multisig { keys, threshold } if threshold == &1
-                && keys[0] == PkOrF::Fingerprint(fingerprint0)
-                && keys[1] == PkOrF::Fingerprint(fingerprint1))
+                && keys[0].fingerprint.unwrap() == fingerprint0
+                && keys[1].fingerprint.unwrap() == fingerprint1)
        );
        assert!(
            matches!(&policy.contribution, Satisfaction::PartialComplete { n, m, items, conditions, .. } if n == &2
@@ -1297,7 +1184,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1305,8 +1192,8 @@ mod test {

        assert!(
            matches!(&policy.item, Multisig { keys, threshold } if threshold == &2
-                && keys[0] == PkOrF::Fingerprint(fingerprint0)
-                && keys[1] == PkOrF::Fingerprint(fingerprint1))
+                && keys[0].fingerprint.unwrap() == fingerprint0
+                && keys[1].fingerprint.unwrap() == fingerprint1)
        );

        assert!(
@@ -1330,13 +1217,15 @@ mod test {
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
        let single_key = wallet_desc.derive(0);
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = single_key
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();

-        assert!(matches!(&policy.item, EcdsaSignature(PkOrF::Fingerprint(f)) if f == &fingerprint));
+        assert!(
+            matches!(&policy.item, Signature(pk_or_f) if pk_or_f.fingerprint.unwrap() == fingerprint)
+        );
        assert!(matches!(&policy.contribution, Satisfaction::None));

        let desc = descriptor!(wpkh(prvkey)).unwrap();
@@ -1344,13 +1233,15 @@ mod test {
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
        let single_key = wallet_desc.derive(0);
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = single_key
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();

-        assert!(matches!(&policy.item, EcdsaSignature(PkOrF::Fingerprint(f)) if f == &fingerprint));
+        assert!(
+            matches!(&policy.item, Signature(pk_or_f) if pk_or_f.fingerprint.unwrap() == fingerprint)
+        );
        assert!(
            matches!(&policy.contribution, Satisfaction::Complete {condition} if condition.csv == None && condition.timelock == None)
        );
@@ -1369,7 +1260,7 @@ mod test {
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
        let single_key = wallet_desc.derive(0);
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = single_key
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1377,8 +1268,8 @@ mod test {

        assert!(
            matches!(&policy.item, Multisig { keys, threshold } if threshold == &1
-                && keys[0] == PkOrF::Fingerprint(fingerprint0)
-                && keys[1] == PkOrF::Fingerprint(fingerprint1))
+                && keys[0].fingerprint.unwrap() == fingerprint0
+                && keys[1].fingerprint.unwrap() == fingerprint1)
        );
        assert!(
            matches!(&policy.contribution, Satisfaction::PartialComplete { n, m, items, conditions, .. } if n == &2
@@ -1412,7 +1303,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1451,7 +1342,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1476,7 +1367,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1494,7 +1385,7 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));
        let policy = wallet_desc
            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
            .unwrap()
@@ -1516,10 +1407,10 @@ mod test {
        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers = keymap.into();

        let policy = wallet_desc
-            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
+            .extract_policy(&signers, BuildSatisfaction::None, &secp)
            .unwrap()
            .unwrap();

@@ -1553,7 +1444,6 @@ mod test {

    const ALICE_TPRV_STR:&str = "tprv8ZgxMBicQKsPf6T5X327efHnvJDr45Xnb8W4JifNWtEoqXu9MRYS4v1oYe6DFcMVETxy5w3bqpubYRqvcVTqovG1LifFcVUuJcbwJwrhYzP";
    const BOB_TPRV_STR:&str = "tprv8ZgxMBicQKsPeinZ155cJAn117KYhbaN6MV3WeG6sWhxWzcvX1eg1awd4C9GpUN1ncLEM2rzEvunAg3GizdZD4QPPCkisTz99tXXB4wZArp";
-    const CAROL_TPRV_STR:&str = "tprv8ZgxMBicQKsPdC3CicFifuLCEyVVdXVUNYorxUWj3iGZ6nimnLAYAY9SYB7ib8rKzRxrCKFcEytCt6szwd2GHnGPRCBLAEAoSVDefSNk4Bt";
    const ALICE_BOB_PATH: &str = "m/0'";

    #[test]
@@ -1582,7 +1472,7 @@ mod test {
            addr.to_string()
        );

-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));

        let psbt = Psbt::from_str(ALICE_SIGNED_PSBT).unwrap();

@@ -1638,19 +1528,19 @@ mod test {
        let (prvkey_bob, _, _) = setup_keys(BOB_TPRV_STR, ALICE_BOB_PATH, &secp);

        let desc =
-            descriptor!(wsh(thresh(2,n:d:v:older(2),s:pk(prvkey_alice),s:pk(prvkey_bob)))).unwrap();
+            descriptor!(wsh(thresh(2,d:v:older(2),s:pk(prvkey_alice),s:pk(prvkey_bob)))).unwrap();

        let (wallet_desc, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
+        let signers_container = Arc::new(SignersContainer::from(keymap));

        let addr = wallet_desc
            .as_derived(0, &secp)
            .address(Network::Testnet)
            .unwrap();
        assert_eq!(
-            "tb1qsydsey4hexagwkvercqsmes6yet0ndkyt6uzcphtqnygjd8hmzmsfxrv58",
+            "tb1qhpemaacpeu8ajlnh8k9v55ftg0px58r8630fz8t5mypxcwdk5d8sum522g",
            addr.to_string()
        );

@@ -1712,204 +1602,4 @@ mod test {
        );
        //println!("{}", serde_json::to_string(&policy_expired_signed).unwrap());
    }
-
-    #[test]
-    fn test_extract_pkh() {
-        let secp = Secp256k1::new();
-
-        let (prvkey_alice, _, _) = setup_keys(ALICE_TPRV_STR, ALICE_BOB_PATH, &secp);
-        let (prvkey_bob, _, _) = setup_keys(BOB_TPRV_STR, ALICE_BOB_PATH, &secp);
-        let (prvkey_carol, _, _) = setup_keys(CAROL_TPRV_STR, ALICE_BOB_PATH, &secp);
-
-        let desc = descriptor!(wsh(c: andor(
-            pk(prvkey_alice),
-            pk_k(prvkey_bob),
-            pk_h(prvkey_carol),
-        )))
-        .unwrap();
-
-        let (wallet_desc, keymap) = desc
-            .into_wallet_descriptor(&secp, Network::Testnet)
-            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
-
-        let policy = wallet_desc.extract_policy(&signers_container, BuildSatisfaction::None, &secp);
-        assert!(policy.is_ok());
-    }
-
-    #[test]
-    fn test_extract_tr_key_spend() {
-        let secp = Secp256k1::new();
-
-        let (prvkey, _, fingerprint) = setup_keys(ALICE_TPRV_STR, ALICE_BOB_PATH, &secp);
-
-        let desc = descriptor!(tr(prvkey)).unwrap();
-        let (wallet_desc, keymap) = desc
-            .into_wallet_descriptor(&secp, Network::Testnet)
-            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
-
-        let policy = wallet_desc
-            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
-            .unwrap();
-        assert_eq!(
-            policy,
-            Some(Policy {
-                id: "48u0tz0n".to_string(),
-                item: SatisfiableItem::SchnorrSignature(PkOrF::Fingerprint(fingerprint)),
-                satisfaction: Satisfaction::None,
-                contribution: Satisfaction::Complete {
-                    condition: Condition::default()
-                }
-            })
-        );
-    }
-
-    #[test]
-    fn test_extract_tr_script_spend() {
-        let secp = Secp256k1::new();
-
-        let (alice_prv, _, alice_fing) = setup_keys(ALICE_TPRV_STR, ALICE_BOB_PATH, &secp);
-        let (_, bob_pub, bob_fing) = setup_keys(BOB_TPRV_STR, ALICE_BOB_PATH, &secp);
-
-        let desc = descriptor!(tr(bob_pub, pk(alice_prv))).unwrap();
-        let (wallet_desc, keymap) = desc
-            .into_wallet_descriptor(&secp, Network::Testnet)
-            .unwrap();
-        let signers_container = Arc::new(SignersContainer::build(keymap, &wallet_desc, &secp));
-
-        let policy = wallet_desc
-            .extract_policy(&signers_container, BuildSatisfaction::None, &secp)
-            .unwrap()
-            .unwrap();
-
-        assert!(
-            matches!(policy.item, SatisfiableItem::Thresh { ref items, threshold: 1 } if items.len() == 2)
-        );
-        assert!(
-            matches!(policy.contribution, Satisfaction::PartialComplete { n: 2, m: 1, items, .. } if items == vec![1])
-        );
-
-        let alice_sig = SatisfiableItem::SchnorrSignature(PkOrF::Fingerprint(alice_fing));
-        let bob_sig = SatisfiableItem::SchnorrSignature(PkOrF::Fingerprint(bob_fing));
-
-        let thresh_items = match policy.item {
-            SatisfiableItem::Thresh { items, .. } => items,
-            _ => unreachable!(),
-        };
-
-        assert_eq!(thresh_items[0].item, bob_sig);
-        assert_eq!(thresh_items[1].item, alice_sig);
-    }
-
-    #[test]
-    fn test_extract_tr_satisfaction_key_spend() {
-        const UNSIGNED_PSBT: &str = "cHNidP8BAFMBAAAAAUKgMCqtGLSiGYhsTols2UJ/VQQgQi/SXO38uXs2SahdAQAAAAD/////ARyWmAAAAAAAF6kU4R3W8CnGzZcSsaovTYu0X8vHt3WHAAAAAAABASuAlpgAAAAAACJRIEiEBFjbZa1xdjLfFjrKzuC1F1LeRyI/gL6IuGKNmUuSIRYnkGTDxwXMHP32fkDFoGJY28trxbkkVgR2z7jZa2pOJA0AyRF8LgAAAIADAAAAARcgJ5Bkw8cFzBz99n5AxaBiWNvLa8W5JFYEds+42WtqTiQAAA==";
-        const SIGNED_PSBT: &str = "cHNidP8BAFMBAAAAAUKgMCqtGLSiGYhsTols2UJ/VQQgQi/SXO38uXs2SahdAQAAAAD/////ARyWmAAAAAAAF6kU4R3W8CnGzZcSsaovTYu0X8vHt3WHAAAAAAABASuAlpgAAAAAACJRIEiEBFjbZa1xdjLfFjrKzuC1F1LeRyI/gL6IuGKNmUuSARNAIsRvARpRxuyQosVA7guRQT9vXr+S25W2tnP2xOGBsSgq7A4RL8yrbvwDmNlWw9R0Nc/6t+IsyCyy7dD/lbUGgyEWJ5Bkw8cFzBz99n5AxaBiWNvLa8W5JFYEds+42WtqTiQNAMkRfC4AAACAAwAAAAEXICeQZMPHBcwc/fZ+QMWgYljby2vFuSRWBHbPuNlrak4kAAA=";
-
-        let unsigned_psbt = Psbt::from_str(UNSIGNED_PSBT).unwrap();
-        let signed_psbt = Psbt::from_str(SIGNED_PSBT).unwrap();
-
-        let secp = Secp256k1::new();
-
-        let (_, pubkey, _) = setup_keys(ALICE_TPRV_STR, ALICE_BOB_PATH, &secp);
-
-        let desc = descriptor!(tr(pubkey)).unwrap();
-        let (wallet_desc, _) = desc
-            .into_wallet_descriptor(&secp, Network::Testnet)
-            .unwrap();
-
-        let policy_unsigned = wallet_desc
-            .extract_policy(
-                &SignersContainer::default(),
-                BuildSatisfaction::Psbt(&unsigned_psbt),
-                &secp,
-            )
-            .unwrap()
-            .unwrap();
-        let policy_signed = wallet_desc
-            .extract_policy(
-                &SignersContainer::default(),
-                BuildSatisfaction::Psbt(&signed_psbt),
-                &secp,
-            )
-            .unwrap()
-            .unwrap();
-
-        assert_eq!(policy_unsigned.satisfaction, Satisfaction::None);
-        assert_eq!(
-            policy_signed.satisfaction,
-            Satisfaction::Complete {
-                condition: Default::default()
-            }
-        );
-    }
-
-    #[test]
-    fn test_extract_tr_satisfaction_script_spend() {
-        const UNSIGNED_PSBT: &str = "cHNidP8BAFMBAAAAAWZalxaErOL7P3WPIUc8DsjgE68S+ww+uqiqEI2SAwlPAAAAAAD/////AQiWmAAAAAAAF6kU4R3W8CnGzZcSsaovTYu0X8vHt3WHAAAAAAABASuAlpgAAAAAACJRINa6bLPZwp3/CYWoxyI3mLYcSC5f9LInAMUng94nspa2IhXBgiPY+kcolS1Hp0niOK/+7VHz6F+nsz8JVxnzWzkgToYjIHhGyuexxtRVKevRx4YwWR/W0r7LPHt6oS6DLlzyuYQarMAhFnhGyuexxtRVKevRx4YwWR/W0r7LPHt6oS6DLlzyuYQaLQH2onWFc3UR6I9ZhuHVeJCi5LNAf4APVd7mHn4BhdViHRwu7j4AAACAAgAAACEWgiPY+kcolS1Hp0niOK/+7VHz6F+nsz8JVxnzWzkgToYNAMkRfC4AAACAAgAAAAEXIIIj2PpHKJUtR6dJ4jiv/u1R8+hfp7M/CVcZ81s5IE6GARgg9qJ1hXN1EeiPWYbh1XiQouSzQH+AD1Xe5h5+AYXVYh0AAA==";
-        const SIGNED_PSBT: &str = "cHNidP8BAFMBAAAAAWZalxaErOL7P3WPIUc8DsjgE68S+ww+uqiqEI2SAwlPAAAAAAD/////AQiWmAAAAAAAF6kU4R3W8CnGzZcSsaovTYu0X8vHt3WHAAAAAAABASuAlpgAAAAAACJRINa6bLPZwp3/CYWoxyI3mLYcSC5f9LInAMUng94nspa2AQcAAQhCAUALcP9w/+Ddly9DWdhHTnQ9uCDWLPZjR6vKbKePswW2Ee6W5KNfrklus/8z98n7BQ1U4vADHk0FbadeeL8rrbHlARNAC3D/cP/g3ZcvQ1nYR050Pbgg1iz2Y0erymynj7MFthHuluSjX65JbrP/M/fJ+wUNVOLwAx5NBW2nXni/K62x5UEUeEbK57HG1FUp69HHhjBZH9bSvss8e3qhLoMuXPK5hBr2onWFc3UR6I9ZhuHVeJCi5LNAf4APVd7mHn4BhdViHUAXNmWieJ80Fs+PMa2C186YOBPZbYG/ieEUkagMwzJ788SoCucNdp5wnxfpuJVygFhglDrXGzujFtC82PrMohwuIhXBgiPY+kcolS1Hp0niOK/+7VHz6F+nsz8JVxnzWzkgToYjIHhGyuexxtRVKevRx4YwWR/W0r7LPHt6oS6DLlzyuYQarMAhFnhGyuexxtRVKevRx4YwWR/W0r7LPHt6oS6DLlzyuYQaLQH2onWFc3UR6I9ZhuHVeJCi5LNAf4APVd7mHn4BhdViHRwu7j4AAACAAgAAACEWgiPY+kcolS1Hp0niOK/+7VHz6F+nsz8JVxnzWzkgToYNAMkRfC4AAACAAgAAAAEXIIIj2PpHKJUtR6dJ4jiv/u1R8+hfp7M/CVcZ81s5IE6GARgg9qJ1hXN1EeiPWYbh1XiQouSzQH+AD1Xe5h5+AYXVYh0AAA==";
-
-        let unsigned_psbt = Psbt::from_str(UNSIGNED_PSBT).unwrap();
-        let signed_psbt = Psbt::from_str(SIGNED_PSBT).unwrap();
-
-        let secp = Secp256k1::new();
-
-        let (_, alice_pub, _) = setup_keys(ALICE_TPRV_STR, ALICE_BOB_PATH, &secp);
-        let (_, bob_pub, _) = setup_keys(BOB_TPRV_STR, ALICE_BOB_PATH, &secp);
-
-        let desc = descriptor!(tr(bob_pub, pk(alice_pub))).unwrap();
-        let (wallet_desc, _) = desc
-            .into_wallet_descriptor(&secp, Network::Testnet)
-            .unwrap();
-
-        let policy_unsigned = wallet_desc
-            .extract_policy(
-                &SignersContainer::default(),
-                BuildSatisfaction::Psbt(&unsigned_psbt),
-                &secp,
-            )
-            .unwrap()
-            .unwrap();
-        let policy_signed = wallet_desc
-            .extract_policy(
-                &SignersContainer::default(),
-                BuildSatisfaction::Psbt(&signed_psbt),
-                &secp,
-            )
-            .unwrap()
-            .unwrap();
-
-        assert!(
-            matches!(policy_unsigned.item, SatisfiableItem::Thresh { ref items, threshold: 1 } if items.len() == 2)
-        );
-        assert!(
-            matches!(policy_unsigned.satisfaction, Satisfaction::Partial { n: 2, m: 1, items, .. } if items.is_empty())
-        );
-
-        assert!(
-            matches!(policy_signed.item, SatisfiableItem::Thresh { ref items, threshold: 1 } if items.len() == 2)
-        );
-        assert!(
-            matches!(policy_signed.satisfaction, Satisfaction::PartialComplete { n: 2, m: 1, items, .. } if items == vec![0, 1])
-        );
-
-        let satisfied_items = match policy_signed.item {
-            SatisfiableItem::Thresh { items, .. } => items,
-            _ => unreachable!(),
-        };
-
-        assert_eq!(
-            satisfied_items[0].satisfaction,
-            Satisfaction::Complete {
-                condition: Default::default()
-            }
-        );
-        assert_eq!(
-            satisfied_items[1].satisfaction,
-            Satisfaction::Complete {
-                condition: Default::default()
-            }
-        );
-    }
}
@@ -40,19 +40,18 @@ pub type DescriptorTemplateOut = (ExtendedDescriptor, KeyMap, ValidNetworks);
/// use bdk::keys::{IntoDescriptorKey, KeyError};
/// use bdk::miniscript::Legacy;
/// use bdk::template::{DescriptorTemplate, DescriptorTemplateOut};
-/// use bitcoin::Network;
///
/// struct MyP2PKH<K: IntoDescriptorKey<Legacy>>(K);
///
/// impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for MyP2PKH<K> {
-///     fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
+///     fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
///         Ok(bdk::descriptor!(pkh(self.0))?)
///     }
/// }
/// ```
pub trait DescriptorTemplate {
    /// Build the complete descriptor
-    fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError>;
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError>;
}

/// Turns a [`DescriptorTemplate`] into a valid wallet descriptor by calling its
@@ -63,7 +62,7 @@ impl<T: DescriptorTemplate> IntoWalletDescriptor for T {
        secp: &SecpCtx,
        network: Network,
    ) -> Result<(ExtendedDescriptor, KeyMap), DescriptorError> {
-        self.build(network)?.into_wallet_descriptor(secp, network)
+        self.build()?.into_wallet_descriptor(secp, network)
    }
}

@@ -80,7 +79,7 @@ impl<T: DescriptorTemplate> IntoWalletDescriptor for T {
///
/// let key =
///     bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     P2Pkh(key),
///     None,
///     Network::Testnet,
@@ -96,7 +95,7 @@ impl<T: DescriptorTemplate> IntoWalletDescriptor for T {
pub struct P2Pkh<K: IntoDescriptorKey<Legacy>>(pub K);

impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for P2Pkh<K> {
-    fn build(self, _network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
        descriptor!(pkh(self.0))
    }
}
@@ -114,7 +113,7 @@ impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for P2Pkh<K> {
///
/// let key =
///     bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     P2Wpkh_P2Sh(key),
///     None,
///     Network::Testnet,
@@ -131,7 +130,7 @@ impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for P2Pkh<K> {
pub struct P2Wpkh_P2Sh<K: IntoDescriptorKey<Segwitv0>>(pub K);

impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh_P2Sh<K> {
-    fn build(self, _network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
        descriptor!(sh(wpkh(self.0)))
    }
}
@@ -149,7 +148,7 @@ impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh_P2Sh<K> {
///
/// let key =
///     bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     P2Wpkh(key),
///     None,
///     Network::Testnet,
@@ -165,12 +164,12 @@ impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh_P2Sh<K> {
pub struct P2Wpkh<K: IntoDescriptorKey<Segwitv0>>(pub K);

impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh<K> {
-    fn build(self, _network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
        descriptor!(wpkh(self.0))
    }
}

-/// BIP44 template. Expands to `pkh(key/44'/{0,1}'/0'/{0,1}/*)`
+/// BIP44 template. Expands to `pkh(key/44'/0'/0'/{0,1}/*)`
///
/// Since there are hardened derivation steps, this template requires a private derivable key (generally a `xprv`/`tprv`).
///
@@ -187,28 +186,28 @@ impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh<K> {
/// use bdk::template::Bip44;
///
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     Bip44(key.clone(), KeychainKind::External),
///     Some(Bip44(key, KeychainKind::Internal)),
///     Network::Testnet,
///     MemoryDatabase::default()
/// )?;
///
-/// assert_eq!(wallet.get_address(New)?.to_string(), "mmogjc7HJEZkrLqyQYqJmxUqFaC7i4uf89");
-/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "pkh([c55b303f/44'/1'/0']tpubDCuorCpzvYS2LCD75BR46KHE8GdDeg1wsAgNZeNr6DaB5gQK1o14uErKwKLuFmeemkQ6N2m3rNgvctdJLyr7nwu2yia7413Hhg8WWE44cgT/0/*)#5wrnv0xt");
+/// assert_eq!(wallet.get_address(New)?.to_string(), "miNG7dJTzJqNbFS19svRdTCisC65dsubtR");
+/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "pkh([c55b303f/44'/0'/0']tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU/0/*)#xgaaevjx");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip44<K: DerivableKey<Legacy>>(pub K, pub KeychainKind);

impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44<K> {
-    fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
-        P2Pkh(legacy::make_bipxx_private(44, self.0, self.1, network)?).build(network)
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
+        P2Pkh(legacy::make_bipxx_private(44, self.0, self.1)?).build()
    }
}

/// BIP44 public template. Expands to `pkh(key/{0,1}/*)`
///
-/// This assumes that the key used has already been derived with `m/44'/0'/0'` for Mainnet or `m/44'/1'/0'` for Testnet.
+/// This assumes that the key used has already been derived with `m/44'/0'/0'`.
///
/// This template requires the parent fingerprint to populate correctly the metadata of PSBTs.
///
@@ -227,7 +226,7 @@ impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44<K> {
///
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU")?;
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     Bip44Public(key.clone(), fingerprint, KeychainKind::External),
///     Some(Bip44Public(key, fingerprint, KeychainKind::Internal)),
///     Network::Testnet,
@@ -241,12 +240,12 @@ impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44<K> {
pub struct Bip44Public<K: DerivableKey<Legacy>>(pub K, pub bip32::Fingerprint, pub KeychainKind);

impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44Public<K> {
-    fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
-        P2Pkh(legacy::make_bipxx_public(44, self.0, self.1, self.2)?).build(network)
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
+        P2Pkh(legacy::make_bipxx_public(44, self.0, self.1, self.2)?).build()
    }
}

-/// BIP49 template. Expands to `sh(wpkh(key/49'/{0,1}'/0'/{0,1}/*))`
+/// BIP49 template. Expands to `sh(wpkh(key/49'/0'/0'/{0,1}/*))`
///
/// Since there are hardened derivation steps, this template requires a private derivable key (generally a `xprv`/`tprv`).
///
@@ -263,22 +262,22 @@ impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44Public<K> {
/// use bdk::template::Bip49;
///
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     Bip49(key.clone(), KeychainKind::External),
///     Some(Bip49(key, KeychainKind::Internal)),
///     Network::Testnet,
///     MemoryDatabase::default()
/// )?;
///
-/// assert_eq!(wallet.get_address(New)?.to_string(), "2N4zkWAoGdUv4NXhSsU8DvS5MB36T8nKHEB");
-/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49'/1'/0']tpubDDYr4kdnZgjjShzYNjZUZXUUtpXaofdkMaipyS8ThEh45qFmhT4hKYways7UXmg6V7het1QiFo9kf4kYUXyDvV4rHEyvSpys9pjCB3pukxi/0/*))#s9vxlc8e");
+/// assert_eq!(wallet.get_address(New)?.to_string(), "2N3K4xbVAHoiTQSwxkZjWDfKoNC27pLkYnt");
+/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49\'/0\'/0\']tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L/0/*))#gsmdv4xr");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip49<K: DerivableKey<Segwitv0>>(pub K, pub KeychainKind);

impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49<K> {
-    fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
-        P2Wpkh_P2Sh(segwit_v0::make_bipxx_private(49, self.0, self.1, network)?).build(network)
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
+        P2Wpkh_P2Sh(segwit_v0::make_bipxx_private(49, self.0, self.1)?).build()
    }
}

@@ -303,7 +302,7 @@ impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49<K> {
///
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L")?;
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
-/// let wallet = Wallet::new(
+/// let wallet = Wallet::new_offline(
///     Bip49Public(key.clone(), fingerprint, KeychainKind::External),
///     Some(Bip49Public(key, fingerprint, KeychainKind::Internal)),
///     Network::Testnet,
@@ -311,18 +310,18 @@ impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49<K> {
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "2N3K4xbVAHoiTQSwxkZjWDfKoNC27pLkYnt");
-/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49'/0'/0']tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L/0/*))#gsmdv4xr");
+/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49\'/0\'/0\']tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L/0/*))#gsmdv4xr");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip49Public<K: DerivableKey<Segwitv0>>(pub K, pub bip32::Fingerprint, pub KeychainKind);

impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49Public<K> {
-    fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
-        P2Wpkh_P2Sh(segwit_v0::make_bipxx_public(49, self.0, self.1, self.2)?).build(network)
+    fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
+        P2Wpkh_P2Sh(segwit_v0::make_bipxx_public(49, self.0, self.1, self.2)?).build()
    }
}

/// BIP84 template. Expands to `wpkh(key/84'/{0,1}'/0'/{0,1}/*)`
|
||||
/// BIP84 template. Expands to `wpkh(key/84'/0'/0'/{0,1}/*)`
|
||||
///
|
||||
/// Since there are hardened derivation steps, this template requires a private derivable key (generally a `xprv`/`tprv`).
|
||||
///
|
||||
@@ -339,22 +338,22 @@ impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49Public<K> {
|
||||
/// use bdk::template::Bip84;
|
||||
///
|
||||
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
|
||||
/// let wallet = Wallet::new(
|
||||
/// let wallet = Wallet::new_offline(
|
||||
/// Bip84(key.clone(), KeychainKind::External),
|
||||
/// Some(Bip84(key, KeychainKind::Internal)),
|
||||
/// Network::Testnet,
|
||||
/// MemoryDatabase::default()
|
||||
/// )?;
|
||||
///
|
||||
/// assert_eq!(wallet.get_address(New)?.to_string(), "tb1qhl85z42h7r4su5u37rvvw0gk8j2t3n9y7zsg4n");
|
||||
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "wpkh([c55b303f/84'/1'/0']tpubDDc5mum24DekpNw92t6fHGp8Gr2JjF9J7i4TZBtN6Vp8xpAULG5CFaKsfugWa5imhrQQUZKXe261asP5koDHo5bs3qNTmf3U3o4v9SaB8gg/0/*)#6kfecsmr");
|
||||
/// assert_eq!(wallet.get_address(New)?.to_string(), "tb1qedg9fdlf8cnnqfd5mks6uz5w4kgpk2pr6y4qc7");
|
||||
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "wpkh([c55b303f/84\'/0\'/0\']tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q/0/*)#nkk5dtkg");
|
||||
/// # Ok::<_, Box<dyn std::error::Error>>(())
|
||||
/// ```
|
||||
pub struct Bip84<K: DerivableKey<Segwitv0>>(pub K, pub KeychainKind);
|
||||
|
||||
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84<K> {
|
||||
fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
|
||||
P2Wpkh(segwit_v0::make_bipxx_private(84, self.0, self.1, network)?).build(network)
|
||||
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
|
||||
P2Wpkh(segwit_v0::make_bipxx_private(84, self.0, self.1)?).build()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -379,7 +378,7 @@ impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84<K> {
|
||||
///
|
||||
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q")?;
|
||||
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
|
||||
/// let wallet = Wallet::new(
|
||||
/// let wallet = Wallet::new_offline(
|
||||
/// Bip84Public(key.clone(), fingerprint, KeychainKind::External),
|
||||
/// Some(Bip84Public(key, fingerprint, KeychainKind::Internal)),
|
||||
/// Network::Testnet,
|
||||
@@ -393,8 +392,8 @@ impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84<K> {
|
||||
pub struct Bip84Public<K: DerivableKey<Segwitv0>>(pub K, pub bip32::Fingerprint, pub KeychainKind);
|
||||
|
||||
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84Public<K> {
|
||||
fn build(self, network: Network) -> Result<DescriptorTemplateOut, DescriptorError> {
|
||||
P2Wpkh(segwit_v0::make_bipxx_public(84, self.0, self.1, self.2)?).build(network)
|
||||
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
|
||||
P2Wpkh(segwit_v0::make_bipxx_public(84, self.0, self.1, self.2)?).build()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -407,19 +406,10 @@ macro_rules! expand_make_bipxx {
|
||||
bip: u32,
|
||||
key: K,
|
||||
keychain: KeychainKind,
|
||||
network: Network,
|
||||
) -> Result<impl IntoDescriptorKey<$ctx>, DescriptorError> {
|
||||
let mut derivation_path = Vec::with_capacity(4);
|
||||
derivation_path.push(bip32::ChildNumber::from_hardened_idx(bip)?);
|
||||
|
||||
match network {
|
||||
Network::Bitcoin => {
|
||||
derivation_path.push(bip32::ChildNumber::from_hardened_idx(0)?);
|
||||
}
|
||||
_ => {
|
||||
derivation_path.push(bip32::ChildNumber::from_hardened_idx(1)?);
|
||||
}
|
||||
}
|
||||
derivation_path.push(bip32::ChildNumber::from_hardened_idx(0)?);
|
||||
derivation_path.push(bip32::ChildNumber::from_hardened_idx(0)?);
|
||||
|
||||
match keychain {
|
||||
@@ -476,40 +466,6 @@ mod test {
|
||||
use miniscript::descriptor::{DescriptorPublicKey, DescriptorTrait, KeyMap};
|
||||
use miniscript::Descriptor;
|
||||
|
||||
// BIP44 `pkh(key/44'/{0,1}'/0'/{0,1}/*)`
|
||||
#[test]
|
||||
fn test_bip44_template_cointype() {
|
||||
use bitcoin::util::bip32::ChildNumber::{self, Hardened};
|
||||
|
||||
let xprvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("xprv9s21ZrQH143K2fpbqApQL69a4oKdGVnVN52R82Ft7d1pSqgKmajF62acJo3aMszZb6qQ22QsVECSFxvf9uyxFUvFYQMq3QbtwtRSMjLAhMf").unwrap();
|
||||
assert_eq!(Network::Bitcoin, xprvkey.network);
|
||||
let xdesc = Bip44(xprvkey, KeychainKind::Internal)
|
||||
.build(Network::Bitcoin)
|
||||
.unwrap();
|
||||
|
||||
if let ExtendedDescriptor::Pkh(pkh) = xdesc.0 {
|
||||
let path: Vec<ChildNumber> = pkh.into_inner().full_derivation_path().into();
|
||||
let purpose = path.get(0).unwrap();
|
||||
assert!(matches!(purpose, Hardened { index: 44 }));
|
||||
let coin_type = path.get(1).unwrap();
|
||||
assert!(matches!(coin_type, Hardened { index: 0 }));
|
||||
}
|
||||
|
||||
let tprvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
|
||||
assert_eq!(Network::Testnet, tprvkey.network);
|
||||
let tdesc = Bip44(tprvkey, KeychainKind::Internal)
|
||||
.build(Network::Testnet)
|
||||
.unwrap();
|
||||
|
||||
if let ExtendedDescriptor::Pkh(pkh) = tdesc.0 {
|
||||
let path: Vec<ChildNumber> = pkh.into_inner().full_derivation_path().into();
|
||||
let purpose = path.get(0).unwrap();
|
||||
assert!(matches!(purpose, Hardened { index: 44 }));
|
||||
let coin_type = path.get(1).unwrap();
|
||||
assert!(matches!(coin_type, Hardened { index: 1 }));
|
||||
}
|
||||
}
|
||||
|
||||
// verify template descriptor generates expected address(es)
|
||||
fn check(
|
||||
desc: Result<(Descriptor<DescriptorPublicKey>, KeyMap, ValidNetworks), DescriptorError>,
|
||||
@@ -541,7 +497,7 @@ mod test {
|
||||
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
|
||||
.unwrap();
|
||||
check(
|
||||
P2Pkh(prvkey).build(Network::Bitcoin),
|
||||
P2Pkh(prvkey).build(),
|
||||
false,
|
||||
true,
|
||||
&["mwJ8hxFYW19JLuc65RCTaP4v1rzVU8cVMT"],
|
||||
@@ -552,7 +508,7 @@ mod test {
|
||||
)
|
||||
.unwrap();
|
||||
check(
|
||||
P2Pkh(pubkey).build(Network::Bitcoin),
|
||||
P2Pkh(pubkey).build(),
|
||||
false,
|
||||
true,
|
||||
&["muZpTpBYhxmRFuCjLc7C6BBDF32C8XVJUi"],
|
||||
@@ -566,7 +522,7 @@ mod test {
|
||||
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
|
||||
.unwrap();
|
||||
check(
|
||||
P2Wpkh_P2Sh(prvkey).build(Network::Bitcoin),
|
||||
P2Wpkh_P2Sh(prvkey).build(),
|
||||
true,
|
||||
true,
|
||||
&["2NB4ox5VDRw1ecUv6SnT3VQHPXveYztRqk5"],
|
||||
@@ -577,7 +533,7 @@ mod test {
|
||||
)
|
||||
.unwrap();
|
||||
check(
|
||||
P2Wpkh_P2Sh(pubkey).build(Network::Bitcoin),
|
||||
P2Wpkh_P2Sh(pubkey).build(),
|
||||
true,
|
||||
true,
|
||||
&["2N5LiC3CqzxDamRTPG1kiNv1FpNJQ7x28sb"],
|
||||
@@ -591,7 +547,7 @@ mod test {
|
||||
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
|
||||
.unwrap();
|
||||
check(
|
||||
P2Wpkh(prvkey).build(Network::Bitcoin),
|
||||
P2Wpkh(prvkey).build(),
|
||||
true,
|
||||
true,
|
||||
&["bcrt1q4525hmgw265tl3drrl8jjta7ayffu6jfcwxx9y"],
|
||||
@@ -602,7 +558,7 @@ mod test {
|
||||
)
|
||||
.unwrap();
|
||||
check(
|
||||
P2Wpkh(pubkey).build(Network::Bitcoin),
|
||||
P2Wpkh(pubkey).build(),
|
||||
true,
|
||||
true,
|
||||
&["bcrt1qngw83fg8dz0k749cg7k3emc7v98wy0c7azaa6h"],
|
||||
@@ -614,7 +570,7 @@ mod test {
|
||||
fn test_bip44_template() {
|
||||
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
|
||||
check(
|
||||
Bip44(prvkey, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip44(prvkey, KeychainKind::External).build(),
|
||||
false,
|
||||
false,
|
||||
&[
|
||||
@@ -624,7 +580,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip44(prvkey, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip44(prvkey, KeychainKind::Internal).build(),
|
||||
false,
|
||||
false,
|
||||
&[
|
||||
@@ -641,7 +597,7 @@ mod test {
|
||||
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU").unwrap();
|
||||
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
|
||||
check(
|
||||
Bip44Public(pubkey, fingerprint, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip44Public(pubkey, fingerprint, KeychainKind::External).build(),
|
||||
false,
|
||||
false,
|
||||
&[
|
||||
@@ -651,7 +607,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip44Public(pubkey, fingerprint, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip44Public(pubkey, fingerprint, KeychainKind::Internal).build(),
|
||||
false,
|
||||
false,
|
||||
&[
|
||||
@@ -667,7 +623,7 @@ mod test {
|
||||
fn test_bip49_template() {
|
||||
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
|
||||
check(
|
||||
Bip49(prvkey, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip49(prvkey, KeychainKind::External).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -677,7 +633,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip49(prvkey, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip49(prvkey, KeychainKind::Internal).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -694,7 +650,7 @@ mod test {
|
||||
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L").unwrap();
|
||||
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
|
||||
check(
|
||||
Bip49Public(pubkey, fingerprint, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip49Public(pubkey, fingerprint, KeychainKind::External).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -704,7 +660,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip49Public(pubkey, fingerprint, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip49Public(pubkey, fingerprint, KeychainKind::Internal).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -720,7 +676,7 @@ mod test {
|
||||
fn test_bip84_template() {
|
||||
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
|
||||
check(
|
||||
Bip84(prvkey, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip84(prvkey, KeychainKind::External).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -730,7 +686,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip84(prvkey, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip84(prvkey, KeychainKind::Internal).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -747,7 +703,7 @@ mod test {
|
||||
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q").unwrap();
|
||||
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
|
||||
check(
|
||||
Bip84Public(pubkey, fingerprint, KeychainKind::External).build(Network::Bitcoin),
|
||||
Bip84Public(pubkey, fingerprint, KeychainKind::External).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
@@ -757,7 +713,7 @@ mod test {
|
||||
],
|
||||
);
|
||||
check(
|
||||
Bip84Public(pubkey, fingerprint, KeychainKind::Internal).build(Network::Bitcoin),
|
||||
Bip84Public(pubkey, fingerprint, KeychainKind::Internal).build(),
|
||||
true,
|
||||
false,
|
||||
&[
|
||||
|
||||
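The hunk above removes the network-dependent coin_type step from `make_bipxx`. As a minimal sketch (plain std Rust with hypothetical names, not the actual bitcoin/bdk types), the removed logic picks coin_type `0'` on mainnet and `1'` on every other network, then appends a fixed `0'` account and the keychain index:

```rust
// Hypothetical, simplified model of the BIP44/49/84 path construction
// shown in the removed `make_bipxx` code: m / purpose' / coin_type' / account' / keychain.
#[derive(PartialEq)]
enum Network {
    Bitcoin,
    Testnet,
}

fn derivation_path(purpose: u32, network: &Network, keychain: u32) -> String {
    // Mainnet uses coin_type 0', everything else uses 1' (the removed match).
    let coin_type = if *network == Network::Bitcoin { 0 } else { 1 };
    // Account is hardcoded to 0', keychain is the external/internal index.
    format!("m/{}'/{}'/0'/{}", purpose, coin_type, keychain)
}

fn main() {
    assert_eq!(derivation_path(44, &Network::Bitcoin, 0), "m/44'/0'/0'/0");
    assert_eq!(derivation_path(84, &Network::Testnet, 1), "m/84'/1'/0'/1");
}
```

The `test_bip44_template_cointype` test removed in the same hunk checks exactly this mainnet/testnet coin_type split on the real templates.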
18
src/error.rs
@@ -115,7 +115,7 @@ pub enum Error {
Hex(bitcoin::hashes::hex::Error),
/// Partially signed bitcoin transaction error
Psbt(bitcoin::util::psbt::Error),
/// Partially signed bitcoin transaction parse error
/// Partially signed bitcoin transaction parseerror
PsbtParse(bitcoin::util::psbt::PsbtParseError),

//KeyMismatch(bitcoin::secp256k1::PublicKey, bitcoin::secp256k1::PublicKey),
@@ -130,7 +130,7 @@ pub enum Error {
Electrum(electrum_client::Error),
#[cfg(feature = "esplora")]
/// Esplora client error
Esplora(Box<crate::blockchain::esplora::EsploraError>),
Esplora(crate::blockchain::esplora::EsploraError),
#[cfg(feature = "compact_filters")]
/// Compact filters client error)
CompactFilters(crate::blockchain::compact_filters::CompactFiltersError),
@@ -140,9 +140,6 @@ pub enum Error {
#[cfg(feature = "rpc")]
/// Rpc client error
Rpc(bitcoincore_rpc::Error),
#[cfg(feature = "sqlite")]
/// Rusqlite client error
Rusqlite(rusqlite::Error),
}

impl fmt::Display for Error {
@@ -193,12 +190,12 @@ impl_error!(bitcoin::util::psbt::PsbtParseError, PsbtParse);

#[cfg(feature = "electrum")]
impl_error!(electrum_client::Error, Electrum);
#[cfg(feature = "esplora")]
impl_error!(crate::blockchain::esplora::EsploraError, Esplora);
#[cfg(feature = "key-value-db")]
impl_error!(sled::Error, Sled);
#[cfg(feature = "rpc")]
impl_error!(bitcoincore_rpc::Error, Rpc);
#[cfg(feature = "sqlite")]
impl_error!(rusqlite::Error, Rusqlite);

#[cfg(feature = "compact_filters")]
impl From<crate::blockchain::compact_filters::CompactFiltersError> for Error {
@@ -219,10 +216,3 @@ impl From<crate::wallet::verify::VerifyError> for Error {
}
}
}

#[cfg(feature = "esplora")]
impl From<crate::blockchain::esplora::EsploraError> for Error {
fn from(other: crate::blockchain::esplora::EsploraError) -> Self {
Error::Esplora(Box::new(other))
}
}
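The `src/error.rs` hunks above replace the boxed `Esplora(Box<EsploraError>)` variant (and its manual `From` impl calling `Box::new`) with an unboxed `Esplora(EsploraError)`. The usual reason to box a large variant is that a Rust enum is at least as big as its biggest variant; a small illustrative sketch (std only, hypothetical types, not the bdk error types):

```rust
// Illustrates why a large error variant is sometimes boxed:
// the enum's size is driven by its largest variant, while a Box
// is just a (non-null) pointer and even enables the niche optimization.
use std::mem::size_of;

#[allow(dead_code)]
struct BigError([u8; 256]); // stand-in for a large error type

#[allow(dead_code)]
enum Slim {
    Io,
    Big(Box<BigError>), // pointer-sized payload
}

#[allow(dead_code)]
enum Fat {
    Io,
    Big(BigError), // inlines all 256 bytes into every Fat value
}

fn main() {
    assert!(size_of::<Slim>() < size_of::<Fat>());
    assert_eq!(size_of::<Box<BigError>>(), size_of::<usize>());
}
```

The diff trades that size saving for a simpler variant, which also lets the generic `impl_error!` macro generate the `From` conversion instead of a hand-written impl.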
@@ -19,23 +19,7 @@ use bitcoin::Network;

use miniscript::ScriptContext;

pub use bip39::{Error, Language, Mnemonic};

type Seed = [u8; 64];

/// Type describing entropy length (aka word count) in the mnemonic
pub enum WordCount {
/// 12 words mnemonic (128 bits entropy)
Words12 = 128,
/// 15 words mnemonic (160 bits entropy)
Words15 = 160,
/// 18 words mnemonic (192 bits entropy)
Words18 = 192,
/// 21 words mnemonic (224 bits entropy)
Words21 = 224,
/// 24 words mnemonic (256 bits entropy)
Words24 = 256,
}
pub use bip39::{Language, Mnemonic, MnemonicType, Seed};

use super::{
any_network, DerivableKey, DescriptorKey, ExtendedKey, GeneratableKey, GeneratedKey, KeyError,
@@ -56,7 +40,7 @@ pub type MnemonicWithPassphrase = (Mnemonic, Option<String>);
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for Seed {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
Ok(bip32::ExtendedPrivKey::new_master(Network::Bitcoin, &self[..])?.into())
Ok(bip32::ExtendedPrivKey::new_master(Network::Bitcoin, &self.as_bytes())?.into())
}

fn into_descriptor_key(
@@ -76,7 +60,7 @@ impl<Ctx: ScriptContext> DerivableKey<Ctx> for Seed {
impl<Ctx: ScriptContext> DerivableKey<Ctx> for MnemonicWithPassphrase {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
let (mnemonic, passphrase) = self;
let seed: Seed = mnemonic.to_seed(passphrase.as_deref().unwrap_or(""));
let seed = Seed::new(&mnemonic, passphrase.as_deref().unwrap_or(""));

seed.into_extended_key()
}
@@ -94,23 +78,6 @@ impl<Ctx: ScriptContext> DerivableKey<Ctx> for MnemonicWithPassphrase {
}
}

#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for (GeneratedKey<Mnemonic, Ctx>, Option<String>) {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
let (mnemonic, passphrase) = self;
(mnemonic.into_key(), passphrase).into_extended_key()
}

fn into_descriptor_key(
self,
source: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
let (mnemonic, passphrase) = self;
(mnemonic.into_key(), passphrase).into_descriptor_key(source, derivation_path)
}
}

#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for Mnemonic {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
@@ -134,15 +101,15 @@ impl<Ctx: ScriptContext> DerivableKey<Ctx> for Mnemonic {
impl<Ctx: ScriptContext> GeneratableKey<Ctx> for Mnemonic {
type Entropy = [u8; 32];

type Options = (WordCount, Language);
type Error = Option<bip39::Error>;
type Options = (MnemonicType, Language);
type Error = Option<bip39::ErrorKind>;

fn generate_with_entropy(
(word_count, language): Self::Options,
(mnemonic_type, language): Self::Options,
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
let entropy = &entropy.as_ref()[..(word_count as usize / 8)];
let mnemonic = Mnemonic::from_entropy_in(language, entropy)?;
let entropy = &entropy.as_ref()[..(mnemonic_type.entropy_bits() / 8)];
let mnemonic = Mnemonic::from_entropy(entropy, language).map_err(|e| e.downcast().ok())?;

Ok(GeneratedKey::new(mnemonic, any_network()))
}
@@ -154,17 +121,15 @@ mod test {

use bitcoin::util::bip32;

use bip39::{Language, Mnemonic};
use bip39::{Language, Mnemonic, MnemonicType};

use crate::keys::{any_network, GeneratableKey, GeneratedKey};

use super::WordCount;

#[test]
fn test_keys_bip39_mnemonic() {
let mnemonic =
"aim bunker wash balance finish force paper analyst cabin spoon stable organ";
let mnemonic = Mnemonic::parse_in(Language::English, mnemonic).unwrap();
let mnemonic = Mnemonic::from_phrase(mnemonic, Language::English).unwrap();
let path = bip32::DerivationPath::from_str("m/44'/0'/0'/0").unwrap();

let key = (mnemonic, path);
@@ -178,7 +143,7 @@ mod test {
fn test_keys_bip39_mnemonic_passphrase() {
let mnemonic =
"aim bunker wash balance finish force paper analyst cabin spoon stable organ";
let mnemonic = Mnemonic::parse_in(Language::English, mnemonic).unwrap();
let mnemonic = Mnemonic::from_phrase(mnemonic, Language::English).unwrap();
let path = bip32::DerivationPath::from_str("m/44'/0'/0'/0").unwrap();

let key = ((mnemonic, Some("passphrase".into())), path);
@@ -192,7 +157,7 @@ mod test {
fn test_keys_generate_bip39() {
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate_with_entropy(
(WordCount::Words12, Language::English),
(MnemonicType::Words12, Language::English),
crate::keys::test::TEST_ENTROPY,
)
.unwrap();
@@ -204,7 +169,7 @@ mod test {

let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate_with_entropy(
(WordCount::Words24, Language::English),
(MnemonicType::Words24, Language::English),
crate::keys::test::TEST_ENTROPY,
)
.unwrap();
@@ -215,11 +180,11 @@ mod test {
#[test]
fn test_keys_generate_bip39_random() {
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate((WordCount::Words12, Language::English)).unwrap();
Mnemonic::generate((MnemonicType::Words12, Language::English)).unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());

let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate((WordCount::Words24, Language::English)).unwrap();
Mnemonic::generate((MnemonicType::Words24, Language::English)).unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());
}
}
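The `WordCount` enum removed above encodes the entropy bit count directly in each variant's discriminant (12 words = 128 bits, 24 words = 256 bits), which is why the removed `generate_with_entropy` can slice the entropy buffer with `word_count as usize / 8`. A self-contained sketch of that trick, copied from the variants shown in the diff:

```rust
// The discriminant of each variant is its entropy size in bits,
// so the required entropy byte count is simply `wc as usize / 8`.
// (Same variants and values as the WordCount enum removed in this diff.)
#[allow(dead_code)]
enum WordCount {
    Words12 = 128,
    Words15 = 160,
    Words18 = 192,
    Words21 = 224,
    Words24 = 256,
}

fn entropy_bytes(wc: WordCount) -> usize {
    wc as usize / 8
}

fn main() {
    assert_eq!(entropy_bytes(WordCount::Words12), 16); // 128 bits
    assert_eq!(entropy_bytes(WordCount::Words24), 32); // 256 bits
}
```

The older `bip39` API restored by this branch exposes the same information through `MnemonicType::entropy_bits()` instead of discriminant values.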
101
src/keys/mod.rs
@@ -20,12 +20,12 @@ use std::str::FromStr;
use bitcoin::secp256k1::{self, Secp256k1, Signing};

use bitcoin::util::bip32;
use bitcoin::{Network, PrivateKey, PublicKey, XOnlyPublicKey};
use bitcoin::{Network, PrivateKey, PublicKey};

use miniscript::descriptor::{Descriptor, DescriptorXKey, Wildcard};
pub use miniscript::descriptor::{
DescriptorPublicKey, DescriptorSecretKey, DescriptorSinglePriv, DescriptorSinglePub, KeyMap,
SinglePubKey, SortedMultiVec,
SortedMultiVec,
};
pub use miniscript::ScriptContext;
use miniscript::{Miniscript, Terminal};
@@ -127,8 +127,6 @@ pub enum ScriptContextEnum {
Legacy,
/// Segwitv0 scripts
Segwitv0,
/// Taproot scripts
Tap,
}

impl ScriptContextEnum {
@@ -141,11 +139,6 @@ impl ScriptContextEnum {
pub fn is_segwit_v0(&self) -> bool {
self == &ScriptContextEnum::Segwitv0
}

/// Returns whether the script context is [`ScriptContextEnum::Tap`]
pub fn is_taproot(&self) -> bool {
self == &ScriptContextEnum::Tap
}
}

/// Trait that adds extra useful methods to [`ScriptContext`]s
@@ -162,11 +155,6 @@ pub trait ExtScriptContext: ScriptContext {
fn is_segwit_v0() -> bool {
Self::as_enum().is_segwit_v0()
}

/// Returns whether the script context is [`Tap`](miniscript::Tap), aka Taproot or Segwit V1
fn is_taproot() -> bool {
Self::as_enum().is_taproot()
}
}

impl<Ctx: ScriptContext + 'static> ExtScriptContext for Ctx {
@@ -174,7 +162,6 @@ impl<Ctx: ScriptContext + 'static> ExtScriptContext for Ctx {
match TypeId::of::<Ctx>() {
t if t == TypeId::of::<miniscript::Legacy>() => ScriptContextEnum::Legacy,
t if t == TypeId::of::<miniscript::Segwitv0>() => ScriptContextEnum::Segwitv0,
t if t == TypeId::of::<miniscript::Tap>() => ScriptContextEnum::Tap,
_ => unimplemented!("Unknown ScriptContext type"),
}
}
@@ -225,7 +212,7 @@ impl<Ctx: ScriptContext + 'static> ExtScriptContext for Ctx {
///
/// use bdk::keys::{
/// mainnet_network, DescriptorKey, DescriptorPublicKey, DescriptorSinglePub,
/// IntoDescriptorKey, KeyError, ScriptContext, SinglePubKey,
/// IntoDescriptorKey, KeyError, ScriptContext,
/// };
///
/// pub struct MyKeyType {
@@ -237,7 +224,7 @@ impl<Ctx: ScriptContext + 'static> ExtScriptContext for Ctx {
/// Ok(DescriptorKey::from_public(
/// DescriptorPublicKey::SinglePub(DescriptorSinglePub {
/// origin: None,
/// key: SinglePubKey::FullKey(self.pubkey),
/// key: self.pubkey,
/// }),
/// mainnet_network(),
/// ))
@@ -346,7 +333,7 @@ impl<Ctx: ScriptContext> ExtendedKey<Ctx> {
secp: &Secp256k1<C>,
) -> bip32::ExtendedPubKey {
let mut xpub = match self {
ExtendedKey::Private((xprv, _)) => bip32::ExtendedPubKey::from_priv(secp, &xprv),
ExtendedKey::Private((xprv, _)) => bip32::ExtendedPubKey::from_private(secp, &xprv),
ExtendedKey::Public((xpub, _)) => xpub,
};

@@ -369,7 +356,7 @@ impl<Ctx: ScriptContext> From<bip32::ExtendedPrivKey> for ExtendedKey<Ctx> {

/// Trait for keys that can be derived.
///
/// When extra metadata are provided, a [`DerivableKey`] can be transformed into a
/// When extra metadata are provided, a [`DerivableKey`] can be transofrmed into a
/// [`DescriptorKey`]: the trait [`IntoDescriptorKey`] is automatically implemented
/// for `(DerivableKey, DerivationPath)` and
/// `(DerivableKey, KeySource, DerivationPath)` tuples.
@@ -400,7 +387,7 @@ impl<Ctx: ScriptContext> From<bip32::ExtendedPrivKey> for ExtendedKey<Ctx> {
/// network: self.network,
/// depth: 0,
/// parent_fingerprint: bip32::Fingerprint::default(),
/// private_key: self.key_data.inner,
/// private_key: self.key_data,
/// chain_code: bip32::ChainCode::from(self.chain_code.as_ref()),
/// child_number: bip32::ChildNumber::Normal { index: 0 },
/// };
@@ -432,7 +419,7 @@ impl<Ctx: ScriptContext> From<bip32::ExtendedPrivKey> for ExtendedKey<Ctx> {
/// network: bitcoin::Network::Bitcoin, // pick an arbitrary network here
/// depth: 0,
/// parent_fingerprint: bip32::Fingerprint::default(),
/// private_key: self.key_data.inner,
/// private_key: self.key_data,
/// chain_code: bip32::ChainCode::from(self.chain_code.as_ref()),
/// child_number: bip32::ChildNumber::Normal { index: 0 },
/// };
@@ -473,9 +460,9 @@ use bdk::keys::bip39::{Mnemonic, Language};

# fn main() -> Result<(), Box<dyn std::error::Error>> {
let xkey: ExtendedKey =
Mnemonic::parse_in(
Language::English,
Mnemonic::from_phrase(
"jelly crash boy whisper mouse ecology tuna soccer memory million news short",
Language::English
)?
.into_extended_key()?;
let xprv = xkey.into_xprv(Network::Bitcoin).unwrap();
@@ -560,16 +547,6 @@ impl<K, Ctx: ScriptContext> Deref for GeneratedKey<K, Ctx> {
}
}

impl<K: Clone, Ctx: ScriptContext> Clone for GeneratedKey<K, Ctx> {
fn clone(&self) -> GeneratedKey<K, Ctx> {
GeneratedKey {
key: self.key.clone(),
valid_networks: self.valid_networks.clone(),
phantom: self.phantom,
}
}
}

// Make generated "derivable" keys themselves "derivable". Also make sure they are assigned the
// right `valid_networks`.
impl<Ctx, K> DerivableKey<Ctx> for GeneratedKey<K, Ctx>
@@ -710,11 +687,11 @@ impl<Ctx: ScriptContext> GeneratableKey<Ctx> for PrivateKey {
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
// pick a arbitrary network here, but say that we support all of them
let inner = secp256k1::SecretKey::from_slice(&entropy)?;
let key = secp256k1::SecretKey::from_slice(&entropy)?;
let private_key = PrivateKey {
compressed: options.compressed,
network: Network::Bitcoin,
inner,
key,
};

Ok(GeneratedKey::new(private_key, any_network()))
@@ -771,41 +748,22 @@ pub fn make_pk<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
let minisc = Miniscript::from_ast(Terminal::PkK(key))?;

minisc.check_miniscript()?;

Ok((minisc, key_map, valid_networks))
}

// Used internally by `bdk::fragment!` to build `pk_h()` fragments
#[doc(hidden)]
pub fn make_pkh<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
descriptor_key: Pk,
secp: &SecpCtx,
) -> Result<(Miniscript<DescriptorPublicKey, Ctx>, KeyMap, ValidNetworks), DescriptorError> {
let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
let minisc = Miniscript::from_ast(Terminal::PkH(key))?;

minisc.check_miniscript()?;
minisc.check_minsicript()?;

Ok((minisc, key_map, valid_networks))
}

// Used internally by `bdk::fragment!` to build `multi()` fragments
#[doc(hidden)]
pub fn make_multi<
Pk: IntoDescriptorKey<Ctx>,
Ctx: ScriptContext,
V: Fn(usize, Vec<DescriptorPublicKey>) -> Terminal<DescriptorPublicKey, Ctx>,
>(
pub fn make_multi<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
thresh: usize,
variant: V,
pks: Vec<Pk>,
secp: &SecpCtx,
) -> Result<(Miniscript<DescriptorPublicKey, Ctx>, KeyMap, ValidNetworks), DescriptorError> {
let (pks, key_map, valid_networks) = expand_multi_keys(pks, secp)?;
let minisc = Miniscript::from_ast(variant(thresh, pks))?;
let minisc = Miniscript::from_ast(Terminal::Multi(thresh, pks))?;

minisc.check_miniscript()?;
minisc.check_minsicript()?;

Ok((minisc, key_map, valid_networks))
}
@@ -858,17 +816,7 @@ impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for DescriptorPublicKey {
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for PublicKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
key: SinglePubKey::FullKey(self),
origin: None,
})
.into_descriptor_key()
}
}

impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for XOnlyPublicKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
key: SinglePubKey::XOnly(self),
key: self,
origin: None,
})
.into_descriptor_key()
@@ -969,19 +917,4 @@ pub mod test {
"L2wTu6hQrnDMiFNWA5na6jB12ErGQqtXwqpSL7aWquJaZG8Ai3ch"
);
}

#[cfg(feature = "keys-bip39")]
#[test]
fn test_keys_wif_network_bip39() {
let xkey: ExtendedKey = bip39::Mnemonic::parse_in(
bip39::Language::English,
"jelly crash boy whisper mouse ecology tuna soccer memory million news short",
)
.unwrap()
.into_extended_key()
.unwrap();
let xprv = xkey.into_xprv(Network::Testnet).unwrap();

assert_eq!(xprv.network, Network::Testnet);
}
}
69
src/lib.rs
@@ -14,10 +14,6 @@
// only enables the `doc_cfg` feature when
// the `docsrs` configuration attribute is defined
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
docsrs,
doc(html_logo_url = "https://github.com/bitcoindevkit/bdk/raw/master/static/bdk.png")
)]

//! A modern, lightweight, descriptor-based wallet library written in Rust.
//!
@@ -44,32 +40,31 @@
//! interact with the bitcoin P2P network.
//!
//! ```toml
//! bdk = "0.20.0"
//! bdk = "0.9.0"
//! ```
//!
//! # Examples
#![cfg_attr(
feature = "electrum",
doc = r##"
## Sync the balance of a descriptor

### Example
```no_run
use bdk::{Wallet, SyncOptions};
use bdk::Wallet;
use bdk::database::MemoryDatabase;
use bdk::blockchain::ElectrumBlockchain;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;

fn main() -> Result<(), bdk::Error> {
let client = Client::new("ssl://electrum.blockstream.info:60002")?;
let blockchain = ElectrumBlockchain::from(client);
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;

wallet.sync(&blockchain, SyncOptions::default())?;
wallet.sync(noop_progress(), None)?;

println!("Descriptor balance: {} SAT", wallet.get_balance()?);

@@ -81,13 +76,14 @@ fn main() -> Result<(), bdk::Error> {
//!
//! ## Generate a few addresses
//!
//! ### Example
//! ```
//! use bdk::{Wallet};
//! use bdk::database::MemoryDatabase;
//! use bdk::wallet::AddressIndex::New;
//!
//! fn main() -> Result<(), bdk::Error> {
//! let wallet = Wallet::new(
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
//! bitcoin::Network::Testnet,
@@ -106,10 +102,11 @@ fn main() -> Result<(), bdk::Error> {
doc = r##"
## Create a transaction

### Example
```no_run
use bdk::{FeeRate, Wallet, SyncOptions};
use bdk::{FeeRate, Wallet};
use bdk::database::MemoryDatabase;
use bdk::blockchain::ElectrumBlockchain;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;

use bitcoin::consensus::serialize;
@@ -122,10 +119,10 @@ fn main() -> Result<(), bdk::Error> {
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;
let blockchain = ElectrumBlockchain::from(client);

wallet.sync(&blockchain, SyncOptions::default())?;
wallet.sync(noop_progress(), None)?;

let send_to = wallet.get_address(New)?;
let (psbt, details) = {
@@ -149,6 +146,7 @@ fn main() -> Result<(), bdk::Error> {
//!
//! ## Sign a transaction
//!
//! ### Example
//! ```no_run
//! use std::str::FromStr;
//!
@@ -158,7 +156,7 @@ fn main() -> Result<(), bdk::Error> {
//! use bdk::database::MemoryDatabase;
//!
//! fn main() -> Result<(), bdk::Error> {
//! let wallet = Wallet::new(
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"),
//! bitcoin::Network::Testnet,
@@ -190,7 +188,7 @@ fn main() -> Result<(), bdk::Error> {
//! * `async-interface`: async functions in bdk traits
//! * `keys-bip39`: [BIP-39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) mnemonic codes for generating deterministic keys
//!
//! # Internal features
//! ## Internal features
//!
//! These features do not expose any new API, but influence internal implementation aspects of
//! BDK.
@@ -207,24 +205,11 @@ extern crate serde;
#[macro_use]
extern crate serde_json;

#[cfg(all(feature = "reqwest", feature = "ureq"))]
compile_error!("Features reqwest and ureq are mutually exclusive and cannot be enabled together");

#[cfg(all(feature = "async-interface", feature = "electrum"))]
compile_error!(
"Features async-interface and electrum are mutually exclusive and cannot be enabled together"
);

#[cfg(all(feature = "async-interface", feature = "ureq"))]
compile_error!(
"Features async-interface and ureq are mutually exclusive and cannot be enabled together"
);

#[cfg(all(feature = "async-interface", feature = "compact_filters"))]
compile_error!(
"Features async-interface and compact_filters are mutually exclusive and cannot be enabled together"
);

#[cfg(feature = "keys-bip39")]
extern crate bip39;

@@ -243,20 +228,12 @@ pub extern crate bitcoincore_rpc;
#[cfg(feature = "electrum")]
pub extern crate electrum_client;

#[cfg(feature = "esplora")]
pub extern crate reqwest;

#[cfg(feature = "key-value-db")]
pub extern crate sled;

#[cfg(feature = "sqlite")]
pub extern crate rusqlite;

// We should consider putting this under a feature flag but we need the macro in doctests so we need
// to wait until https://github.com/rust-lang/rust/issues/67295 is fixed.
//
// Stuff in here is too rough to document atm
#[doc(hidden)]
#[macro_use]
pub mod testutils;

#[allow(unused_imports)]
#[macro_use]
pub(crate) mod error;
@@ -278,10 +255,16 @@ pub use wallet::address_validator;
pub use wallet::signer;
pub use wallet::signer::SignOptions;
pub use wallet::tx_builder::TxBuilder;
pub use wallet::SyncOptions;
pub use wallet::Wallet;

/// Get the version of BDK at runtime
pub fn version() -> &'static str {
env!("CARGO_PKG_VERSION", "unknown")
}

// We should consider putting this under a feature flag but we need the macro in doctets so we need
// to wait until https://github.com/rust-lang/rust/issues/67295 is fixed.
//
// Stuff in here is too rough to document atm
#[doc(hidden)]
pub mod testutils;
@@ -19,7 +19,7 @@ pub trait PsbtUtils {
impl PsbtUtils for Psbt {
#[allow(clippy::all)] // We want to allow `manual_map` but it is too new.
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut> {
let tx = &self.unsigned_tx;
let tx = &self.global.unsigned_tx;

if input_index >= tx.input.len() {
return None;
@@ -43,8 +43,8 @@ impl PsbtUtils for Psbt {
mod test {
use crate::bitcoin::TxIn;
use crate::psbt::Psbt;
use crate::wallet::test::{get_funded_wallet, get_test_wpkh};
use crate::wallet::AddressIndex;
use crate::wallet::{get_funded_wallet, test::get_test_wpkh};
use crate::SignOptions;
use std::str::FromStr;

@@ -93,7 +93,7 @@ mod test {
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.unsigned_tx.input.push(TxIn::default());
psbt.global.unsigned_tx.input.push(TxIn::default());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
@@ -112,9 +112,10 @@ mod test {

// add a finalized input
psbt.inputs.push(psbt_bip.inputs[0].clone());
psbt.unsigned_tx
psbt.global
.unsigned_tx
.input
.push(psbt_bip.unsigned_tx.input[0].clone());
.push(psbt_bip.global.unsigned_tx.input[0].clone());

let _ = wallet.sign(&mut psbt, SignOptions::default()).unwrap();
}
File diff suppressed because it is too large
@@ -1,140 +0,0 @@
use bitcoin::Network;

use crate::{
blockchain::ConfigurableBlockchain, database::MemoryDatabase, testutils, wallet::AddressIndex,
Wallet,
};

use super::blockchain_tests::TestClient;

/// Trait for testing [`ConfigurableBlockchain`] implementations.
pub trait ConfigurableBlockchainTester<B: ConfigurableBlockchain>: Sized {
/// Blockchain name for logging.
const BLOCKCHAIN_NAME: &'static str;

/// Generates a blockchain config with a given stop_gap.
///
/// If this returns [`Option::None`], then the associated tests will not run.
fn config_with_stop_gap(
&self,
_test_client: &mut TestClient,
_stop_gap: usize,
) -> Option<B::Config> {
None
}

/// Runs all avaliable tests.
fn run(&self) {
let test_client = &mut TestClient::default();

if self.config_with_stop_gap(test_client, 0).is_some() {
test_wallet_sync_with_stop_gaps(test_client, self);
} else {
println!(
"{}: Skipped tests requiring config_with_stop_gap.",
Self::BLOCKCHAIN_NAME
);
}
}
}

/// Test whether blockchain implementation syncs with expected behaviour given different `stop_gap`
/// parameters.
///
/// For each test vector:
/// * Fill wallet's derived addresses with balances (as specified by test vector).
/// * [0..addrs_before] => 1000sats for each address
/// * [addrs_before..actual_gap] => empty addresses
/// * [actual_gap..addrs_after] => 1000sats for each address
/// * Then, perform wallet sync and obtain wallet balance
/// * Check balance is within expected range (we can compare `stop_gap` and `actual_gap` to
/// determine this).
fn test_wallet_sync_with_stop_gaps<T, B>(test_client: &mut TestClient, tester: &T)
where
T: ConfigurableBlockchainTester<B>,
B: ConfigurableBlockchain,
{
// Generates wallet descriptor
let descriptor_of_account = |account_index: usize| -> String {
format!("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/{account_index}/*)")
};

// Amount (in satoshis) provided to a single address (which expects to have a balance)
const AMOUNT_PER_TX: u64 = 1000;

// [stop_gap, actual_gap, addrs_before, addrs_after]
//
// [0] stop_gap: Passed to [`ElectrumBlockchainConfig`]
// [1] actual_gap: Range size of address indexes without a balance
// [2] addrs_before: Range size of address indexes (before gap) which contains a balance
// [3] addrs_after: Range size of address indexes (after gap) which contains a balance
let test_vectors: Vec<[u64; 4]> = vec![
[0, 0, 0, 5],
[0, 0, 5, 5],
[0, 1, 5, 5],
[0, 2, 5, 5],
[1, 0, 5, 5],
[1, 1, 5, 5],
[1, 2, 5, 5],
[2, 1, 5, 5],
[2, 2, 5, 5],
[2, 3, 5, 5],
];

for (account_index, vector) in test_vectors.into_iter().enumerate() {
let [stop_gap, actual_gap, addrs_before, addrs_after] = vector;
let descriptor = descriptor_of_account(account_index);

let blockchain = B::from_config(
&tester
.config_with_stop_gap(test_client, stop_gap as _)
.unwrap(),
)
.unwrap();

let wallet =
Wallet::new(&descriptor, None, Network::Regtest, MemoryDatabase::new()).unwrap();

// fill server-side with txs to specified address indexes
// return the max balance of the wallet (also the actual balance)
let max_balance = (0..addrs_before)
.chain(addrs_before + actual_gap..addrs_before + actual_gap + addrs_after)
.fold(0_u64, |sum, i| {
let address = wallet.get_address(AddressIndex::Peek(i as _)).unwrap();
test_client.receive(testutils! {
@tx ( (@addr address.address) => AMOUNT_PER_TX )
});
sum + AMOUNT_PER_TX
});

// minimum allowed balance of wallet (based on stop gap)
let min_balance = if actual_gap > stop_gap {
addrs_before * AMOUNT_PER_TX
} else {
max_balance
};

// perform wallet sync
wallet.sync(&blockchain, Default::default()).unwrap();

let wallet_balance = wallet.get_balance().unwrap();

let details = format!(
"test_vector: [stop_gap: {}, actual_gap: {}, addrs_before: {}, addrs_after: {}]",
stop_gap, actual_gap, addrs_before, addrs_after,
);
assert!(
wallet_balance <= max_balance,
"wallet balance is greater than received amount: {}",
details
);
assert!(
wallet_balance >= min_balance,
"wallet balance is smaller than expected: {}",
details
);

// generate block to confirm new transactions
test_client.generate(1, None);
}
}
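The balance bounds asserted by the deleted test above follow directly from the vector semantics: a sync can never find more than the total funded amount, and when the unfunded gap exceeds `stop_gap` the scan is allowed to stop before reaching the post-gap addresses. A standalone sketch of that expectation (plain Rust, independent of bdk; `balance_bounds` is a hypothetical helper, not part of the library):

```rust
// Expected (min, max) wallet balance for a [stop_gap, actual_gap, addrs_before,
// addrs_after] test vector, mirroring the logic of the removed test
// (1000 sats funded per non-gap address).
fn balance_bounds(v: [u64; 4]) -> (u64, u64) {
    let [stop_gap, actual_gap, addrs_before, addrs_after] = v;
    const AMOUNT_PER_TX: u64 = 1000;
    // every funded address contributes AMOUNT_PER_TX
    let max = (addrs_before + addrs_after) * AMOUNT_PER_TX;
    // if the gap exceeds stop_gap, the scan may miss the post-gap addresses
    let min = if actual_gap > stop_gap {
        addrs_before * AMOUNT_PER_TX
    } else {
        max
    };
    (min, max)
}

fn main() {
    // gap (2) within stop_gap (2): the full balance must be found
    assert_eq!(balance_bounds([2, 2, 5, 5]), (10_000, 10_000));
    // gap (3) exceeds stop_gap (2): the post-gap funds may be missed
    assert_eq!(balance_bounds([2, 3, 5, 5]), (5_000, 10_000));
}
```

This is why the test only asserts `min_balance <= wallet_balance <= max_balance` rather than an exact value: backends are free to scan past `stop_gap`, so a larger-than-minimum balance is still conforming.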
@@ -14,37 +14,11 @@
#[cfg(feature = "test-blockchains")]
pub mod blockchain_tests;

#[cfg(test)]
#[cfg(feature = "test-blockchains")]
pub mod configurable_blockchain_tests;
use bitcoin::secp256k1::{Secp256k1, Verification};
use bitcoin::{Address, PublicKey};

use bitcoin::{Address, Txid};

#[derive(Clone, Debug)]
pub struct TestIncomingInput {
pub txid: Txid,
pub vout: u32,
pub sequence: Option<u32>,
}

impl TestIncomingInput {
pub fn new(txid: Txid, vout: u32, sequence: Option<u32>) -> Self {
Self {
txid,
vout,
sequence,
}
}

#[cfg(feature = "test-blockchains")]
pub fn into_raw_tx_input(self) -> bitcoincore_rpc::json::CreateRawTransactionInput {
bitcoincore_rpc::json::CreateRawTransactionInput {
txid: self.txid,
vout: self.vout,
sequence: self.sequence,
}
}
}
use miniscript::descriptor::DescriptorPublicKey;
use miniscript::{Descriptor, MiniscriptKey, TranslatePk};

#[derive(Clone, Debug)]
pub struct TestIncomingOutput {
@@ -63,7 +37,6 @@ impl TestIncomingOutput {

#[derive(Clone, Debug)]
pub struct TestIncomingTx {
pub input: Vec<TestIncomingInput>,
pub output: Vec<TestIncomingOutput>,
pub min_confirmations: Option<u64>,
pub locktime: Option<i64>,
@@ -72,14 +45,12 @@ pub struct TestIncomingTx {

impl TestIncomingTx {
pub fn new(
input: Vec<TestIncomingInput>,
output: Vec<TestIncomingOutput>,
min_confirmations: Option<u64>,
locktime: Option<i64>,
replaceable: Option<bool>,
) -> Self {
Self {
input,
output,
min_confirmations,
locktime,
@@ -87,57 +58,81 @@ impl TestIncomingTx {
}
}

pub fn add_input(&mut self, input: TestIncomingInput) {
self.input.push(input);
}

pub fn add_output(&mut self, output: TestIncomingOutput) {
self.output.push(output);
}
}

#[doc(hidden)]
pub trait TranslateDescriptor {
// derive and translate a `Descriptor<DescriptorPublicKey>` into a `Descriptor<PublicKey>`
fn derive_translated<C: Verification>(
&self,
secp: &Secp256k1<C>,
index: u32,
) -> Descriptor<PublicKey>;
}

impl TranslateDescriptor for Descriptor<DescriptorPublicKey> {
fn derive_translated<C: Verification>(
&self,
secp: &Secp256k1<C>,
index: u32,
) -> Descriptor<PublicKey> {
let translate = |key: &DescriptorPublicKey| -> PublicKey {
match key {
DescriptorPublicKey::XPub(xpub) => {
xpub.xkey
.derive_pub(secp, &xpub.derivation_path)
.expect("hardened derivation steps")
.public_key
}
DescriptorPublicKey::SinglePub(key) => key.key,
}
};

self.derive(index)
.translate_pk_infallible(|pk| translate(pk), |pkh| translate(pkh).to_pubkeyhash())
}
}

#[doc(hidden)]
#[macro_export]
macro_rules! testutils {
( @external $descriptors:expr, $child:expr ) => ({
use $crate::bitcoin::secp256k1::Secp256k1;
use $crate::miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};
use bitcoin::secp256k1::Secp256k1;
use miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};

use $crate::descriptor::AsDerived;
use $crate::testutils::TranslateDescriptor;

let secp = Secp256k1::new();

let parsed = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, &$descriptors.0).expect("Failed to parse descriptor in `testutils!(@external)`").0;
parsed.as_derived($child, &secp).address(bitcoin::Network::Regtest).expect("No address form")
parsed.derive_translated(&secp, $child).address(bitcoin::Network::Regtest).expect("No address form")
});
( @internal $descriptors:expr, $child:expr ) => ({
use $crate::bitcoin::secp256k1::Secp256k1;
use $crate::miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};
use bitcoin::secp256k1::Secp256k1;
use miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};

use $crate::descriptor::AsDerived;
use $crate::testutils::TranslateDescriptor;

let secp = Secp256k1::new();

let parsed = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, &$descriptors.1.expect("Missing internal descriptor")).expect("Failed to parse descriptor in `testutils!(@internal)`").0;
parsed.as_derived($child, &secp).address($crate::bitcoin::Network::Regtest).expect("No address form")
parsed.derive_translated(&secp, $child).address(bitcoin::Network::Regtest).expect("No address form")
});
( @e $descriptors:expr, $child:expr ) => ({ testutils!(@external $descriptors, $child) });
( @i $descriptors:expr, $child:expr ) => ({ testutils!(@internal $descriptors, $child) });
( @addr $addr:expr ) => ({ $addr });

( @tx ( $( ( $( $addr:tt )* ) => $amount:expr ),+ ) $( ( @inputs $( ($txid:expr, $vout:expr) ),+ ) )? $( ( @locktime $locktime:expr ) )? $( ( @confirmations $confirmations:expr ) )? $( ( @replaceable $replaceable:expr ) )? ) => ({
( @tx ( $( ( $( $addr:tt )* ) => $amount:expr ),+ ) $( ( @locktime $locktime:expr ) )? $( ( @confirmations $confirmations:expr ) )? $( ( @replaceable $replaceable:expr ) )? ) => ({
let outs = vec![$( $crate::testutils::TestIncomingOutput::new($amount, testutils!( $($addr)* ))),+];
let _ins: Vec<$crate::testutils::TestIncomingInput> = vec![];
$(
let _ins = vec![$( $crate::testutils::TestIncomingInput { txid: $txid, vout: $vout, sequence: None }),+];
)?

let locktime = None::<i64>$(.or(Some($locktime)))?;

let min_confirmations = None::<u64>$(.or(Some($confirmations)))?;
let replaceable = None::<bool>$(.or(Some($replaceable)))?;

$crate::testutils::TestIncomingTx::new(_ins, outs, min_confirmations, locktime, replaceable)
$crate::testutils::TestIncomingTx::new(outs, min_confirmations, locktime, replaceable)
});

( @literal $key:expr ) => ({
@@ -150,8 +145,8 @@ macro_rules! testutils {
let mut seed = [0u8; 32];
rand::thread_rng().fill(&mut seed[..]);

let key = $crate::bitcoin::util::bip32::ExtendedPrivKey::new_master(
$crate::bitcoin::Network::Testnet,
let key = bitcoin::util::bip32::ExtendedPrivKey::new_master(
bitcoin::Network::Testnet,
&seed,
);

@@ -163,13 +158,13 @@ macro_rules! testutils {
( @generate_wif ) => ({
use rand::Rng;

let mut key = [0u8; $crate::bitcoin::secp256k1::constants::SECRET_KEY_SIZE];
let mut key = [0u8; bitcoin::secp256k1::constants::SECRET_KEY_SIZE];
rand::thread_rng().fill(&mut key[..]);

($crate::bitcoin::PrivateKey {
(bitcoin::PrivateKey {
compressed: true,
network: $crate::bitcoin::Network::Testnet,
key: $crate::bitcoin::secp256k1::SecretKey::from_slice(&key).unwrap(),
network: bitcoin::Network::Testnet,
key: bitcoin::secp256k1::SecretKey::from_slice(&key).unwrap(),
}.to_string(), None::<String>, None::<String>)
});

@@ -186,8 +181,8 @@ macro_rules! testutils {
( @descriptors ( $external_descriptor:expr ) $( ( $internal_descriptor:expr ) )? $( ( @keys $( $keys:tt )* ) )* ) => ({
use std::str::FromStr;
use std::collections::HashMap;
use $crate::miniscript::descriptor::Descriptor;
use $crate::miniscript::TranslatePk;
use miniscript::descriptor::Descriptor;
use miniscript::TranslatePk;

#[allow(unused_assignments, unused_mut)]
let mut keys: HashMap<&'static str, (String, Option<String>, Option<String>)> = HashMap::new();
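The `testutils!` macro above defaults its optional arguments with the pattern `None::<T>$(.or(Some($arg)))?`: when the optional group is absent the expression is just `None`, and when it is present it expands to `None.or(Some(arg))`, i.e. `Some(arg)`. A minimal standalone illustration of the same expansion (`with_default!` is a hypothetical macro, not part of bdk):

```rust
// Illustrates the optional-argument default pattern used by testutils!:
// absent argument => None, present argument => Some(arg).
macro_rules! with_default {
    ( $( $locktime:expr )? ) => {{
        let locktime = None::<i64>$(.or(Some($locktime)))?;
        locktime
    }};
}

fn main() {
    assert_eq!(with_default!(), None);
    assert_eq!(with_default!(500_000), Some(500_000));
}
```

Using `.or(Some(...))` inside the `$( ... )?` repetition keeps the whole default in a single expression, so the macro needs no separate match arms for the with-argument and without-argument cases.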
74 src/types.rs
@@ -10,7 +10,6 @@
// licenses.

use std::convert::AsRef;
use std::ops::Sub;

use bitcoin::blockdata::transaction::{OutPoint, Transaction, TxOut};
use bitcoin::{hash_types::Txid, util::psbt};
@@ -66,31 +65,10 @@ impl FeeRate {
FeeRate(1.0)
}

/// Calculate fee rate from `fee` and weight units (`wu`).
pub fn from_wu(fee: u64, wu: usize) -> FeeRate {
Self::from_vb(fee, wu.vbytes())
}

/// Calculate fee rate from `fee` and `vbytes`.
pub fn from_vb(fee: u64, vbytes: usize) -> FeeRate {
let rate = fee as f32 / vbytes as f32;
Self::from_sat_per_vb(rate)
}

/// Return the value as satoshi/vbyte
pub fn as_sat_vb(&self) -> f32 {
self.0
}

/// Calculate absolute fee in Satoshis using size in weight units.
pub fn fee_wu(&self, wu: usize) -> u64 {
self.fee_vb(wu.vbytes())
}

/// Calculate absolute fee in Satoshis using size in virtual bytes.
pub fn fee_vb(&self, vbytes: usize) -> u64 {
(self.as_sat_vb() * vbytes as f32).ceil() as u64
}
}

impl std::default::Default for FeeRate {
@@ -99,27 +77,6 @@ impl std::default::Default for FeeRate {
}
}

impl Sub for FeeRate {
type Output = Self;

fn sub(self, other: FeeRate) -> Self::Output {
FeeRate(self.0 - other.0)
}
}

/// Trait implemented by types that can be used to measure weight units.
pub trait Vbytes {
/// Convert weight units to virtual bytes.
fn vbytes(self) -> usize;
}

impl Vbytes for usize {
fn vbytes(self) -> usize {
// ref: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki#transaction-size-calculations
(self as f32 / 4.0).ceil() as usize
}
}
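The `FeeRate` and `Vbytes` code being removed above boils down to two formulas: weight units convert to virtual bytes as `ceil(wu / 4)` (per BIP-141), and the absolute fee is `ceil(sat_per_vb * vbytes)`. A standalone sketch of that arithmetic (free functions standing in for the bdk methods, not the library API itself):

```rust
// Weight units -> virtual bytes, rounding up (BIP-141: vbytes = ceil(weight / 4)).
fn vbytes(wu: usize) -> usize {
    (wu as f32 / 4.0).ceil() as usize
}

// Absolute fee in sats from a sat/vbyte rate and a size in vbytes, rounding up.
fn fee_vb(sat_per_vb: f32, vb: usize) -> u64 {
    (sat_per_vb * vb as f32).ceil() as u64
}

// Convenience: fee from a size given in weight units.
fn fee_wu(sat_per_vb: f32, wu: usize) -> u64 {
    fee_vb(sat_per_vb, vbytes(wu))
}

fn main() {
    assert_eq!(vbytes(561), 141); // ceil(140.25) = 141 vb
    assert_eq!(fee_wu(1.0, 561), 141); // 1 sat/vb over 141 vb
    assert_eq!(fee_vb(2.5, 100), 250);
}
```

Both rounding steps go up, so a fee computed this way can only over-pay, never fall below the target rate.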
/// An unspent output owned by a [`Wallet`].
///
/// [`Wallet`]: crate::Wallet
@@ -131,8 +88,6 @@ pub struct LocalUtxo {
pub txout: TxOut,
/// Type of keychain
pub keychain: KeychainKind,
/// Whether this UTXO is spent or not
pub is_spent: bool,
}

/// A [`Utxo`] with its `satisfaction_weight`.
@@ -202,10 +157,8 @@ pub struct TransactionDetails {
pub txid: Txid,

/// Received value (sats)
/// Sum of owned outputs of this transaction.
pub received: u64,
/// Sent value (sats)
/// Sum of owned inputs of this transaction.
pub sent: u64,
/// Fee value (sats) if available.
/// The availability of the fee depends on the backend. It's never `None` with an Electrum
@@ -214,29 +167,32 @@ pub struct TransactionDetails {
pub fee: Option<u64>,
/// If the transaction is confirmed, contains height and timestamp of the block containing the
/// transaction, unconfirmed transaction contains `None`.
pub confirmation_time: Option<BlockTime>,
pub confirmation_time: Option<ConfirmationTime>,
/// Whether the tx has been verified against the consensus rules
///
/// Confirmed txs are considered "verified" by default, while unconfirmed txs are checked to
/// ensure an unstrusted [`Blockchain`](crate::blockchain::Blockchain) backend can't trick the
/// wallet into using an invalid tx as an RBF template.
///
/// The check is only perfomed when the `verify` feature is enabled.
#[serde(default = "bool::default")] // default to `false` if not specified
pub verified: bool,
}

/// Block height and timestamp of a block
/// Block height and timestamp of the block containing the confirmed transaction
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Default)]
pub struct BlockTime {
pub struct ConfirmationTime {
/// confirmation block height
pub height: u32,
/// confirmation block timestamp
pub timestamp: u64,
}

/// **DEPRECATED**: Confirmation time of a transaction
///
/// The structure has been renamed to `BlockTime`
#[deprecated(note = "This structure has been renamed to `BlockTime`")]
pub type ConfirmationTime = BlockTime;

impl BlockTime {
/// Returns `Some` `BlockTime` if both `height` and `timestamp` are `Some`
impl ConfirmationTime {
/// Returns `Some` `ConfirmationTime` if both `height` and `timestamp` are `Some`
pub fn new(height: Option<u32>, timestamp: Option<u64>) -> Option<Self> {
match (height, timestamp) {
(Some(height), Some(timestamp)) => Some(BlockTime { height, timestamp }),
(Some(height), Some(timestamp)) => Some(ConfirmationTime { height, timestamp }),
_ => None,
}
}
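The `new` constructor above is an Option-pairing pattern: it yields a value only when both `height` and `timestamp` are present. A standalone sketch using a plain tuple in place of the bdk struct:

```rust
// Pair two Options into one: Some only when both parts are present,
// mirroring the BlockTime/ConfirmationTime constructor in the diff above.
fn confirmation(height: Option<u32>, timestamp: Option<u64>) -> Option<(u32, u64)> {
    match (height, timestamp) {
        (Some(h), Some(t)) => Some((h, t)),
        _ => None,
    }
}

fn main() {
    assert_eq!(confirmation(Some(100), Some(1_600_000_000)), Some((100, 1_600_000_000)));
    assert_eq!(confirmation(Some(100), None), None);
    assert_eq!(confirmation(None, Some(1_600_000_000)), None);
    // std's Option::zip expresses the same pairing directly
    assert_eq!(Some(100u32).zip(Some(1u64)), Some((100, 1)));
}
```

`Option::zip` would express the same logic in one call, but the explicit `match` keeps the struct-literal construction visible, which is presumably why the original code is written that way.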
@@ -55,7 +55,7 @@
//! }
//!
//! let descriptor = "wpkh(tpubD6NzVbkrYhZ4Xferm7Pz4VnjdcDPFyjVu5K4iZXQ4pVN8Cks4pHVowTBXBKRhX64pkRyJZJN5xAKj4UDNnLPb5p2sSKXhewoYx5GbTdUFWq/*)";
//! let mut wallet = Wallet::new(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! let mut wallet = Wallet::new_offline(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! wallet.add_address_validator(Arc::new(PrintAddressAndContinue));
//!
//! let address = wallet.get_address(New)?;
@@ -100,7 +100,6 @@ impl std::error::Error for AddressValidatorError {}
/// validator will be propagated up to the original caller that triggered the address generation.
///
/// For a usage example see [this module](crate::address_validator)'s documentation.
#[deprecated = "AddressValidator was rarely used. Address validation can occur outside of BDK"]
pub trait AddressValidator: Send + Sync + fmt::Debug {
/// Validate or inspect an address
fn validate(
@@ -116,12 +115,11 @@ mod test {
use std::sync::Arc;

use super::*;
use crate::wallet::test::{get_funded_wallet, get_test_wpkh};
use crate::wallet::AddressIndex::New;
use crate::wallet::{get_funded_wallet, test::get_test_wpkh};

#[derive(Debug)]
struct TestValidator;
#[allow(deprecated)]
impl AddressValidator for TestValidator {
fn validate(
&self,
@@ -137,7 +135,6 @@ mod test {
#[should_panic(expected = "InvalidScript")]
fn test_address_validator_external() {
let (mut wallet, _, _) = get_funded_wallet(get_test_wpkh());
#[allow(deprecated)]
wallet.add_address_validator(Arc::new(TestValidator));

wallet.get_address(New).unwrap();
@@ -147,7 +144,6 @@ mod test {
#[should_panic(expected = "InvalidScript")]
fn test_address_validator_internal() {
let (mut wallet, descriptors, _) = get_funded_wallet(get_test_wpkh());
#[allow(deprecated)]
wallet.add_address_validator(Arc::new(TestValidator));

let addr = crate::testutils!(@external descriptors, 10);
@@ -26,7 +26,7 @@
|
||||
//! ```
|
||||
//! # use std::str::FromStr;
|
||||
//! # use bitcoin::*;
|
||||
//! # use bdk::wallet::{self, coin_selection::*};
|
||||
//! # use bdk::wallet::coin_selection::*;
|
||||
//! # use bdk::database::Database;
|
||||
//! # use bdk::*;
|
||||
//! # const TXIN_BASE_WEIGHT: usize = (32 + 4 + 4 + 1) * 4;
|
||||
@@ -41,7 +41,7 @@
|
||||
//! optional_utxos: Vec<WeightedUtxo>,
|
||||
//! fee_rate: FeeRate,
|
||||
//! amount_needed: u64,
|
||||
//! fee_amount: u64,
|
||||
//! fee_amount: f32,
|
||||
//! ) -> Result<CoinSelectionResult, bdk::Error> {
|
||||
//! let mut selected_amount = 0;
|
||||
//! let mut additional_weight = 0;
|
||||
@@ -57,8 +57,9 @@
|
||||
//! },
|
||||
//! )
|
||||
//! .collect::<Vec<_>>();
|
||||
//! let additional_fees = fee_rate.fee_wu(additional_weight);
|
||||
//! let amount_needed_with_fees = (fee_amount + additional_fees) + amount_needed;
|
||||
//! let additional_fees = additional_weight as f32 * fee_rate.as_sat_vb() / 4.0;
|
||||
//! let amount_needed_with_fees =
|
||||
//! (fee_amount + additional_fees).ceil() as u64 + amount_needed;
//! if amount_needed_with_fees > selected_amount {
//! return Err(bdk::Error::InsufficientFunds {
//! needed: amount_needed_with_fees,
@@ -89,6 +90,7 @@
//! ```

use crate::types::FeeRate;
use crate::wallet::Vbytes;
use crate::{database::Database, WeightedUtxo};
use crate::{error::Error, Utxo};

@@ -97,7 +99,6 @@ use rand::seq::SliceRandom;
use rand::thread_rng;
#[cfg(test)]
use rand::{rngs::StdRng, SeedableRng};
use std::collections::HashMap;
use std::convert::TryInto;

/// Default coin selection algorithm used by [`TxBuilder`](super::tx_builder::TxBuilder) if not
@@ -117,7 +118,7 @@ pub struct CoinSelectionResult {
/// List of outputs selected for use as inputs
pub selected: Vec<Utxo>,
/// Total fee amount in satoshi
pub fee_amount: u64,
pub fee_amount: f32,
}

impl CoinSelectionResult {
@@ -164,7 +165,7 @@ pub trait CoinSelectionAlgorithm<D: Database>: std::fmt::Debug {
optional_utxos: Vec<WeightedUtxo>,
fee_rate: FeeRate,
amount_needed: u64,
fee_amount: u64,
fee_amount: f32,
) -> Result<CoinSelectionResult, Error>;
}

@@ -183,8 +184,10 @@ impl<D: Database> CoinSelectionAlgorithm<D> for LargestFirstCoinSelection {
mut optional_utxos: Vec<WeightedUtxo>,
fee_rate: FeeRate,
amount_needed: u64,
fee_amount: u64,
mut fee_amount: f32,
) -> Result<CoinSelectionResult, Error> {
let calc_fee_bytes = |wu| (wu as f32) * fee_rate.as_sat_vb() / 4.0;

log::debug!(
"amount_needed = `{}`, fee_amount = `{}`, fee_rate = `{:?}`",
amount_needed,
@@ -202,127 +205,62 @@ impl<D: Database> CoinSelectionAlgorithm<D> for LargestFirstCoinSelection {
.chain(optional_utxos.into_iter().rev().map(|utxo| (false, utxo)))
};

select_sorted_utxos(utxos, fee_rate, amount_needed, fee_amount)
}
}
// Keep including inputs until we've got enough.
// Store the total input value in selected_amount and the total fee being paid in fee_amount
let mut selected_amount = 0;
let selected = utxos
.scan(
(&mut selected_amount, &mut fee_amount),
|(selected_amount, fee_amount), (must_use, weighted_utxo)| {
if must_use || **selected_amount < amount_needed + (fee_amount.ceil() as u64) {
**fee_amount +=
calc_fee_bytes(TXIN_BASE_WEIGHT + weighted_utxo.satisfaction_weight);
**selected_amount += weighted_utxo.utxo.txout().value;

/// OldestFirstCoinSelection always picks the utxo with the smallest blockheight to add to the selected coins next
///
/// This coin selection algorithm sorts the available UTXOs by blockheight and then picks them starting
/// from the oldest ones until the required amount is reached.
#[derive(Debug, Default, Clone, Copy)]
pub struct OldestFirstCoinSelection;
log::debug!(
"Selected {}, updated fee_amount = `{}`",
weighted_utxo.utxo.outpoint(),
fee_amount
);

impl<D: Database> CoinSelectionAlgorithm<D> for OldestFirstCoinSelection {
fn coin_select(
&self,
database: &D,
required_utxos: Vec<WeightedUtxo>,
mut optional_utxos: Vec<WeightedUtxo>,
fee_rate: FeeRate,
amount_needed: u64,
fee_amount: u64,
) -> Result<CoinSelectionResult, Error> {
// query db and create a blockheight lookup table
let blockheights = optional_utxos
.iter()
.map(|wu| wu.utxo.outpoint().txid)
// fold is used so we can skip db query for txid that already exist in hashmap acc
.fold(Ok(HashMap::new()), |bh_result_acc, txid| {
bh_result_acc.and_then(|mut bh_acc| {
if bh_acc.contains_key(&txid) {
Ok(bh_acc)
Some(weighted_utxo.utxo)
} else {
database.get_tx(&txid, false).map(|details| {
bh_acc.insert(
txid,
details.and_then(|d| d.confirmation_time.map(|ct| ct.height)),
);
bh_acc
})
None
}
})
})?;
},
)
.collect::<Vec<_>>();

// We put the "required UTXOs" first and make sure the optional UTXOs are sorted from
// oldest to newest according to blocktime
// For utxo that doesn't exist in DB, they will have lowest priority to be selected
let utxos = {
optional_utxos.sort_unstable_by_key(|wu| {
match blockheights.get(&wu.utxo.outpoint().txid) {
Some(Some(blockheight)) => blockheight,
_ => &u32::MAX,
}
let amount_needed_with_fees = amount_needed + (fee_amount.ceil() as u64);
if selected_amount < amount_needed_with_fees {
return Err(Error::InsufficientFunds {
needed: amount_needed_with_fees,
available: selected_amount,
});
}

required_utxos
.into_iter()
.map(|utxo| (true, utxo))
.chain(optional_utxos.into_iter().map(|utxo| (false, utxo)))
};

select_sorted_utxos(utxos, fee_rate, amount_needed, fee_amount)
Ok(CoinSelectionResult {
selected,
fee_amount,
})
}
}

fn select_sorted_utxos(
utxos: impl Iterator<Item = (bool, WeightedUtxo)>,
fee_rate: FeeRate,
amount_needed: u64,
mut fee_amount: u64,
) -> Result<CoinSelectionResult, Error> {
let mut selected_amount = 0;
let selected = utxos
.scan(
(&mut selected_amount, &mut fee_amount),
|(selected_amount, fee_amount), (must_use, weighted_utxo)| {
if must_use || **selected_amount < amount_needed + **fee_amount {
**fee_amount +=
fee_rate.fee_wu(TXIN_BASE_WEIGHT + weighted_utxo.satisfaction_weight);
**selected_amount += weighted_utxo.utxo.txout().value;

log::debug!(
"Selected {}, updated fee_amount = `{}`",
weighted_utxo.utxo.outpoint(),
fee_amount
);

Some(weighted_utxo.utxo)
} else {
None
}
},
)
.collect::<Vec<_>>();

let amount_needed_with_fees = amount_needed + fee_amount;
if selected_amount < amount_needed_with_fees {
return Err(Error::InsufficientFunds {
needed: amount_needed_with_fees,
available: selected_amount,
});
}

Ok(CoinSelectionResult {
selected,
fee_amount,
})
}

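The greedy accumulation in `select_sorted_utxos` can be illustrated in isolation. A minimal sketch, with hypothetical names and plain integers standing in for BDK's `WeightedUtxo` and `FeeRate` types, and a fixed per-input fee instead of a weight-based one:

```rust
/// Keep selecting pre-sorted utxo values until `target` plus the running fee
/// is covered; each selected input adds `fee_per_input` sats to the fee.
/// Returns the selected values and the final fee, or None on insufficient funds.
fn select_until_funded(
    values: &[u64],
    target: u64,
    base_fee: u64,
    fee_per_input: u64,
) -> Option<(Vec<u64>, u64)> {
    let mut selected = Vec::new();
    let mut selected_amount: u64 = 0;
    let mut fee_amount = base_fee;
    for &value in values {
        if selected_amount >= target + fee_amount {
            break; // already funded, stop selecting
        }
        fee_amount += fee_per_input; // each new input adds its own spending fee
        selected_amount += value;
        selected.push(value);
    }
    if selected_amount < target + fee_amount {
        None // the real code returns Error::InsufficientFunds here
    } else {
        Some((selected, fee_amount))
    }
}
```

With a 50 sat base fee and 68 sats per input (roughly a P2WPKH input at 1 sat/vb), selecting from `[200_000, 100_000]` for a 150_000 sat target picks only the first utxo and accumulates a 118 sat fee, the same shape of result the `use_only_necessary` tests in this diff assert.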
#[derive(Debug, Clone)]
// Adds fee information to an UTXO.
struct OutputGroup {
weighted_utxo: WeightedUtxo,
// Amount of fees for spending a certain utxo, calculated using a certain FeeRate
fee: u64,
fee: f32,
// The effective value of the UTXO, i.e., the utxo value minus the fee for spending it
effective_value: i64,
}

impl OutputGroup {
fn new(weighted_utxo: WeightedUtxo, fee_rate: FeeRate) -> Self {
let fee = fee_rate.fee_wu(TXIN_BASE_WEIGHT + weighted_utxo.satisfaction_weight);
let effective_value = weighted_utxo.utxo.txout().value as i64 - fee as i64;
let fee =
(TXIN_BASE_WEIGHT + weighted_utxo.satisfaction_weight).vbytes() * fee_rate.as_sat_vb();
let effective_value = weighted_utxo.utxo.txout().value as i64 - fee.ceil() as i64;
OutputGroup {
weighted_utxo,
fee,
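The effective-value computation above can be checked with a small standalone sketch (illustrative constants, not BDK's actual `FeeRate`/`Vbytes` types): the fee to spend an input is its size in vbytes times the fee rate, and the effective value is the utxo value minus that fee, rounded up.

```rust
/// Effective value of a utxo: its value minus the (ceiled) fee needed to spend it.
/// `input_vbytes` and `sat_per_vb` are illustrative stand-ins for
/// (TXIN_BASE_WEIGHT + satisfaction_weight).vbytes() and fee_rate.as_sat_vb().
fn effective_value(utxo_value: u64, input_vbytes: usize, sat_per_vb: f32) -> i64 {
    let fee = input_vbytes as f32 * sat_per_vb;
    utxo_value as i64 - fee.ceil() as i64
}
```

At 1 sat/vb, a 68-vbyte P2WPKH input spending a 100_000 sat utxo has an effective value of 99_932 sats, the figure the tests in this diff annotate as `// first utxo's effective value`.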
@@ -365,7 +303,7 @@ impl<D: Database> CoinSelectionAlgorithm<D> for BranchAndBoundCoinSelection {
optional_utxos: Vec<WeightedUtxo>,
fee_rate: FeeRate,
amount_needed: u64,
fee_amount: u64,
fee_amount: f32,
) -> Result<CoinSelectionResult, Error> {
// Mapping every (UTXO, usize) to an output group
let required_utxos: Vec<OutputGroup> = required_utxos
@@ -387,7 +325,7 @@ impl<D: Database> CoinSelectionAlgorithm<D> for BranchAndBoundCoinSelection {
.iter()
.fold(0, |acc, x| acc + x.effective_value);

let actual_target = fee_amount + amount_needed;
let actual_target = fee_amount.ceil() as u64 + amount_needed;
let cost_of_change = self.size_of_change as f32 * fee_rate.as_sat_vb();

let expected = (curr_available_value + curr_value)
@@ -407,14 +345,6 @@ impl<D: Database> CoinSelectionAlgorithm<D> for BranchAndBoundCoinSelection {
.try_into()
.expect("Bitcoin amount to fit into i64");

if curr_value > actual_target {
return Ok(BranchAndBoundCoinSelection::calculate_cs_result(
vec![],
required_utxos,
fee_amount,
));
}

Ok(self
.bnb(
required_utxos.clone(),
@@ -439,7 +369,7 @@ impl<D: Database> CoinSelectionAlgorithm<D> for BranchAndBoundCoinSelection {

impl BranchAndBoundCoinSelection {
// TODO: make this more Rust-onic :)
// (And perhaps refactor with less arguments?)
// (And perhpaps refactor with less arguments?)
#[allow(clippy::too_many_arguments)]
fn bnb(
&self,
@@ -448,7 +378,7 @@ impl BranchAndBoundCoinSelection {
mut curr_value: i64,
mut curr_available_value: i64,
actual_target: i64,
fee_amount: u64,
fee_amount: f32,
cost_of_change: f32,
) -> Result<CoinSelectionResult, Error> {
// current_selection[i] will contain true if we are using optional_utxos[i],
@@ -556,7 +486,7 @@ impl BranchAndBoundCoinSelection {
mut optional_utxos: Vec<OutputGroup>,
curr_value: i64,
actual_target: i64,
fee_amount: u64,
fee_amount: f32,
) -> CoinSelectionResult {
#[cfg(not(test))]
optional_utxos.shuffle(&mut thread_rng());
@@ -585,10 +515,10 @@ impl BranchAndBoundCoinSelection {
fn calculate_cs_result(
mut selected_utxos: Vec<OutputGroup>,
mut required_utxos: Vec<OutputGroup>,
mut fee_amount: u64,
mut fee_amount: f32,
) -> CoinSelectionResult {
selected_utxos.append(&mut required_utxos);
fee_amount += selected_utxos.iter().map(|u| u.fee).sum::<u64>();
fee_amount += selected_utxos.iter().map(|u| u.fee).sum::<f32>();
let selected = selected_utxos
.into_iter()
.map(|u| u.weighted_utxo.utxo)
@@ -608,9 +538,8 @@ mod test {
use bitcoin::{OutPoint, Script, TxOut};

use super::*;
use crate::database::{BatchOperations, MemoryDatabase};
use crate::database::MemoryDatabase;
use crate::types::*;
use crate::wallet::Vbytes;

use rand::rngs::StdRng;
use rand::seq::SliceRandom;
@@ -618,92 +547,55 @@ mod test {

const P2WPKH_WITNESS_SIZE: usize = 73 + 33 + 2;

const FEE_AMOUNT: u64 = 50;

fn utxo(value: u64, index: u32) -> WeightedUtxo {
assert!(index < 10);
let outpoint = OutPoint::from_str(&format!(
"000000000000000000000000000000000000000000000000000000000000000{}:0",
index
))
.unwrap();
WeightedUtxo {
satisfaction_weight: P2WPKH_WITNESS_SIZE,
utxo: Utxo::Local(LocalUtxo {
outpoint,
txout: TxOut {
value,
script_pubkey: Script::new(),
},
keychain: KeychainKind::External,
is_spent: false,
}),
}
}
const FEE_AMOUNT: f32 = 50.0;

fn get_test_utxos() -> Vec<WeightedUtxo> {
vec![
utxo(100_000, 0),
utxo(FEE_AMOUNT as u64 - 40, 1),
utxo(200_000, 2),
WeightedUtxo {
satisfaction_weight: P2WPKH_WITNESS_SIZE,
utxo: Utxo::Local(LocalUtxo {
outpoint: OutPoint::from_str(
"0000000000000000000000000000000000000000000000000000000000000000:0",
)
.unwrap(),
txout: TxOut {
value: 100_000,
script_pubkey: Script::new(),
},
keychain: KeychainKind::External,
}),
},
WeightedUtxo {
satisfaction_weight: P2WPKH_WITNESS_SIZE,
utxo: Utxo::Local(LocalUtxo {
outpoint: OutPoint::from_str(
"0000000000000000000000000000000000000000000000000000000000000001:0",
)
.unwrap(),
txout: TxOut {
value: FEE_AMOUNT as u64 - 40,
script_pubkey: Script::new(),
},
keychain: KeychainKind::External,
}),
},
WeightedUtxo {
satisfaction_weight: P2WPKH_WITNESS_SIZE,
utxo: Utxo::Local(LocalUtxo {
outpoint: OutPoint::from_str(
"0000000000000000000000000000000000000000000000000000000000000002:0",
)
.unwrap(),
txout: TxOut {
value: 200_000,
script_pubkey: Script::new(),
},
keychain: KeychainKind::Internal,
}),
},
]
}

fn setup_database_and_get_oldest_first_test_utxos<D: Database>(
database: &mut D,
) -> Vec<WeightedUtxo> {
// ensure utxos are from different tx
let utxo1 = utxo(120_000, 1);
let utxo2 = utxo(80_000, 2);
let utxo3 = utxo(300_000, 3);

// add tx to DB so utxos are sorted by blocktime asc
// utxos will be selected by the following order
// utxo1(blockheight 1) -> utxo2(blockheight 2), utxo3 (blockheight 3)
// timestamp are all set as the same to ensure that only block height is used in sorting
let utxo1_tx_details = TransactionDetails {
transaction: None,
txid: utxo1.utxo.outpoint().txid,
received: 1,
sent: 0,
fee: None,
confirmation_time: Some(BlockTime {
height: 1,
timestamp: 1231006505,
}),
};

let utxo2_tx_details = TransactionDetails {
transaction: None,
txid: utxo2.utxo.outpoint().txid,
received: 1,
sent: 0,
fee: None,
confirmation_time: Some(BlockTime {
height: 2,
timestamp: 1231006505,
}),
};

let utxo3_tx_details = TransactionDetails {
transaction: None,
txid: utxo3.utxo.outpoint().txid,
received: 1,
sent: 0,
fee: None,
confirmation_time: Some(BlockTime {
height: 3,
timestamp: 1231006505,
}),
};

database.set_tx(&utxo1_tx_details).unwrap();
database.set_tx(&utxo2_tx_details).unwrap();
database.set_tx(&utxo3_tx_details).unwrap();

vec![utxo1, utxo2, utxo3]
}

fn generate_random_utxos(rng: &mut StdRng, utxos_number: usize) -> Vec<WeightedUtxo> {
let mut res = Vec::new();
for _ in 0..utxos_number {
@@ -719,7 +611,6 @@ mod test {
script_pubkey: Script::new(),
},
keychain: KeychainKind::External,
is_spent: false,
}),
});
}
@@ -739,7 +630,6 @@ mod test {
script_pubkey: Script::new(),
},
keychain: KeychainKind::External,
is_spent: false,
}),
};
vec![utxo; utxos_number]
@@ -766,13 +656,13 @@ mod test {
vec![],
FeeRate::from_sat_per_vb(1.0),
250_000,
FEE_AMOUNT,
50.0,
)
.unwrap();

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300_010);
assert_eq!(result.fee_amount, 254)
assert!((result.fee_amount - 254.0).abs() < f32::EPSILON);
}

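Once `fee_amount` becomes an `f32`, the tests switch from `assert_eq!` to an epsilon comparison. A short sketch of why exact `==` on accumulated floats is avoided (the values here are illustrative, not taken from BDK):

```rust
/// Compare two f32 values within an absolute tolerance instead of `==`.
fn approx_eq(a: f32, b: f32, tol: f32) -> bool {
    (a - b).abs() < tol
}

/// Summing 0.1 ten times in f32 does not give exactly 1.0 due to rounding,
/// which is why accumulated fee amounts are compared with a tolerance.
fn sum_tenths() -> f32 {
    (0..10).map(|_| 0.1f32).sum()
}
```

The same idea appears throughout the updated tests as `(result.fee_amount - expected).abs() < f32::EPSILON`.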
#[test]
@@ -787,13 +677,13 @@ mod test {
vec![],
FeeRate::from_sat_per_vb(1.0),
20_000,
FEE_AMOUNT,
50.0,
)
.unwrap();

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300_010);
assert_eq!(result.fee_amount, 254);
assert!((result.fee_amount - 254.0).abs() < f32::EPSILON);
}

#[test]
@@ -808,13 +698,13 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1.0),
20_000,
FEE_AMOUNT,
50.0,
)
.unwrap();

assert_eq!(result.selected.len(), 1);
assert_eq!(result.selected_amount(), 200_000);
assert_eq!(result.fee_amount, 118);
assert!((result.fee_amount - 118.0).abs() < f32::EPSILON);
}

#[test]
@@ -830,7 +720,7 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1.0),
500_000,
FEE_AMOUNT,
50.0,
)
.unwrap();
}
@@ -848,165 +738,7 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1000.0),
250_000,
FEE_AMOUNT,
)
.unwrap();
}

#[test]
fn test_oldest_first_coin_selection_success() {
let mut database = MemoryDatabase::default();
let utxos = setup_database_and_get_oldest_first_test_utxos(&mut database);

let result = OldestFirstCoinSelection::default()
.coin_select(
&database,
vec![],
utxos,
FeeRate::from_sat_per_vb(1.0),
180_000,
FEE_AMOUNT,
)
.unwrap();

assert_eq!(result.selected.len(), 2);
assert_eq!(result.selected_amount(), 200_000);
assert_eq!(result.fee_amount, 186)
}

#[test]
fn test_oldest_first_coin_selection_utxo_not_in_db_will_be_selected_last() {
// ensure utxos are from different tx
let utxo1 = utxo(120_000, 1);
let utxo2 = utxo(80_000, 2);
let utxo3 = utxo(300_000, 3);

let mut database = MemoryDatabase::default();

// add tx to DB so utxos are sorted by blocktime asc
// utxos will be selected by the following order
// utxo1(blockheight 1) -> utxo2(blockheight 2), utxo3 (not exist in DB)
// timestamp are all set as the same to ensure that only block height is used in sorting
let utxo1_tx_details = TransactionDetails {
transaction: None,
txid: utxo1.utxo.outpoint().txid,
received: 1,
sent: 0,
fee: None,
confirmation_time: Some(BlockTime {
height: 1,
timestamp: 1231006505,
}),
};

let utxo2_tx_details = TransactionDetails {
transaction: None,
txid: utxo2.utxo.outpoint().txid,
received: 1,
sent: 0,
fee: None,
confirmation_time: Some(BlockTime {
height: 2,
timestamp: 1231006505,
}),
};

database.set_tx(&utxo1_tx_details).unwrap();
database.set_tx(&utxo2_tx_details).unwrap();

let result = OldestFirstCoinSelection::default()
.coin_select(
&database,
vec![],
vec![utxo3, utxo1, utxo2],
FeeRate::from_sat_per_vb(1.0),
180_000,
FEE_AMOUNT,
)
.unwrap();

assert_eq!(result.selected.len(), 2);
assert_eq!(result.selected_amount(), 200_000);
assert_eq!(result.fee_amount, 186)
}

#[test]
fn test_oldest_first_coin_selection_use_all() {
let mut database = MemoryDatabase::default();
let utxos = setup_database_and_get_oldest_first_test_utxos(&mut database);

let result = OldestFirstCoinSelection::default()
.coin_select(
&database,
utxos,
vec![],
FeeRate::from_sat_per_vb(1.0),
20_000,
FEE_AMOUNT,
)
.unwrap();

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 500_000);
assert_eq!(result.fee_amount, 254);
}

#[test]
fn test_oldest_first_coin_selection_use_only_necessary() {
let mut database = MemoryDatabase::default();
let utxos = setup_database_and_get_oldest_first_test_utxos(&mut database);

let result = OldestFirstCoinSelection::default()
.coin_select(
&database,
vec![],
utxos,
FeeRate::from_sat_per_vb(1.0),
20_000,
FEE_AMOUNT,
)
.unwrap();

assert_eq!(result.selected.len(), 1);
assert_eq!(result.selected_amount(), 120_000);
assert_eq!(result.fee_amount, 118);
}

#[test]
#[should_panic(expected = "InsufficientFunds")]
fn test_oldest_first_coin_selection_insufficient_funds() {
let mut database = MemoryDatabase::default();
let utxos = setup_database_and_get_oldest_first_test_utxos(&mut database);

OldestFirstCoinSelection::default()
.coin_select(
&database,
vec![],
utxos,
FeeRate::from_sat_per_vb(1.0),
600_000,
FEE_AMOUNT,
)
.unwrap();
}

#[test]
#[should_panic(expected = "InsufficientFunds")]
fn test_oldest_first_coin_selection_insufficient_funds_high_fees() {
let mut database = MemoryDatabase::default();
let utxos = setup_database_and_get_oldest_first_test_utxos(&mut database);

let amount_needed: u64 =
utxos.iter().map(|wu| wu.utxo.txout().value).sum::<u64>() - (FEE_AMOUNT + 50);

OldestFirstCoinSelection::default()
.coin_select(
&database,
vec![],
utxos,
FeeRate::from_sat_per_vb(1000.0),
amount_needed,
FEE_AMOUNT,
50.0,
)
.unwrap();
}
@@ -1026,13 +758,13 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1.0),
250_000,
FEE_AMOUNT,
50.0,
)
.unwrap();

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300_000);
assert_eq!(result.fee_amount, 254);
assert!((result.fee_amount - 254.0).abs() < f32::EPSILON);
}

#[test]
@@ -1053,7 +785,7 @@ mod test {

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300_010);
assert_eq!(result.fee_amount, 254);
assert!((result.fee_amount - 254.0).abs() < f32::EPSILON);
}

#[test]
@@ -1074,38 +806,7 @@ mod test {

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300010);
assert_eq!(result.fee_amount, 254);
}

#[test]
fn test_bnb_coin_selection_required_not_enough() {
let utxos = get_test_utxos();
let database = MemoryDatabase::default();

let required = vec![utxos[0].clone()];
let mut optional = utxos[1..].to_vec();
optional.push(utxo(500_000, 3));

// Defensive assertions, for sanity and in case someone changes the test utxos vector.
let amount: u64 = required.iter().map(|u| u.utxo.txout().value).sum();
assert_eq!(amount, 100_000);
let amount: u64 = optional.iter().map(|u| u.utxo.txout().value).sum();
assert!(amount > 150_000);

let result = BranchAndBoundCoinSelection::default()
.coin_select(
&database,
required,
optional,
FeeRate::from_sat_per_vb(1.0),
150_000,
FEE_AMOUNT,
)
.unwrap();

assert_eq!(result.selected.len(), 3);
assert_eq!(result.selected_amount(), 300_010);
assert!((result.fee_amount as f32 - 254.0).abs() < f32::EPSILON);
assert!((result.fee_amount - 254.0).abs() < f32::EPSILON);
}

#[test]
@@ -1121,7 +822,7 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1.0),
500_000,
FEE_AMOUNT,
50.0,
)
.unwrap();
}
@@ -1139,7 +840,7 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1000.0),
250_000,
FEE_AMOUNT,
50.0,
)
.unwrap();
}
@@ -1156,7 +857,7 @@ mod test {
utxos,
FeeRate::from_sat_per_vb(1.0),
99932, // first utxo's effective value
0,
0.0,
)
.unwrap();

@@ -1164,7 +865,7 @@ mod test {
assert_eq!(result.selected_amount(), 100_000);
let input_size = (TXIN_BASE_WEIGHT + P2WPKH_WITNESS_SIZE).vbytes();
let epsilon = 0.5;
assert!((1.0 - (result.fee_amount as f32 / input_size as f32)).abs() < epsilon);
assert!((1.0 - (result.fee_amount / input_size)).abs() < epsilon);
}

#[test]
@@ -1183,7 +884,7 @@ mod test {
optional_utxos,
FeeRate::from_sat_per_vb(0.0),
target_amount,
0,
0.0,
)
.unwrap();
assert_eq!(result.selected_amount(), target_amount);
@@ -1210,7 +911,7 @@ mod test {
0,
curr_available_value,
20_000,
FEE_AMOUNT,
50.0,
cost_of_change,
)
.unwrap();
@@ -1237,7 +938,7 @@ mod test {
0,
curr_available_value,
20_000,
FEE_AMOUNT,
50.0,
cost_of_change,
)
.unwrap();
@@ -1249,6 +950,7 @@ mod test {
let fee_rate = FeeRate::from_sat_per_vb(1.0);
let size_of_change = 31;
let cost_of_change = size_of_change as f32 * fee_rate.as_sat_vb();
let fee_amount = 50.0;

let utxos: Vec<_> = generate_same_value_utxos(50_000, 10)
.into_iter()
@@ -1270,12 +972,12 @@ mod test {
curr_value,
curr_available_value,
target_amount,
FEE_AMOUNT,
fee_amount,
cost_of_change,
)
.unwrap();
assert!((result.fee_amount - 186.0).abs() < f32::EPSILON);
assert_eq!(result.selected_amount(), 100_000);
assert_eq!(result.fee_amount, 186);
}

// TODO: bnb() function should be optimized, and this test should be done with more utxos
@@ -1307,7 +1009,7 @@ mod test {
curr_value,
curr_available_value,
target_amount,
0,
0.0,
0.0,
)
.unwrap();
@@ -1333,10 +1035,12 @@ mod test {
utxos,
0,
target_amount as i64,
FEE_AMOUNT,
50.0,
);

assert!(result.selected_amount() > target_amount);
assert_eq!(result.fee_amount, (50 + result.selected.len() * 68) as u64);
assert!(
(result.fee_amount - (50.0 + result.selected.len() as f32 * 68.0)).abs() < f32::EPSILON
);
}
}

@@ -29,8 +29,8 @@
//! "label":"testnet"
//! }"#;
//!
//! let import = FullyNodedExport::from_str(import)?;
//! let wallet = Wallet::new(
//! let import = WalletExport::from_str(import)?;
//! let wallet = Wallet::new_offline(
//! &import.descriptor(),
//! import.change_descriptor().as_ref(),
//! Network::Testnet,
@@ -45,13 +45,13 @@
//! # use bdk::database::*;
//! # use bdk::wallet::export::*;
//! # use bdk::*;
//! let wallet = Wallet::new(
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tpubDD3ynpHgJQW8VvWRzQ5WFDCrs4jqVFGHB3vLC3r49XHJSqP8bHKdK4AriuUKLccK68zfzowx7YhmDN8SiSkgCDENUFx9qVw65YyqM78vyVe/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tpubDD3ynpHgJQW8VvWRzQ5WFDCrs4jqVFGHB3vLC3r49XHJSqP8bHKdK4AriuUKLccK68zfzowx7YhmDN8SiSkgCDENUFx9qVw65YyqM78vyVe/1/*)"),
//! Network::Testnet,
//! MemoryDatabase::default()
//! )?;
//! let export = FullyNodedExport::export_wallet(&wallet, "exported wallet", true)
//! let export = WalletExport::export_wallet(&wallet, "exported wallet", true)
//! .map_err(ToString::to_string)
//! .map_err(bdk::Error::Generic)?;
//!
@@ -64,21 +64,16 @@ use std::str::FromStr;
use serde::{Deserialize, Serialize};

use miniscript::descriptor::{ShInner, WshInner};
use miniscript::{Descriptor, ScriptContext, Terminal};
use miniscript::{Descriptor, DescriptorPublicKey, ScriptContext, Terminal};

use crate::database::BatchDatabase;
use crate::types::KeychainKind;
use crate::wallet::Wallet;

/// Alias for [`FullyNodedExport`]
#[deprecated(since = "0.18.0", note = "Please use [`FullyNodedExport`] instead")]
pub type WalletExport = FullyNodedExport;

/// Structure that contains the export of a wallet
///
/// For a usage example see [this module](crate::wallet::export)'s documentation.
#[derive(Debug, Serialize, Deserialize)]
pub struct FullyNodedExport {
pub struct WalletExport {
descriptor: String,
/// Earliest block to rescan when looking for the wallet's transactions
pub blockheight: u32,
@@ -86,13 +81,13 @@ pub struct FullyNodedExport {
pub label: String,
}

impl ToString for FullyNodedExport {
impl ToString for WalletExport {
fn to_string(&self) -> String {
serde_json::to_string(self).unwrap()
}
}

impl FromStr for FullyNodedExport {
impl FromStr for WalletExport {
type Err = serde_json::Error;

fn from_str(s: &str) -> Result<Self, Self::Err> {
@@ -101,10 +96,10 @@ impl FromStr for FullyNodedExport {
}

fn remove_checksum(s: String) -> String {
s.split_once('#').map(|(a, _)| String::from(a)).unwrap()
s.splitn(2, '#').next().map(String::from).unwrap()
}

impl FullyNodedExport {
impl WalletExport {
/// Export a wallet
///
/// This function returns an error if it determines that the `wallet`'s descriptor(s) are not
@@ -116,18 +111,14 @@ impl FullyNodedExport {
///
/// If the database is empty or `include_blockheight` is false, the `blockheight` field
/// returned will be `0`.
pub fn export_wallet<D: BatchDatabase>(
wallet: &Wallet<D>,
pub fn export_wallet<B, D: BatchDatabase>(
wallet: &Wallet<B, D>,
label: &str,
include_blockheight: bool,
) -> Result<Self, &'static str> {
let descriptor = wallet
.get_descriptor_for_keychain(KeychainKind::External)
.to_string_with_secret(
&wallet
.get_signers(KeychainKind::External)
.as_key_map(wallet.secp_ctx()),
);
.descriptor
.to_string_with_secret(&wallet.signers.as_key_map(wallet.secp_ctx()));
let descriptor = remove_checksum(descriptor);
Self::is_compatible_with_core(&descriptor)?;

@@ -145,30 +136,18 @@ impl FullyNodedExport {
}
};

let export = FullyNodedExport {
let export = WalletExport {
descriptor,
label: label.into(),
blockheight,
};

let change_descriptor = match wallet
.public_descriptor(KeychainKind::Internal)
.map_err(|_| "Invalid change descriptor")?
.is_some()
{
false => None,
true => {
let descriptor = wallet
.get_descriptor_for_keychain(KeychainKind::Internal)
.to_string_with_secret(
&wallet
.get_signers(KeychainKind::Internal)
.as_key_map(wallet.secp_ctx()),
);
Some(remove_checksum(descriptor))
}
let desc_to_string = |d: &Descriptor<DescriptorPublicKey>| {
let descriptor =
d.to_string_with_secret(&wallet.change_signers.as_key_map(wallet.secp_ctx()));
remove_checksum(descriptor)
};
if export.change_descriptor() != change_descriptor {
if export.change_descriptor() != wallet.change_descriptor.as_ref().map(desc_to_string) {
return Err("Incompatible change descriptor");
}

@@ -233,7 +212,7 @@ mod test {
use crate::database::{memory::MemoryDatabase, BatchOperations};
use crate::types::TransactionDetails;
use crate::wallet::Wallet;
use crate::BlockTime;
use crate::ConfirmationTime;

fn get_test_db() -> MemoryDatabase {
let mut db = MemoryDatabase::new();
@@ -247,10 +226,11 @@ mod test {
received: 100_000,
sent: 0,
fee: Some(500),
confirmation_time: Some(BlockTime {
confirmation_time: Some(ConfirmationTime {
timestamp: 12345678,
height: 5000,
}),
verified: true,
})
.unwrap();

@@ -262,14 +242,14 @@ mod test {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";

let wallet = Wallet::new(
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
let export = FullyNodedExport::export_wallet(&wallet, "Test Label", true).unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();

assert_eq!(export.descriptor(), descriptor);
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
@@ -286,8 +266,9 @@ mod test {

let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";

let wallet = Wallet::new(descriptor, None, Network::Bitcoin, get_test_db()).unwrap();
FullyNodedExport::export_wallet(&wallet, "Test Label", true).unwrap();
let wallet =
Wallet::new_offline(descriptor, None, Network::Bitcoin, get_test_db()).unwrap();
WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
}

#[test]
@@ -299,14 +280,14 @@ mod test {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/50'/0'/1/*)";

let wallet = Wallet::new(
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
FullyNodedExport::export_wallet(&wallet, "Test Label", true).unwrap();
WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
}

#[test]
@@ -322,14 +303,14 @@ mod test {
[c98b1535/48'/0'/0'/2']tpubDCDi5W4sP6zSnzJeowy8rQDVhBdRARaPhK1axABi8V1661wEPeanpEXj4ZLAUEoikVtoWcyK26TKKJSecSfeKxwHCcRrge9k1ybuiL71z4a/1/*\
))";

let wallet = Wallet::new(
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Testnet,
get_test_db(),
)
.unwrap();
let export = FullyNodedExport::export_wallet(&wallet, "Test Label", true).unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();

assert_eq!(export.descriptor(), descriptor);
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
@@ -342,14 +323,14 @@ mod test {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";

let wallet = Wallet::new(
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
let export = FullyNodedExport::export_wallet(&wallet, "Test Label", true).unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
|
||||
|
||||
assert_eq!(export.to_string(), "{\"descriptor\":\"wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44\'/0\'/0\'/0/*)\",\"blockheight\":5000,\"label\":\"Test Label\"}");
|
||||
}
|
||||
@@ -360,7 +341,7 @@ mod test {
|
||||
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";
|
||||
|
||||
let import_str = "{\"descriptor\":\"wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44\'/0\'/0\'/0/*)\",\"blockheight\":5000,\"label\":\"Test Label\"}";
|
||||
let export = FullyNodedExport::from_str(import_str).unwrap();
|
||||
let export = WalletExport::from_str(import_str).unwrap();
|
||||
|
||||
assert_eq!(export.descriptor(), descriptor);
|
||||
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
|
||||
|
||||
1958
src/wallet/mod.rs
File diff suppressed because it is too large
@@ -26,7 +26,7 @@
//! # #[derive(Debug)]
//! # struct CustomHSM;
//! # impl CustomHSM {
//! # fn hsm_sign_input(&self, _psbt: &mut psbt::PartiallySignedTransaction, _input: usize) -> Result<(), SignerError> {
//! # fn sign_input(&self, _psbt: &mut psbt::PartiallySignedTransaction, _input: usize) -> Result<(), SignerError> {
//! # Ok(())
//! # }
//! # fn connect() -> Self {
@@ -47,29 +47,32 @@
//! }
//! }
//!
//! impl SignerCommon for CustomSigner {
//! impl Signer for CustomSigner {
//! fn sign(
//! &self,
//! psbt: &mut psbt::PartiallySignedTransaction,
//! input_index: Option<usize>,
//! _secp: &Secp256k1<All>,
//! ) -> Result<(), SignerError> {
//! let input_index = input_index.ok_or(SignerError::InputIndexOutOfRange)?;
//! self.device.sign_input(psbt, input_index)?;
//!
//! Ok(())
//! }
//!
//! fn id(&self, _secp: &Secp256k1<All>) -> SignerId {
//! self.device.get_id()
//! }
//! }
//!
//! impl InputSigner for CustomSigner {
//! fn sign_input(
//! &self,
//! psbt: &mut psbt::PartiallySignedTransaction,
//! input_index: usize,
//! _secp: &Secp256k1<All>,
//! ) -> Result<(), SignerError> {
//! self.device.hsm_sign_input(psbt, input_index)?;
//!
//! Ok(())
//! fn sign_whole_tx(&self) -> bool {
//! false
//! }
//! }
//!
//! let custom_signer = CustomSigner::connect();
//!
//! let descriptor = "wpkh(tpubD6NzVbkrYhZ4Xferm7Pz4VnjdcDPFyjVu5K4iZXQ4pVN8Cks4pHVowTBXBKRhX64pkRyJZJN5xAKj4UDNnLPb5p2sSKXhewoYx5GbTdUFWq/*)";
//! let mut wallet = Wallet::new(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! let mut wallet = Wallet::new_offline(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! wallet.add_signer(
//! KeychainKind::External,
//! SignerOrdering(200),
@@ -82,26 +85,22 @@
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::fmt;
use std::ops::{Bound::Included, Deref};
use std::ops::Bound::Included;
use std::sync::Arc;

use bitcoin::blockdata::opcodes;
use bitcoin::blockdata::script::Builder as ScriptBuilder;
use bitcoin::hashes::{hash160, Hash};
use bitcoin::secp256k1::Message;
use bitcoin::secp256k1::{Message, Secp256k1};
use bitcoin::util::bip32::{ChildNumber, DerivationPath, ExtendedPrivKey, Fingerprint};
use bitcoin::util::{ecdsa, psbt, schnorr, sighash, taproot};
use bitcoin::{secp256k1, XOnlyPublicKey};
use bitcoin::{EcdsaSighashType, PrivateKey, PublicKey, SchnorrSighashType, Script};
use bitcoin::util::{bip143, psbt};
use bitcoin::{PrivateKey, Script, SigHash, SigHashType};

use miniscript::descriptor::{
Descriptor, DescriptorPublicKey, DescriptorSecretKey, DescriptorSinglePriv, DescriptorXKey,
KeyMap, SinglePubKey,
};
use miniscript::{Legacy, MiniscriptKey, Segwitv0, Tap};
use miniscript::descriptor::{DescriptorSecretKey, DescriptorSinglePriv, DescriptorXKey, KeyMap};
use miniscript::{Legacy, MiniscriptKey, Segwitv0};

use super::utils::SecpCtx;
use crate::descriptor::{DescriptorMeta, XKeyUtils};
use crate::descriptor::XKeyUtils;

/// Identifier of a signer in the `SignersContainers`. Used as a key to find the right signer among
/// multiple of them
@@ -144,7 +143,7 @@ pub enum SignerError {
InvalidNonWitnessUtxo,
/// The `witness_utxo` field of the transaction is required to sign this input
MissingWitnessUtxo,
/// The `witness_script` field of the transaction is required to sign this input
/// The `witness_script` field of the transaction is requied to sign this input
MissingWitnessScript,
/// The fingerprint and derivation path are missing from the psbt input
MissingHdKeypath,
@@ -154,16 +153,6 @@ pub enum SignerError {
/// To enable signing transactions with non-standard sighashes set
/// [`SignOptions::allow_all_sighashes`] to `true`.
NonStandardSighash,
/// Invalid SIGHASH for the signing context in use
InvalidSighash,
/// Error while computing the hash to sign
SighashError(sighash::Error),
}

impl From<sighash::Error> for SignerError {
fn from(e: sighash::Error) -> Self {
SignerError::SighashError(e)
}
}

impl fmt::Display for SignerError {
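The `From<sighash::Error>` impl being removed in this hunk is the standard Rust pattern that lets the `?` operator convert an upstream library error into the crate's own error enum. A minimal self-contained sketch of that pattern (the `LibError`/`MyError` names here are illustrative, not BDK's):

```rust
// Stand-in for an upstream library error (e.g. a sighash computation error).
#[derive(Debug)]
pub struct LibError(pub &'static str);

#[derive(Debug)]
pub enum MyError {
    InputIndexOutOfRange,
    Sighash(LibError),
}

// With this impl, `?` auto-converts a LibError into MyError.
impl From<LibError> for MyError {
    fn from(e: LibError) -> Self {
        MyError::Sighash(e)
    }
}

fn compute_hash(fail: bool) -> Result<u32, LibError> {
    if fail {
        Err(LibError("bad sighash"))
    } else {
        Ok(42)
    }
}

pub fn sign(fail: bool) -> Result<u32, MyError> {
    // The `?` here relies entirely on the From impl above.
    let hash = compute_hash(fail)?;
    Ok(hash)
}
```

Dropping the `From` impl (as this branch does) forces callers back to explicit `map_err` conversions at every call site.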
@@ -174,46 +163,27 @@ impl fmt::Display for SignerError {

impl std::error::Error for SignerError {}

/// Signing context
/// Trait for signers
///
/// Used by our software signers to determine the type of signatures to make
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SignerContext {
/// Legacy context
Legacy,
/// Segwit v0 context (BIP 143)
Segwitv0,
/// Taproot context (BIP 340)
Tap {
/// Whether the signer can sign for the internal key or not
is_internal_key: bool,
},
}
/// This trait can be implemented to provide customized signers to the wallet. For an example see
/// [`this module`](crate::wallet::signer)'s documentation.
pub trait Signer: fmt::Debug + Send + Sync {
/// Sign a PSBT
///
/// The `input_index` argument is only provided if the wallet doesn't declare to sign the whole
/// transaction in one go (see [`Signer::sign_whole_tx`]). Otherwise its value is `None` and
/// can be ignored.
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError>;

/// Wrapper structure to pair a signer with its context
#[derive(Debug, Clone)]
pub struct SignerWrapper<S: Sized + fmt::Debug + Clone> {
signer: S,
ctx: SignerContext,
}
/// Return whether or not the signer signs the whole transaction in one go instead of every
/// input individually
fn sign_whole_tx(&self) -> bool;

impl<S: Sized + fmt::Debug + Clone> SignerWrapper<S> {
/// Create a wrapped signer from a signer and a context
pub fn new(signer: S, ctx: SignerContext) -> Self {
SignerWrapper { signer, ctx }
}
}

impl<S: Sized + fmt::Debug + Clone> Deref for SignerWrapper<S> {
type Target = S;

fn deref(&self) -> &Self::Target {
&self.signer
}
}

/// Common signer methods
pub trait SignerCommon: fmt::Debug + Send + Sync {
/// Return the [`SignerId`] for this signer
///
/// The [`SignerId`] can be used to lookup a signer in the [`Wallet`](crate::Wallet)'s signers map or to
@@ -230,65 +200,14 @@ pub trait SignerCommon: fmt::Debug + Send + Sync {
}
}

/// PSBT Input signer
///
/// This trait can be implemented to provide custom signers to the wallet. If the signer supports signing
/// individual inputs, this trait should be implemented and BDK will provide automatically an implementation
/// for [`TransactionSigner`].
pub trait InputSigner: SignerCommon {
/// Sign a single psbt input
fn sign_input(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: usize,
secp: &SecpCtx,
) -> Result<(), SignerError>;
}

/// PSBT signer
///
/// This trait can be implemented when the signer can't sign inputs individually, but signs the whole transaction
/// at once.
pub trait TransactionSigner: SignerCommon {
/// Sign all the inputs of the psbt
fn sign_transaction(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
secp: &SecpCtx,
) -> Result<(), SignerError>;
}

impl<T: InputSigner> TransactionSigner for T {
fn sign_transaction(
impl Signer for DescriptorXKey<ExtendedPrivKey> {
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError> {
for input_index in 0..psbt.inputs.len() {
self.sign_input(psbt, input_index, secp)?;
}

Ok(())
}
}

impl SignerCommon for SignerWrapper<DescriptorXKey<ExtendedPrivKey>> {
fn id(&self, secp: &SecpCtx) -> SignerId {
SignerId::from(self.root_fingerprint(secp))
}

fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
Some(DescriptorSecretKey::XPrv(self.signer.clone()))
}
}

impl InputSigner for SignerWrapper<DescriptorXKey<ExtendedPrivKey>> {
fn sign_input(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: usize,
secp: &SecpCtx,
) -> Result<(), SignerError> {
let input_index = input_index.unwrap();
if input_index >= psbt.inputs.len() {
return Err(SignerError::InputIndexOutOfRange);
}
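The removed side of this hunk splits signing into `InputSigner` (per-input) and `TransactionSigner` (whole-PSBT), with a blanket impl so any per-input signer gets whole-transaction signing for free by looping over the inputs. A self-contained sketch of that blanket-impl pattern, with toy stand-ins for the PSBT types (not BDK's real API):

```rust
// Toy PSBT: each input is None (unsigned) or Some(sig byte).
pub struct Psbt {
    pub inputs: Vec<Option<u8>>,
}

pub trait InputSigner {
    fn sign_input(&self, psbt: &mut Psbt, input_index: usize) -> Result<(), String>;
}

pub trait TransactionSigner {
    fn sign_transaction(&self, psbt: &mut Psbt) -> Result<(), String>;
}

// Blanket impl: anything that can sign inputs one-by-one can sign a
// whole transaction by iterating over every input.
impl<T: InputSigner> TransactionSigner for T {
    fn sign_transaction(&self, psbt: &mut Psbt) -> Result<(), String> {
        for i in 0..psbt.inputs.len() {
            self.sign_input(psbt, i)?;
        }
        Ok(())
    }
}

pub struct DummySigner;

impl InputSigner for DummySigner {
    fn sign_input(&self, psbt: &mut Psbt, i: usize) -> Result<(), String> {
        psbt.inputs[i] = Some(1); // pretend to place a signature
        Ok(())
    }
}
```

A signer that can only sign a full transaction (e.g. some hardware devices) implements `TransactionSigner` directly instead, which is what the `sign_whole_tx` flag on the older `Signer` trait expressed.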
@@ -299,23 +218,19 @@ impl InputSigner for SignerWrapper<DescriptorXKey<ExtendedPrivKey>> {
return Ok(());
}

let tap_key_origins = psbt.inputs[input_index]
.tap_key_origins
.iter()
.map(|(pk, (_, keysource))| (SinglePubKey::XOnly(*pk), keysource));
let (public_key, full_path) = match psbt.inputs[input_index]
.bip32_derivation
.iter()
.map(|(pk, keysource)| (SinglePubKey::FullKey(PublicKey::new(*pk)), keysource))
.chain(tap_key_origins)
.find_map(|(pk, keysource)| {
if self.matches(keysource, secp).is_some() {
Some((pk, keysource.1.clone()))
.filter_map(|(pk, &(fingerprint, ref path))| {
if self.matches(&(fingerprint, path.clone()), secp).is_some() {
Some((pk, path))
} else {
None
}
}) {
Some((pk, full_path)) => (pk, full_path),
})
.next()
{
Some((pk, full_path)) => (pk, full_path.clone()),
None => return Ok(()),
};

@@ -330,48 +245,35 @@ impl InputSigner for SignerWrapper<DescriptorXKey<ExtendedPrivKey>> {
None => self.xkey.derive_priv(secp, &full_path).unwrap(),
};

let computed_pk = secp256k1::PublicKey::from_secret_key(secp, &derived_key.private_key);
let valid_key = match public_key {
SinglePubKey::FullKey(pk) if pk.inner == computed_pk => true,
SinglePubKey::XOnly(x_only) if XOnlyPublicKey::from(computed_pk) == x_only => true,
_ => false,
};
if !valid_key {
if &derived_key.private_key.public_key(secp) != public_key {
Err(SignerError::InvalidKey)
} else {
// HD wallets imply compressed keys
let priv_key = PrivateKey {
compressed: true,
network: self.xkey.network,
inner: derived_key.private_key,
};

SignerWrapper::new(priv_key, self.ctx).sign_input(psbt, input_index, secp)
derived_key.private_key.sign(psbt, Some(input_index), secp)
}
}
}

impl SignerCommon for SignerWrapper<PrivateKey> {
fn sign_whole_tx(&self) -> bool {
false
}

fn id(&self, secp: &SecpCtx) -> SignerId {
SignerId::from(self.public_key(secp).to_pubkeyhash())
SignerId::from(self.root_fingerprint(secp))
}

fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
Some(DescriptorSecretKey::SinglePriv(DescriptorSinglePriv {
key: self.signer,
origin: None,
}))
Some(DescriptorSecretKey::XPrv(self.clone()))
}
}

impl InputSigner for SignerWrapper<PrivateKey> {
fn sign_input(
impl Signer for PrivateKey {
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: usize,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.unsigned_tx.input.len() {
let input_index = input_index.unwrap();
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}
@@ -381,124 +283,49 @@ impl InputSigner for SignerWrapper<PrivateKey> {
return Ok(());
}

let pubkey = PublicKey::from_private_key(secp, self);
let x_only_pubkey = XOnlyPublicKey::from(pubkey.inner);

if let SignerContext::Tap { is_internal_key } = self.ctx {
if is_internal_key && psbt.inputs[input_index].tap_key_sig.is_none() {
let (hash, hash_ty) = Tap::sighash(psbt, input_index, None)?;
sign_psbt_schnorr(
&self.inner,
x_only_pubkey,
None,
&mut psbt.inputs[input_index],
hash,
hash_ty,
secp,
);
}

if let Some((leaf_hashes, _)) =
psbt.inputs[input_index].tap_key_origins.get(&x_only_pubkey)
{
let leaf_hashes = leaf_hashes
.iter()
.filter(|lh| {
!psbt.inputs[input_index]
.tap_script_sigs
.contains_key(&(x_only_pubkey, **lh))
})
.cloned()
.collect::<Vec<_>>();
for lh in leaf_hashes {
let (hash, hash_ty) = Tap::sighash(psbt, input_index, Some(lh))?;
sign_psbt_schnorr(
&self.inner,
x_only_pubkey,
Some(lh),
&mut psbt.inputs[input_index],
hash,
hash_ty,
secp,
);
}
}

return Ok(());
}

let pubkey = self.public_key(secp);
if psbt.inputs[input_index].partial_sigs.contains_key(&pubkey) {
return Ok(());
}

let (hash, hash_ty) = match self.ctx {
SignerContext::Segwitv0 => Segwitv0::sighash(psbt, input_index, ())?,
SignerContext::Legacy => Legacy::sighash(psbt, input_index, ())?,
_ => return Ok(()), // handled above
// FIXME: use the presence of `witness_utxo` as an indication that we should make a bip143
// sig. Does this make sense? Should we add an extra argument to explicitly swith between
// these? The original idea was to declare sign() as sign<Ctx: ScriptContex>() and use Ctx,
// but that violates the rules for trait-objects, so we can't do it.
let (hash, sighash) = match psbt.inputs[input_index].witness_utxo {
Some(_) => Segwitv0::sighash(psbt, input_index)?,
None => Legacy::sighash(psbt, input_index)?,
};
sign_psbt_ecdsa(
&self.inner,
pubkey,
&mut psbt.inputs[input_index],
hash,
hash_ty,
secp,

let signature = secp.sign(
&Message::from_slice(&hash.into_inner()[..]).unwrap(),
&self.key,
);

let mut final_signature = Vec::with_capacity(75);
final_signature.extend_from_slice(&signature.serialize_der());
final_signature.push(sighash.as_u32() as u8);

psbt.inputs[input_index]
.partial_sigs
.insert(pubkey, final_signature);

Ok(())
}
}

fn sign_psbt_ecdsa(
secret_key: &secp256k1::SecretKey,
pubkey: PublicKey,
psbt_input: &mut psbt::Input,
hash: bitcoin::Sighash,
hash_ty: EcdsaSighashType,
secp: &SecpCtx,
) {
let sig = secp.sign_ecdsa(
&Message::from_slice(&hash.into_inner()[..]).unwrap(),
secret_key,
);
fn sign_whole_tx(&self) -> bool {
false
}

let final_signature = ecdsa::EcdsaSig { sig, hash_ty };
psbt_input.partial_sigs.insert(pubkey, final_signature);
}
fn id(&self, secp: &SecpCtx) -> SignerId {
SignerId::from(self.public_key(secp).to_pubkeyhash())
}

// Calling this with `leaf_hash` = `None` will sign for key-spend
fn sign_psbt_schnorr(
secret_key: &secp256k1::SecretKey,
pubkey: XOnlyPublicKey,
leaf_hash: Option<taproot::TapLeafHash>,
psbt_input: &mut psbt::Input,
hash: taproot::TapSighashHash,
hash_ty: SchnorrSighashType,
secp: &SecpCtx,
) {
use schnorr::TapTweak;

let keypair = secp256k1::KeyPair::from_seckey_slice(secp, secret_key.as_ref()).unwrap();
let keypair = match leaf_hash {
None => keypair
.tap_tweak(secp, psbt_input.tap_merkle_root)
.into_inner(),
Some(_) => keypair, // no tweak for script spend
};

let sig = secp.sign_schnorr(
&Message::from_slice(&hash.into_inner()[..]).unwrap(),
&keypair,
);

let final_signature = schnorr::SchnorrSig { sig, hash_ty };

if let Some(lh) = leaf_hash {
psbt_input
.tap_script_sigs
.insert((pubkey, lh), final_signature);
} else {
psbt_input.tap_key_sig = Some(final_signature);
fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
Some(DescriptorSecretKey::SinglePriv(DescriptorSinglePriv {
key: *self,
origin: None,
}))
}
}
@@ -533,7 +360,7 @@ impl From<(SignerId, SignerOrdering)> for SignersContainerKey {

/// Container for multiple signers
#[derive(Debug, Default, Clone)]
pub struct SignersContainer(BTreeMap<SignersContainerKey, Arc<dyn TransactionSigner>>);
pub struct SignersContainer(BTreeMap<SignersContainerKey, Arc<dyn Signer>>);

impl SignersContainer {
/// Create a map of public keys to secret keys
@@ -544,37 +371,24 @@ impl SignersContainer {
.filter_map(|secret| secret.as_public(secp).ok().map(|public| (public, secret)))
.collect()
}
}

/// Build a new signer container from a [`KeyMap`]
///
/// Also looks at the corresponding descriptor to determine the [`SignerContext`] to attach to
/// the signers
pub fn build(
keymap: KeyMap,
descriptor: &Descriptor<DescriptorPublicKey>,
secp: &SecpCtx,
) -> SignersContainer {
impl From<KeyMap> for SignersContainer {
fn from(keymap: KeyMap) -> SignersContainer {
let secp = Secp256k1::new();
let mut container = SignersContainer::new();

for (pubkey, secret) in keymap {
let ctx = match descriptor {
Descriptor::Tr(tr) => SignerContext::Tap {
is_internal_key: tr.internal_key() == &pubkey,
},
_ if descriptor.is_witness() => SignerContext::Segwitv0,
_ => SignerContext::Legacy,
};

for (_, secret) in keymap {
match secret {
DescriptorSecretKey::SinglePriv(private_key) => container.add_external(
SignerId::from(private_key.key.public_key(secp).to_pubkeyhash()),
SignerId::from(private_key.key.public_key(&secp).to_pubkeyhash()),
SignerOrdering::default(),
Arc::new(SignerWrapper::new(private_key.key, ctx)),
Arc::new(private_key.key),
),
DescriptorSecretKey::XPrv(xprv) => container.add_external(
SignerId::from(xprv.root_fingerprint(secp)),
SignerId::from(xprv.root_fingerprint(&secp)),
SignerOrdering::default(),
Arc::new(SignerWrapper::new(xprv, ctx)),
Arc::new(xprv),
),
};
}
@@ -595,17 +409,13 @@ impl SignersContainer {
&mut self,
id: SignerId,
ordering: SignerOrdering,
signer: Arc<dyn TransactionSigner>,
) -> Option<Arc<dyn TransactionSigner>> {
signer: Arc<dyn Signer>,
) -> Option<Arc<dyn Signer>> {
self.0.insert((id, ordering).into(), signer)
}

/// Removes a signer from the container and returns it
pub fn remove(
&mut self,
id: SignerId,
ordering: SignerOrdering,
) -> Option<Arc<dyn TransactionSigner>> {
pub fn remove(&mut self, id: SignerId, ordering: SignerOrdering) -> Option<Arc<dyn Signer>> {
self.0.remove(&(id, ordering).into())
}

@@ -618,12 +428,12 @@ impl SignersContainer {
}

/// Returns the list of signers in the container, sorted by lowest to highest `ordering`
pub fn signers(&self) -> Vec<&Arc<dyn TransactionSigner>> {
pub fn signers(&self) -> Vec<&Arc<dyn Signer>> {
self.0.values().collect()
}

/// Finds the signer with lowest ordering for a given id in the container.
pub fn find(&self, id: SignerId) -> Option<&Arc<dyn TransactionSigner>> {
pub fn find(&self, id: SignerId) -> Option<&Arc<dyn Signer>> {
self.0
.range((
Included(&(id.clone(), SignerOrdering(0)).into()),
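`SignersContainer::find` works because the container's `BTreeMap` is keyed by `(id, ordering)`: a range query bounded by `ordering = 0` and the maximum ordering walks only that id's entries, and the first hit is the lowest ordering. A self-contained sketch of that lookup with simplified key types (illustrative names, not BDK's):

```rust
use std::collections::BTreeMap;
use std::ops::Bound::Included;

// Simplified container key. The derived Ord is lexicographic, so all
// entries sharing an id sort consecutively, ordered by `ordering`.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct Key {
    pub id: u64,
    pub ordering: u32,
}

pub struct Container(pub BTreeMap<Key, &'static str>);

impl Container {
    // Find the signer with the lowest ordering for `id`, via a
    // bounded range query instead of a full scan.
    pub fn find(&self, id: u64) -> Option<&&'static str> {
        self.0
            .range((
                Included(Key { id, ordering: 0 }),
                Included(Key { id, ordering: u32::MAX }),
            ))
            .map(|(_, v)| v)
            .next()
    }
}
```

The range query is O(log n) to locate the start, while still returning entries in ordering-sorted order.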
@@ -667,65 +477,38 @@ pub struct SignOptions {
///
/// Defaults to `false` which will only allow signing using `SIGHASH_ALL`.
pub allow_all_sighashes: bool,

/// Whether to remove partial_sigs from psbt inputs while finalizing psbt.
///
/// Defaults to `true` which will remove partial_sigs after finalizing.
pub remove_partial_sigs: bool,

/// Whether to try finalizing psbt input after the inputs are signed.
///
/// Defaults to `true` which will try fianlizing psbt after inputs are signed.
pub try_finalize: bool,
}

#[allow(clippy::derivable_impls)]
impl Default for SignOptions {
fn default() -> Self {
SignOptions {
trust_witness_utxo: false,
assume_height: None,
allow_all_sighashes: false,
remove_partial_sigs: true,
try_finalize: true,
}
}
}

pub(crate) trait ComputeSighash {
type Extra;
type Sighash;
type SighashType;

fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
extra: Self::Extra,
) -> Result<(Self::Sighash, Self::SighashType), SignerError>;
) -> Result<(SigHash, SigHashType), SignerError>;
}

impl ComputeSighash for Legacy {
type Extra = ();
type Sighash = bitcoin::Sighash;
type SighashType = EcdsaSighashType;

fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
_extra: (),
) -> Result<(Self::Sighash, Self::SighashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.unsigned_tx.input.len() {
) -> Result<(SigHash, SigHashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}

let psbt_input = &psbt.inputs[input_index];
let tx_input = &psbt.unsigned_tx.input[input_index];
let tx_input = &psbt.global.unsigned_tx.input[input_index];

let sighash = psbt_input
.sighash_type
.unwrap_or_else(|| EcdsaSighashType::All.into())
.ecdsa_hash_ty()
.map_err(|_| SignerError::InvalidSighash)?;
let sighash = psbt_input.sighash_type.unwrap_or(SigHashType::All);
let script = match psbt_input.redeem_script {
Some(ref redeem_script) => redeem_script.clone(),
None => {
@@ -743,11 +526,9 @@ impl ComputeSighash for Legacy {
};

Ok((
sighash::SighashCache::new(&psbt.unsigned_tx).legacy_signature_hash(
input_index,
&script,
sighash.to_u32(),
)?,
psbt.global
.unsigned_tx
.signature_hash(input_index, &script, sighash.as_u32()),
sighash,
))
}
@@ -764,27 +545,18 @@ fn p2wpkh_script_code(script: &Script) -> Script {
}

impl ComputeSighash for Segwitv0 {
type Extra = ();
type Sighash = bitcoin::Sighash;
type SighashType = EcdsaSighashType;

fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
_extra: (),
) -> Result<(Self::Sighash, Self::SighashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.unsigned_tx.input.len() {
) -> Result<(SigHash, SigHashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}

let psbt_input = &psbt.inputs[input_index];
let tx_input = &psbt.unsigned_tx.input[input_index];
let tx_input = &psbt.global.unsigned_tx.input[input_index];

let sighash = psbt_input
.sighash_type
.unwrap_or_else(|| EcdsaSighashType::All.into())
.ecdsa_hash_ty()
.map_err(|_| SignerError::InvalidSighash)?;
let sighash = psbt_input.sighash_type.unwrap_or(SigHashType::All);

// Always try first with the non-witness utxo
let utxo = if let Some(prev_tx) = &psbt_input.non_witness_utxo {
@@ -827,72 +599,17 @@ impl ComputeSighash for Segwitv0 {
};

Ok((
sighash::SighashCache::new(&psbt.unsigned_tx).segwit_signature_hash(
bip143::SigHashCache::new(&psbt.global.unsigned_tx).signature_hash(
input_index,
&script,
value,
sighash,
)?,
),
sighash,
))
}
}

impl ComputeSighash for Tap {
type Extra = Option<taproot::TapLeafHash>;
type Sighash = taproot::TapSighashHash;
type SighashType = SchnorrSighashType;

fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
extra: Self::Extra,
) -> Result<(Self::Sighash, SchnorrSighashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}

let psbt_input = &psbt.inputs[input_index];

let sighash_type = psbt_input
.sighash_type
.unwrap_or_else(|| SchnorrSighashType::Default.into())
.schnorr_hash_ty()
.map_err(|_| SignerError::InvalidSighash)?;
let witness_utxos = psbt
.inputs
.iter()
.cloned()
.map(|i| i.witness_utxo)
.collect::<Vec<_>>();
let mut all_witness_utxos = vec![];

let mut cache = sighash::SighashCache::new(&psbt.unsigned_tx);
let is_anyone_can_pay = psbt::PsbtSighashType::from(sighash_type).to_u32() & 0x80 != 0;
let prevouts = if is_anyone_can_pay {
sighash::Prevouts::One(
input_index,
witness_utxos[input_index]
.as_ref()
.ok_or(SignerError::MissingWitnessUtxo)?,
)
} else if witness_utxos.iter().all(Option::is_some) {
all_witness_utxos.extend(witness_utxos.iter().filter_map(|x| x.as_ref()));
sighash::Prevouts::All(&all_witness_utxos)
} else {
return Err(SignerError::MissingWitnessUtxo);
};

// Assume no OP_CODESEPARATOR
let extra = extra.map(|leaf_hash| (leaf_hash, 0xFFFFFFFF));

Ok((
cache.taproot_signature_hash(input_index, &prevouts, None, extra, sighash_type)?,
sighash_type,
))
}
}

impl PartialOrd for SignersContainerKey {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
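`SignOptions` above is the usual Rust options-struct pattern: a manual `Default` impl encodes the safe defaults (`remove_partial_sigs` and `try_finalize` default to `true`, the rest to `false`/`None`), and callers override a single field with struct-update syntax. A self-contained sketch of the pattern (the `Options` type here is an illustrative mirror, not BDK's struct):

```rust
// Simplified mirror of a SignOptions-style struct.
#[derive(Debug, Clone, PartialEq)]
pub struct Options {
    pub trust_witness_utxo: bool,
    pub allow_all_sighashes: bool,
    pub remove_partial_sigs: bool,
    pub try_finalize: bool,
}

impl Default for Options {
    fn default() -> Self {
        Options {
            trust_witness_utxo: false,
            allow_all_sighashes: false,
            remove_partial_sigs: true, // note: defaults to true
            try_finalize: true,        // note: defaults to true
        }
    }
}
```

A caller flips one flag without naming the rest: `Options { try_finalize: false, ..Default::default() }`.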
@@ -923,11 +640,12 @@ mod signers_container_tests {
    use crate::keys::{DescriptorKey, IntoDescriptorKey};
    use bitcoin::secp256k1::{All, Secp256k1};
    use bitcoin::util::bip32;
    use bitcoin::util::psbt::PartiallySignedTransaction;
    use bitcoin::Network;
    use miniscript::ScriptContext;
    use std::str::FromStr;

    fn is_equal(this: &Arc<dyn TransactionSigner>, that: &Arc<DummySigner>) -> bool {
    fn is_equal(this: &Arc<dyn Signer>, that: &Arc<DummySigner>) -> bool {
        let secp = Secp256k1::new();
        this.id(&secp) == that.id(&secp)
    }
@@ -942,11 +660,11 @@ mod signers_container_tests {
        let (prvkey1, _, _) = setup_keys(TPRV0_STR);
        let (prvkey2, _, _) = setup_keys(TPRV1_STR);
        let desc = descriptor!(sh(multi(2, prvkey1, prvkey2))).unwrap();
        let (wallet_desc, keymap) = desc
        let (_, keymap) = desc
            .into_wallet_descriptor(&secp, Network::Testnet)
            .unwrap();

        let signers = SignersContainer::build(keymap, &wallet_desc, &secp);
        let signers = SignersContainer::from(keymap);
        assert_eq!(signers.ids().len(), 2);

        let signers = signers.signers();
@@ -1008,20 +726,23 @@ mod signers_container_tests {
        number: u64,
    }

    impl SignerCommon for DummySigner {
        fn id(&self, _secp: &SecpCtx) -> SignerId {
            SignerId::Dummy(self.number)
        }
    }

    impl TransactionSigner for DummySigner {
        fn sign_transaction(
    impl Signer for DummySigner {
        fn sign(
            &self,
            _psbt: &mut psbt::PartiallySignedTransaction,
            _psbt: &mut PartiallySignedTransaction,
            _input_index: Option<usize>,
            _secp: &SecpCtx,
        ) -> Result<(), SignerError> {
            Ok(())
        }

        fn id(&self, _secp: &SecpCtx) -> SignerId {
            SignerId::Dummy(self.number)
        }

        fn sign_whole_tx(&self) -> bool {
            true
        }
    }

    const TPRV0_STR:&str = "tprv8ZgxMBicQKsPdZXrcHNLf5JAJWFAoJ2TrstMRdSKtEggz6PddbuSkvHKM9oKJyFgZV1B7rw8oChspxyYbtmEXYyg1AjfWbL3ho3XHDpHRZf";
@@ -1035,7 +756,7 @@ mod signers_container_tests {
        let secp: Secp256k1<All> = Secp256k1::new();
        let path = bip32::DerivationPath::from_str(PATH).unwrap();
        let tprv = bip32::ExtendedPrivKey::from_str(tprv).unwrap();
        let tpub = bip32::ExtendedPubKey::from_priv(&secp, &tprv);
        let tpub = bip32::ExtendedPubKey::from_private(&secp, &tprv);
        let fingerprint = tprv.fingerprint(&secp);
        let prvkey = (tprv, path.clone()).into_descriptor_key().unwrap();
        let pubkey = (tpub, path).into_descriptor_key().unwrap();

@@ -42,7 +42,7 @@ use std::default::Default;
use std::marker::PhantomData;

use bitcoin::util::psbt::{self, PartiallySignedTransaction as Psbt};
use bitcoin::{OutPoint, Script, Transaction};
use bitcoin::{OutPoint, Script, SigHashType, Transaction};

use miniscript::descriptor::DescriptorTrait;

@@ -103,7 +103,10 @@ impl TxBuilderContext for BumpFee {}
/// builder.finish()?
/// };
///
/// assert_eq!(psbt1.unsigned_tx.output[..2], psbt2.unsigned_tx.output[..2]);
/// assert_eq!(
///     psbt1.global.unsigned_tx.output[..2],
///     psbt2.global.unsigned_tx.output[..2]
/// );
/// # Ok::<(), bdk::Error>(())
/// ```
///
@@ -117,8 +120,8 @@ impl TxBuilderContext for BumpFee {}
/// [`finish`]: Self::finish
/// [`coin_selection`]: Self::coin_selection
#[derive(Debug)]
pub struct TxBuilder<'a, D, Cs, Ctx> {
    pub(crate) wallet: &'a Wallet<D>,
pub struct TxBuilder<'a, B, D, Cs, Ctx> {
    pub(crate) wallet: &'a Wallet<B, D>,
    pub(crate) params: TxParams,
    pub(crate) coin_selection: Cs,
    pub(crate) phantom: PhantomData<Ctx>,
@@ -137,7 +140,7 @@ pub(crate) struct TxParams {
    pub(crate) utxos: Vec<WeightedUtxo>,
    pub(crate) unspendable: HashSet<OutPoint>,
    pub(crate) manually_selected_only: bool,
    pub(crate) sighash: Option<psbt::PsbtSighashType>,
    pub(crate) sighash: Option<SigHashType>,
    pub(crate) ordering: TxOrdering,
    pub(crate) locktime: Option<u32>,
    pub(crate) rbf: Option<RbfValue>,
@@ -147,7 +150,6 @@ pub(crate) struct TxParams {
    pub(crate) add_global_xpubs: bool,
    pub(crate) include_output_redeem_witness_script: bool,
    pub(crate) bumping_fee: Option<PreviousFee>,
    pub(crate) current_height: Option<u32>,
}

#[derive(Clone, Copy, Debug)]
@@ -168,7 +170,7 @@ impl std::default::Default for FeePolicy {
    }
}

impl<'a, Cs: Clone, Ctx, D> Clone for TxBuilder<'a, D, Cs, Ctx> {
impl<'a, Cs: Clone, Ctx, B, D> Clone for TxBuilder<'a, B, D, Cs, Ctx> {
    fn clone(&self) -> Self {
        TxBuilder {
            wallet: self.wallet,
@@ -180,8 +182,8 @@ impl<'a, Cs: Clone, Ctx, D> Clone for TxBuilder<'a, D, Cs, Ctx> {
}

// methods supported by both contexts, for any CoinSelectionAlgorithm
impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    TxBuilder<'a, D, Cs, Ctx>
impl<'a, B, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    TxBuilder<'a, B, D, Cs, Ctx>
{
    /// Set a custom fee rate
    pub fn fee_rate(&mut self, fee_rate: FeeRate) -> &mut Self {
@@ -308,7 +310,7 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    /// 2. `psbt_input`: To know the value.
    /// 3. `satisfaction_weight`: To know how much weight/vbytes the input will add to the transaction for fee calculation.
    ///
    /// There are several security concerns about adding foreign UTXOs that application
    /// There are several security concerns about adding foregin UTXOs that application
    /// developers should consider. First, how do you know the value of the input is correct? If a
    /// `non_witness_utxo` is provided in the `psbt_input` then this method implicitly verifies the
    /// value by checking it against the transaction. If only a `witness_utxo` is provided then this
@@ -335,9 +337,8 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    /// 1. The `psbt_input` does not contain a `witness_utxo` or `non_witness_utxo`.
    /// 2. The data in `non_witness_utxo` does not match what is in `outpoint`.
    ///
    /// Note unless you set [`only_witness_utxo`] any non-taproot `psbt_input` you pass to this
    /// method must have `non_witness_utxo` set otherwise you will get an error when [`finish`]
    /// is called.
    /// Note unless you set [`only_witness_utxo`] any `psbt_input` you pass to this method must
    /// have `non_witness_utxo` set otherwise you will get an error when [`finish`] is called.
    ///
    /// [`only_witness_utxo`]: Self::only_witness_utxo
    /// [`finish`]: Self::finish
@@ -411,7 +412,7 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    /// Sign with a specific sig hash
    ///
    /// **Use this option very carefully**
    pub fn sighash(&mut self, sighash: psbt::PsbtSighashType) -> &mut Self {
    pub fn sighash(&mut self, sighash: SigHashType) -> &mut Self {
        self.params.sighash = Some(sighash);
        self
    }
@@ -507,7 +508,7 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
    pub fn coin_selection<P: CoinSelectionAlgorithm<D>>(
        self,
        coin_selection: P,
    ) -> TxBuilder<'a, D, P, Ctx> {
    ) -> TxBuilder<'a, B, D, P, Ctx> {
        TxBuilder {
            wallet: self.wallet,
            params: self.params,
@@ -544,25 +545,9 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
        self.params.rbf = Some(RbfValue::Value(nsequence));
        self
    }

    /// Set the current blockchain height.
    ///
    /// This will be used to:
    /// 1. Set the nLockTime for preventing fee sniping.
    ///    **Note**: This will be ignored if you manually specify a nlocktime using [`TxBuilder::nlocktime`].
    /// 2. Decide whether coinbase outputs are mature or not. If the coinbase outputs are not
    ///    mature at `current_height`, we ignore them in the coin selection.
    ///    If you want to create a transaction that spends immature coinbase inputs, manually
    ///    add them using [`TxBuilder::add_utxos`].
    ///
    /// In both cases, if you don't provide a current height, we use the last sync height.
    pub fn current_height(&mut self, height: u32) -> &mut Self {
        self.params.current_height = Some(height);
        self
    }
}

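The `current_height` doc comment describes two policies: anti-fee-sniping (default the nLockTime to the tip unless the caller set one explicitly) and coinbase maturity (100 confirmations before a coinbase output is spendable). A standalone sketch of both decisions under those stated rules; the helper names and the simplified maturity arithmetic are ours, not BDK's API:

```rust
// Consensus rule: a coinbase output needs 100 confirmations before it can be spent.
const COINBASE_MATURITY: u32 = 100;

// An explicitly requested nlocktime wins; otherwise lock to the current
// tip height to discourage fee sniping (hypothetical helper).
fn effective_nlocktime(explicit: Option<u32>, current_height: u32) -> u32 {
    explicit.unwrap_or(current_height)
}

// A coinbase mined at `coinbase_height` has `tip - coinbase_height + 1`
// confirmations at tip height `current_height` (hypothetical helper).
fn is_mature(coinbase_height: u32, current_height: u32) -> bool {
    current_height.saturating_sub(coinbase_height) + 1 >= COINBASE_MATURITY
}

fn main() {
    assert_eq!(effective_nlocktime(None, 700_000), 700_000);
    assert_eq!(effective_nlocktime(Some(0), 700_000), 0);
    assert!(!is_mature(700_000, 700_050)); // only 51 confirmations
    assert!(is_mature(700_000, 700_099)); // exactly 100 confirmations
    println!("ok");
}
```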
impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, D, Cs, CreateTx> {
impl<'a, B, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, B, D, Cs, CreateTx> {
    /// Replace the recipients already added with a new list
    pub fn set_recipients(&mut self, recipients: Vec<(Script, u64)>) -> &mut Self {
        self.params.recipients = recipients;
@@ -575,13 +560,6 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, D, Cs, C
        self
    }

    /// Add data as an output, using OP_RETURN
    pub fn add_data(&mut self, data: &[u8]) -> &mut Self {
        let script = Script::new_op_return(data);
        self.add_recipient(script, 0u64);
        self
    }

    /// Sets the address to *drain* excess coins to.
    ///
    /// Usually, when there are excess coins they are sent to a change address generated by the
@@ -591,9 +569,6 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, D, Cs, C
    /// difference is that it is valid to use `drain_to` without setting any ordinary recipients
    /// with [`add_recipient`] (but it is perfectly fine to add recipients as well).
    ///
    /// If you choose not to set any recipients, you should either provide the utxos that the
    /// transaction should spend via [`add_utxos`], or set [`drain_wallet`] to spend all of them.
    ///
    /// When bumping the fees of a transaction made with this option, you probably want to
    /// use [`allow_shrinking`] to allow this output to be reduced to pay for the extra fees.
    ///
@@ -624,7 +599,6 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, D, Cs, C
    ///
    /// [`allow_shrinking`]: Self::allow_shrinking
    /// [`add_recipient`]: Self::add_recipient
    /// [`add_utxos`]: Self::add_utxos
    /// [`drain_wallet`]: Self::drain_wallet
    pub fn drain_to(&mut self, script_pubkey: Script) -> &mut Self {
        self.params.drain_to = Some(script_pubkey);
@@ -633,8 +607,8 @@ impl<'a, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, D, Cs, C
}

// methods supported only by bump_fee
impl<'a, D: BatchDatabase> TxBuilder<'a, D, DefaultCoinSelectionAlgorithm, BumpFee> {
    /// Explicitly tells the wallet that it is allowed to reduce the amount of the output matching this
impl<'a, B, D: BatchDatabase> TxBuilder<'a, B, D, DefaultCoinSelectionAlgorithm, BumpFee> {
    /// Explicitly tells the wallet that it is allowed to reduce the fee of the output matching this
    /// `script_pubkey` in order to bump the transaction fee. Without specifying this the wallet
    /// will attempt to find a change output to shrink instead.
    ///
@@ -857,7 +831,6 @@ mod test {
            },
            txout: Default::default(),
            keychain: KeychainKind::External,
            is_spent: false,
        },
        LocalUtxo {
            outpoint: OutPoint {
@@ -866,7 +839,6 @@ mod test {
            },
            txout: Default::default(),
            keychain: KeychainKind::Internal,
            is_spent: false,
        },
    ]
}

@@ -9,11 +9,13 @@
// You may not use this file except in accordance with one or both of these
// licenses.

use bitcoin::blockdata::script::Script;
use bitcoin::secp256k1::{All, Secp256k1};

use miniscript::{MiniscriptKey, Satisfier, ToPublicKey};

// De-facto standard "dust limit" (even though it should change based on the output type)
pub const DUST_LIMIT_SATOSHI: u64 = 546;

// MSB of the nSequence. If set there's no consensus-constraint, so it must be disabled when
// spending using CSV in order to enforce CSV rules
pub(crate) const SEQUENCE_LOCKTIME_DISABLE_FLAG: u32 = 1 << 31;
@@ -26,19 +28,18 @@ pub(crate) const SEQUENCE_LOCKTIME_MASK: u32 = 0x0000FFFF;
// Threshold for nLockTime to be considered a block-height-based timelock rather than time-based
pub(crate) const BLOCKS_TIMELOCK_THRESHOLD: u32 = 500000000;

/// Trait to check if a value is below the dust limit.
/// We are performing dust value calculation for a given script public key using rust-bitcoin to
/// keep it compatible with network dust rate
/// Trait to check if a value is below the dust limit
// we implement this trait to make sure we don't mess up the comparison with off-by-one like a <
// instead of a <= etc.
// instead of a <= etc. The constant value for the dust limit is not public on purpose, to
// encourage the usage of this trait.
pub trait IsDust {
    /// Check whether or not a value is below dust limit
    fn is_dust(&self, script: &Script) -> bool;
    fn is_dust(&self) -> bool;
}

impl IsDust for u64 {
    fn is_dust(&self, script: &Script) -> bool {
        *self < script.dust_value().as_sat()
    fn is_dust(&self) -> bool {
        *self <= DUST_LIMIT_SATOSHI
    }
}
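The two `is_dust` variants in this hunk pick the threshold differently: one asks rust-bitcoin for a per-script dust value (`script.dust_value()`), the other compares against the fixed 546-satoshi constant. A self-contained sketch of the constant-based variant, with the trait and constant taken from the hunk and the test values ours:

```rust
const DUST_LIMIT_SATOSHI: u64 = 546;

trait IsDust {
    fn is_dust(&self) -> bool;
}

impl IsDust for u64 {
    // Keeping the `<=` comparison behind the trait avoids off-by-one
    // mistakes at call sites, as the hunk's comment explains.
    fn is_dust(&self) -> bool {
        *self <= DUST_LIMIT_SATOSHI
    }
}

fn main() {
    assert!(545u64.is_dust());
    assert!(546u64.is_dust()); // the limit itself counts as dust in this variant
    assert!(!547u64.is_dust());
    println!("ok");
}
```

Note the two variants disagree at the boundary: `<= 546` calls 546 sats dust, while the script-based `< dust_value()` does not, which matches the `assert!(!546.is_dust(&script_p2pkh))` seen in the tests below.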
@@ -137,32 +138,47 @@ impl<Pk: MiniscriptKey + ToPublicKey> Satisfier<Pk> for Older {

pub(crate) type SecpCtx = Secp256k1<All>;

pub struct ChunksIterator<I: Iterator> {
    iter: I,
    size: usize,
}

#[cfg(any(feature = "electrum", feature = "esplora"))]
impl<I: Iterator> ChunksIterator<I> {
    pub fn new(iter: I, size: usize) -> Self {
        ChunksIterator { iter, size }
    }
}

impl<I: Iterator> Iterator for ChunksIterator<I> {
    type Item = Vec<<I as std::iter::Iterator>::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        let mut v = Vec::new();
        for _ in 0..self.size {
            let e = self.iter.next();

            match e {
                None => break,
                Some(val) => v.push(val),
            }
        }

        if v.is_empty() {
            return None;
        }

        Some(v)
    }
}
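`ChunksIterator` groups an underlying iterator into `Vec`s of at most `size` items, yielding a shorter final chunk and ending cleanly once the source is exhausted. An equivalent, compact reimplementation of its loop with a small usage test (the `#[cfg]` feature gate is dropped since it only matters inside the crate):

```rust
struct ChunksIterator<I: Iterator> {
    iter: I,
    size: usize,
}

impl<I: Iterator> Iterator for ChunksIterator<I> {
    type Item = Vec<I::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        // Pull up to `size` items; a shorter tail chunk is still yielded.
        let v: Vec<_> = self.iter.by_ref().take(self.size).collect();
        if v.is_empty() {
            None
        } else {
            Some(v)
        }
    }
}

fn main() {
    let chunks: Vec<Vec<u32>> = ChunksIterator { iter: 1..=7u32, size: 3 }.collect();
    assert_eq!(chunks, vec![vec![1, 2, 3], vec![4, 5, 6], vec![7]]);
    println!("ok");
}
```

BDK uses this to batch script-pubkey requests to Electrum/Esplora backends, which is why the original is gated on those features.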
#[cfg(test)]
mod test {
    use super::{
        check_nlocktime, check_nsequence_rbf, IsDust, BLOCKS_TIMELOCK_THRESHOLD,
        check_nlocktime, check_nsequence_rbf, BLOCKS_TIMELOCK_THRESHOLD,
        SEQUENCE_LOCKTIME_TYPE_FLAG,
    };
    use crate::bitcoin::Address;
    use crate::types::FeeRate;
    use std::str::FromStr;

    #[test]
    fn test_is_dust() {
        let script_p2pkh = Address::from_str("1GNgwA8JfG7Kc8akJ8opdNWJUihqUztfPe")
            .unwrap()
            .script_pubkey();
        assert!(script_p2pkh.is_p2pkh());
        assert!(545.is_dust(&script_p2pkh));
        assert!(!546.is_dust(&script_p2pkh));

        let script_p2wpkh = Address::from_str("bc1qxlh2mnc0yqwas76gqq665qkggee5m98t8yskd8")
            .unwrap()
            .script_pubkey();
        assert!(script_p2wpkh.is_v0_p2wpkh());
        assert!(293.is_dust(&script_p2wpkh));
        assert!(!294.is_dust(&script_p2wpkh));
    }

    #[test]
    fn test_fee_from_btc_per_kb() {
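The nSequence constants in this file encode two separate rules: BIP68 only enforces a relative timelock (CSV) when bit 31, `SEQUENCE_LOCKTIME_DISABLE_FLAG`, is clear, and BIP125 treats any input with nSequence below `0xFFFFFFFE` as signaling replaceability. A minimal sketch of both checks; the helper names are ours, while the crate's own `check_nsequence_rbf` imported in the tests above is the fuller version:

```rust
const SEQUENCE_LOCKTIME_DISABLE_FLAG: u32 = 1 << 31;

// BIP68: the relative timelock encoded in nSequence is only
// consensus-enforced when the disable bit (bit 31) is clear.
fn csv_enforced(nsequence: u32) -> bool {
    nsequence & SEQUENCE_LOCKTIME_DISABLE_FLAG == 0
}

// BIP125: an input opts in to replace-by-fee when nSequence < 0xFFFFFFFE.
fn signals_rbf(nsequence: u32) -> bool {
    nsequence < 0xFFFF_FFFE
}

fn main() {
    assert!(csv_enforced(0x0000_0006)); // plain 6-block CSV
    assert!(!csv_enforced(0x8000_0006)); // disable flag set, CSV ignored
    assert!(signals_rbf(0xFFFF_FFFD));
    assert!(!signals_rbf(0xFFFF_FFFF)); // final, non-replaceable
    println!("ok");
}
```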
@@ -17,7 +17,7 @@ use std::fmt;
use bitcoin::consensus::serialize;
use bitcoin::{OutPoint, Transaction, Txid};

use crate::blockchain::GetTx;
use crate::blockchain::Blockchain;
use crate::database::Database;
use crate::error::Error;

@@ -29,7 +29,7 @@ use crate::error::Error;
/// Depending on the [capabilities](crate::blockchain::Blockchain::get_capabilities) of the
/// [`Blockchain`] backend, the method could fail when called with old "historical" transactions or
/// with unconfirmed transactions that have been evicted from the backend's memory.
pub fn verify_tx<D: Database, B: GetTx>(
pub fn verify_tx<D: Database, B: Blockchain>(
    tx: &Transaction,
    database: &D,
    blockchain: &B,
@@ -104,18 +104,43 @@ impl_error!(bitcoinconsensus::Error, Consensus, VerifyError);

#[cfg(test)]
mod test {
    use super::*;
    use crate::database::{BatchOperations, MemoryDatabase};
    use std::collections::HashSet;

    use bitcoin::consensus::encode::deserialize;
    use bitcoin::hashes::hex::FromHex;
    use bitcoin::{Transaction, Txid};

    use crate::blockchain::{Blockchain, Capability, Progress};
    use crate::database::{BatchDatabase, BatchOperations, MemoryDatabase};
    use crate::FeeRate;

    use super::*;

    struct DummyBlockchain;

    impl GetTx for DummyBlockchain {
    impl Blockchain for DummyBlockchain {
        fn get_capabilities(&self) -> HashSet<Capability> {
            Default::default()
        }
        fn setup<D: BatchDatabase, P: 'static + Progress>(
            &self,
            _database: &mut D,
            _progress_update: P,
        ) -> Result<(), Error> {
            Ok(())
        }
        fn get_tx(&self, _txid: &Txid) -> Result<Option<Transaction>, Error> {
            Ok(None)
        }
        fn broadcast(&self, _tx: &Transaction) -> Result<(), Error> {
            Ok(())
        }
        fn get_height(&self) -> Result<u32, Error> {
            Ok(42)
        }
        fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
            Ok(FeeRate::default_min_relay_fee())
        }
    }

    #[test]
BIN static/bdk.png
Binary file not shown. (Before: 5.5 KiB)