Compare commits

...

761 Commits

Author SHA1 Message Date
Steve Myers
9e30a79027 Fix CHANGELOG link for v0.15.0 2021-12-29 13:17:11 -08:00
Steve Myers
d2b6b5545e Bump version to 0.15.1-dev 2021-12-23 10:40:42 -08:00
Steve Myers
4d7c4bc810 Bump version to 0.15.0 2021-12-22 21:10:27 -08:00
Steve Myers
98c26a1ad9 Bump version to 0.15.0-rc.1 2021-12-17 21:41:06 -08:00
Steve Myers
e82edbb7ac Merge bitcoindevkit/bdk#501: Only run clippy for the stable rust version
57a1185aef Only run clippy for the stable rust version (Steve Myers)

Pull request description:

  ### Description

  It was decided during the team call today (2021-12-14) to only run clippy for the stable rust version.

  ### Notes to the reviewers

  This is required to fix the build issues below when running clippy on Rust version 1.46.0.

  ```shell
  cargo clippy --all-targets --features async-interface --no-default-features -- -D warnings
  ```

  ```text
  ...

  Checking bitcoincore-rpc v0.14.0
  error: unknown clippy lint: clippy::no_effect_underscore_binding
    --> src/blockchain/mod.rs:88:1
     |
  88 | #[maybe_async]
     | ^^^^^^^^^^^^^^
     |
     = note: `-D clippy::unknown-clippy-lints` implied by `-D warnings`
     = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unknown_clippy_lints
     = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

  error: unknown clippy lint: clippy::no_effect_underscore_binding
     --> src/blockchain/mod.rs:220:1
      |
  220 | #[maybe_async]
      | ^^^^^^^^^^^^^^
      |
      = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unknown_clippy_lints
      = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)
  ```

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

Top commit has no ACKs.

Tree-SHA512: 3fe0d2829415c7276d5339e217cefba1255c14d6d73ec0a5eff2b8072d189ffef56088623ef75f84e400d3d05e546f759b8048082b467a3738885796b3338323
2021-12-16 09:30:49 -08:00
Steve Myers
57a1185aef Only run clippy for the stable rust version 2021-12-16 09:12:43 -08:00
Steve Myers
64e88f0e00 Merge bitcoindevkit/bdk#492: bump electrsd dep to 0.13
f7f9bd2409 bump electrsd dep to 0.13 (Riccardo Casatta)

Pull request description:

  ### Description

  bump electrsd dep to 0.13, close #491

ACKs for top commit:
  rajarshimaitra:
    ACK f7f9bd2409
  notmandatory:
    utACK f7f9bd24

Tree-SHA512: 89a8094387896c9296e2f0120d7a2c7419e979049a12fc9a6ae7fe1810b75af43338db235887dd9054a6b52e400491b77f8e006f61451da54aca9635098ab342
2021-12-01 09:34:15 -08:00
Riccardo Casatta
f7f9bd2409 bump electrsd dep to 0.13 2021-12-01 08:45:32 +01:00
Steve Myers
68a3d2b1cc Merge bitcoindevkit/bdk#487: Use "Description" instead of "Descriptive Title" in SoB Issue Template
084ec036a5 Use "Description" instead of "Descriptive Title" (rajarshimaitra)

Pull request description:

  ### Description

  The `Descriptive Title` part is already in the issue title, and we can use `Description` in the issue body.

  ### Notes to the reviewers


  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)

ACKs for top commit:
  thunderbiscuit:
    ACK 084ec03.
  notmandatory:
    ACK 084ec036a5

Tree-SHA512: 2fc365066849033cce056a46bc9e8ab2d931aa45fd2799ebebe3e07bb789c458491ebf2c05e5868bdff63f60dec0657043f3374135dcfbc79a1f89a2562b1883
2021-11-30 16:21:09 -08:00
Steve Myers
aa13186fb0 Merge bitcoindevkit/bdk#478: Fix typos in comments
7f8103dd76 Fix typos in comments (thunderbiscuit)

Pull request description:

  ### Description

  This PR fixes a bunch of small typos in comments. I'm getting acquainted with the codebase and found a few typos just by chance, and ended up going through it with an IDE searching for typos in all files.

  ### Notes to the reviewers

  To be clear, this PR _only addresses typos that are within comments_.

  ### Checklists

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

ACKs for top commit:
  notmandatory:
    ACK 7f8103dd76

Tree-SHA512: eb3f8f21cbd05de06292affd9ef69c21b52022dfdf25c562c8f4d9c9c011f18175dff0c650cb7efcfb2b665f2af80d9a153be3d12327c47796b0d00bfd5d9803
2021-11-30 16:19:53 -08:00
Steve Myers
02980881ac Merge bitcoindevkit/bdk#473: release/0.14.0
fed4a59728 Bump version to 0.14.1-dev (Steve Myers)
c175dd2aae Bump version to 0.14.0 (Steve Myers)
6b1cbcc4b7 Bump version to 0.14.0-rc.1 (Steve Myers)

Pull request description:

  ### Description

  Merge the 0.14.0 bdk release into the master branch.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

Top commit has no ACKs.

Tree-SHA512: 0a897988c47f56e24b7e831cb45ff348b5637ca1a172555e9456539f5b75f263007421d63820b20737bcddb6f4c8077271471ea830e500ca6f88b902502f8186
2021-11-30 16:16:37 -08:00
Steve Myers
69b184a0a4 Merge branch 'master' into release/0.14.0 2021-11-30 15:41:41 -08:00
rajarshimaitra
084ec036a5 Use "Description" instead of "Descriptive Title" 2021-11-30 12:26:36 +05:30
Steve Myers
c1af456e58 Merge bitcoindevkit/bdk#475: Fix typo in check_miniscript method declaration and use
b9fc06195b Fix typo in check_miniscript method declaration and use (thunderbiscuit)

Pull request description:

  ### Description

  This PR renames the `check_minsicript()` method on the `CheckMiniscript` trait and its uses throughout the codebase to `check_miniscript()`.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

ACKs for top commit:
  notmandatory:
    ACK b9fc06195b

Tree-SHA512: cc4406c653cb86f9b15e60c6f87b95c300784d6b2992abc98b3f2db4b02ce252304cc0ab2c638f080b0caf3889e832885eca19e2d6582a3557c8709311b69644
2021-11-29 14:13:26 -08:00
Steve Myers
d20b649eb8 Merge bitcoindevkit/bdk#477: Update issue templates
8534cd3943 Update issue templates (Steve Myers)

Pull request description:

  ### Description

  Add bug reporting issue template and template for proposing a "Summer of Bitcoin" project.

  ### Notes to the reviewers

  The SoB template is basically a copy of what Adi created but lightly formatted to work as GitHub issue templates.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)

Top commit has no ACKs.

Tree-SHA512: 956f7833b7cc1daf4880f74be3886fb17508719af37a247065c463d3f95b1f058bad785590714c609b0e069fe00784bc71cfb0812a79a70c18a2b5bdb22aca6b
2021-11-29 10:27:46 -08:00
Steve Myers
fed4a59728 Bump version to 0.14.1-dev 2021-11-27 22:13:29 -08:00
Steve Myers
c175dd2aae Bump version to 0.14.0 2021-11-27 21:07:12 -08:00
Steve Myers
8534cd3943 Update issue templates 2021-11-24 21:55:46 -08:00
Steve Myers
3a07614fdb Merge bitcoindevkit/bdk#471: moving the function wallet_name_from_descriptor from blockchain/rpc.rs to wallet/mod.rs as it can be useful not only for rpc
2fc8114180 moving the function wallet_name_from_descriptor from blockchain/rpc.rs to wallet/mod.rs as it can be useful not only for rpc (Richard Ulrich)

Pull request description:

  ### Description

  Moving the function wallet_name_from_descriptor from rpc.rs to mod.rs
  Since the local cache for compact filters should be separate per wallet, this function can be useful not only for rpc.
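  The idea behind keeping a per-wallet name around can be sketched as below. This is an illustrative stand-in only: bdk's real `wallet_name_from_descriptor` derives the name from descriptor checksums, not a generic hash.

  ```rust
  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  // Illustrative stand-in: derive a stable, per-wallet name (e.g. for a
  // per-wallet compact-filters cache directory) from the descriptor string.
  // bdk's real function uses descriptor checksums, not this hash.
  fn wallet_name_from_descriptor(descriptor: &str) -> String {
      let mut hasher = DefaultHasher::new();
      descriptor.hash(&mut hasher);
      format!("bdk-{:016x}", hasher.finish())
  }

  fn main() {
      let name = wallet_name_from_descriptor("wpkh(xpub.../0/*)");
      // Deterministic: the same descriptor always yields the same name.
      assert_eq!(name, wallet_name_from_descriptor("wpkh(xpub.../0/*)"));
      assert!(name.starts_with("bdk-"));
  }
  ```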

  ### Notes to the reviewers

  I thought about renaming it, but waited for opinions on that.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits
  * [x] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
  * [x] I ran `cargo fmt` and `cargo clippy` before committing

ACKs for top commit:
  notmandatory:
    re-ACK  2fc8114

Tree-SHA512: d5732e74f7a54f54dde39fff77f94f12c611a419bed9683025ecf7be95cde330209f676dfc9346ebcd29194325589710eafdd1d533e8073d0662cb397577119f
2021-11-24 20:44:58 -08:00
Steve Myers
b2ac4a0dfd Merge bitcoindevkit/bdk#461: Restructure electrum/esplora sync logic
9c5770831d Make stop_gap a parameter to EsploraBlockchainConfig::new (LLFourn)
0f0a01a742 s/vin/vout/ (LLFourn)
1a64fd9c95 Delete src/blockchain/utils.rs (LLFourn)
d3779fac73 Fix comments (LLFourn)
d39401162f Less intermediary data states in sync (LLFourn)
dfb63d389b s/observed_txs/finished_txs/g (LLFourn)
188d9a4a8b Make variable names consistent (LLFourn)
5eadf5ccf9 Add some logging to script_sync (LLFourn)
aaad560a91 Always get up to chunk_size heights to request headers for (LLFourn)
e7c13575c8 Don't request conftime during tx request (LLFourn)
808d7d8463 Update changelog (LLFourn)
732166fcb6 Fix feerate calculation for esplora (LLFourn)
3f5cb6997f Invert dependencies in electrum sync (LLFourn)

Pull request description:

  ## Description

  This PR does dependency inversion on the previous sync logic for electrum and esplora captured in the trait `ElectrumLikeSync`. This means that the sync logic does not reference the blockchain at all. Instead the blockchain asks the sync logic (in `script_sync.rs`) what it needs to continue the sync and tries to retrieve it.

  The initial purpose of doing this is to remove invocations of `maybe_await` in the abstract sync logic in preparation for completely removing `maybe_await` in the future. The other major benefit is that it gives a lot more freedom for the esplora logic to use the rich data from the responses to complete the sync with fewer HTTP requests than it did previously.
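  The inversion described here can be sketched roughly like this (types and names are illustrative, not bdk's actual ones): the sync state machine owns no networking; the blockchain implementation asks it what to fetch next and feeds the results back.

  ```rust
  // Illustrative sketch, not bdk's real types: the sync logic exposes
  // requests instead of calling into the blockchain directly.
  enum Request {
      // "Fetch the history of these scripts."
      Scripts(Vec<String>),
      // Sync is finished.
      Done,
  }

  struct ScriptSync {
      pending: Vec<String>,
  }

  impl ScriptSync {
      fn next_request(&mut self) -> Request {
          match self.pending.pop() {
              Some(script) => Request::Scripts(vec![script]),
              None => Request::Done,
          }
      }
  }

  fn main() {
      let mut sync = ScriptSync { pending: vec!["spk_0".into(), "spk_1".into()] };
      let mut fetched = 0;
      // The blockchain impl drives the loop, fulfilling each request
      // (here a no-op stand-in for the network round trip).
      while let Request::Scripts(scripts) = sync.next_request() {
          fetched += scripts.len();
      }
      assert_eq!(fetched, 2);
  }
  ```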

  ## List of changes

  - sync logic moved to `script_sync.rs` and `ElectrumLikeSync` is gone.
  - esplora makes one HTTP request per address it syncs. This means it makes half the number of HTTP requests for a fully synced wallet and N*M fewer requests for a wallet which has N new transactions with M unique input transactions.
  - electrum and esplora save less raw transactions in the database. Electrum still requests input transactions for each of its transactions to calculate the fee but it does not save them to the database anymore.
  - The ureq and reqwest blockchain configuration is now unified into the same struct. This is the only API change. `read_timeout` and `write_timeout` have been removed in favor of a single `timeout` option which is set in both ureq and reqwest.
  - ureq now does concurrent (parallel) requests using threads.
  - A previously unnoticed bug has been fixed whereby sending a lot of double-spending transactions to the same address could trick a bdk Esplora wallet into thinking it had a lot of unconfirmed coins. This is because esplora doesn't delete double-spent transactions from its indexes immediately (not sure if this is a bug or a feature). A blockchain test is added for this.
  - BONUS: The second commit in this PR fixes the feerate calculation for esplora and adds a test (the previous algorithm didn't work at all). I could have made a separate PR but since I was touching this file a lot I decided to fix it here.

  ## Notes to the reviewers

  - The most important thing to review is that the logic in `script_sync.rs` is sound.
  - Look at the two commits separately.
  - I think CI is failing because of MSRV problems again!
  - It would be cool to measure how much sync time is improved for your existing wallets/projects. For `gun` the speed improvements are modest, but it is at least hammering the esplora server much less.
  - I noticed the performance of reqwest in blocking mode is much worse in this patch than previously. This is because somehow reqwest is not re-using the connection for each request in this new code. I have no idea why. The plan is to get rid of the blocking reqwest implementation in a follow-up PR.

  ### Checklists

  #### All Submissions:

  * [x] I've signed all my commits

ACKs for top commit:
  rajarshimaitra:
    Retested ACK a630685a0a

Tree-SHA512: de74981e9d1f80758a9f20a3314ed7381c6b7c635f7ede80b177651fe2f9e9468064fae26bf80d4254098accfacfe50326ae0968e915186e13313f05bf77990b
2021-11-24 19:51:09 -08:00
thunderbiscuit
7f8103dd76 Fix typos in comments 2021-11-23 14:09:54 -05:00
thunderbiscuit
b9fc06195b Fix typo in check_miniscript method declaration and use 2021-11-23 13:25:24 -05:00
LLFourn
a630685a0a Merge branch 'master' into sync_pipeline 2021-11-23 12:53:40 +11:00
Richard Ulrich
2fc8114180 moving the function wallet_name_from_descriptor from blockchain/rpc.rs to wallet/mod.rs as it can be useful not only for rpc 2021-11-22 08:15:47 +01:00
Steve Myers
6b1cbcc4b7 Bump version to 0.14.0-rc.1 2021-11-17 11:54:09 -08:00
Steve Myers
afa1ab4ff8 Fix blockchain_tests::test_send_to_bech32m_addr
Now works with latest released versions of rust-bitcoincore-rpc and
bitcoind. Once these crates are updated to support creating descriptor
wallets and add importdescriptors and bech32m support this test will
need to be updated.
2021-11-11 13:59:11 -08:00
Sandipan Dey
632422a3ab Added wallet blockchain test to send to Bech32m address 2021-11-11 08:20:40 -08:00
Sandipan Dey
54f61d17f2 Added a wallet unit test to send to a Bech32m address 2021-11-11 08:20:38 -08:00
Alekos Filini
5830226216 [database] Wrap BlockTime in another struct to allow adding more
fields in the future
2021-11-10 12:30:42 +01:00
Alekos Filini
2c77329333 Rename ConfirmationTime to BlockTime 2021-11-10 12:30:38 +01:00
Alekos Filini
3e5bb077ac Update CHANGELOG.md 2021-11-10 12:30:33 +01:00
Alekos Filini
7c06f52a07 [wallet] Store the block height and timestamp after syncing
Closes #455
2021-11-10 12:30:02 +01:00
Alekos Filini
12e51b3c06 [wallet] Expose an immutable reference to a wallet's database 2021-11-10 12:29:58 +01:00
Alekos Filini
2892edf94b [db] Add the last_sync_time database entry
This will be used to store the height and timestamp after every sync.
2021-11-10 12:29:47 +01:00
LLFourn
9c5770831d Make stop_gap a parameter to EsploraBlockchainConfig::new 2021-11-10 09:07:36 +11:00
LLFourn
0f0a01a742 s/vin/vout/ 2021-11-10 09:07:36 +11:00
LLFourn
1a64fd9c95 Delete src/blockchain/utils.rs 2021-11-10 09:07:36 +11:00
LLFourn
d3779fac73 Fix comments 2021-11-10 09:07:36 +11:00
LLFourn
d39401162f Less intermediary data states in sync
Use BTrees to store ordered sets rather than HashSets -> VecDeque
2021-11-10 09:07:36 +11:00
LLFourn
dfb63d389b s/observed_txs/finished_txs/g 2021-11-10 09:07:36 +11:00
LLFourn
188d9a4a8b Make variable names consistent 2021-11-10 09:07:36 +11:00
LLFourn
5eadf5ccf9 Add some logging to script_sync 2021-11-10 09:07:36 +11:00
LLFourn
aaad560a91 Always get up to chunk_size heights to request headers for 2021-11-10 09:07:36 +11:00
LLFourn
e7c13575c8 Don't request conftime during tx request 2021-11-10 09:07:36 +11:00
LLFourn
808d7d8463 Update changelog 2021-11-10 09:07:34 +11:00
LLFourn
732166fcb6 Fix feerate calculation for esplora 2021-11-10 09:06:49 +11:00
LLFourn
3f5cb6997f Invert dependencies in electrum sync
Blockchain calls sync logic rather than the other way around.
Sync logic is captured in script_sync.rs.
2021-11-10 09:06:49 +11:00
Riccardo Casatta
aa075f0b2f fix after merge changing borrow of tx in broadcast 2021-11-09 15:37:18 +01:00
Riccardo Casatta
8010d692e9 Update CHANGELOG 2021-11-09 15:37:13 +01:00
Riccardo Casatta
b2d7412d6d add test for add_data 2021-11-09 15:36:42 +01:00
Riccardo Casatta
fd51029197 add method add_data as a shortcut to create an OP_RETURN output, fix the dust check to consider only spendable output 2021-11-09 15:36:39 +01:00
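  The shape of the script an `add_data` shortcut produces can be sketched as follows. This builds the raw bytes `OP_RETURN <len> <data>` for the single-byte-push case only; bdk's real method builds a proper `Script`, so this is illustrative.

  ```rust
  // Sketch of an OP_RETURN output script as raw bytes: 0x6a (OP_RETURN)
  // followed by a single-byte push of the data. Illustrative only.
  fn op_return_script(data: &[u8]) -> Vec<u8> {
      assert!(data.len() <= 75, "sketch handles single-byte pushes only");
      let mut script = vec![0x6a, data.len() as u8]; // OP_RETURN, push length
      script.extend_from_slice(data);
      script
  }

  fn main() {
      assert_eq!(op_return_script(b"hi"), vec![0x6a, 0x02, 0x68, 0x69]);
  }
  ```

  Note the dust check mentioned in the commit: an OP_RETURN output is unspendable by design, so it must be excluded from dust calculations rather than rejected as a too-small output.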
Alekos Filini
711510006b Merge commit 'refs/pull/464/head' of github.com:bitcoindevkit/bdk 2021-11-08 10:41:12 +01:00
Alekos Filini
d21b6e47ab Merge commit 'refs/pull/458/head' of github.com:bitcoindevkit/bdk 2021-11-08 10:39:50 +01:00
rajarshimaitra
5922c216a1 Update WordsCount -> WordCount 2021-11-06 20:14:03 +05:30
rajarshimaitra
9e29e2d2b1 Update changelog 2021-11-06 20:13:45 +05:30
Alekos Filini
16e832533c Merge commit 'refs/pull/462/head' of github.com:bitcoindevkit/bdk 2021-11-04 15:26:15 +00:00
Steve Myers
7f91bcdf1a Merge commit 'refs/pull/453/head' of github.com:bitcoindevkit/bdk 2021-11-03 13:51:59 -07:00
rajarshimaitra
35695d8795 remove redundant backtrace dependency 2021-11-03 11:14:14 +05:30
rajarshimaitra
756858e882 update module doc 2021-11-03 11:14:13 +05:30
rajarshimaitra
d2ce2714f2 Replace tiny-bip39 with rust-bip39
Use rust-bip39 for mnemonic derivation everywhere.

This requires our own WordCount enum as rust-bip39 doesn't have an
explicit mnemonic type definition.
2021-11-03 11:14:05 +05:30
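  A WordCount enum like the one this commit describes might look as follows. This is a hypothetical sketch, not bdk's actual definition; the entropy sizes per word count come from BIP-39.

  ```rust
  // Hypothetical sketch of a WordCount enum: rust-bip39 derives mnemonic
  // length from entropy size, so the mapping below is the interesting part.
  #[derive(Clone, Copy, Debug, PartialEq)]
  enum WordCount {
      Words12,
      Words15,
      Words18,
      Words21,
      Words24,
  }

  impl WordCount {
      fn entropy_bits(self) -> usize {
          match self {
              WordCount::Words12 => 128,
              WordCount::Words15 => 160,
              WordCount::Words18 => 192,
              WordCount::Words21 => 224,
              WordCount::Words24 => 256,
          }
      }
  }

  fn main() {
      assert_eq!(WordCount::Words12.entropy_bits(), 128);
      assert_eq!(WordCount::Words24.entropy_bits(), 256);
  }
  ```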
rajarshimaitra
3b2b559910 Update codecov@v2 2021-11-02 15:21:37 +05:30
rajarshimaitra
3c8416bf31 update dependency
dependency updated from tiny-bip39 to rust-bip39
2021-10-31 20:29:11 +05:30
Steve Myers
f6f736609f Bump version to 0.13.1-dev 2021-10-28 13:38:39 -07:00
Steve Myers
5cb0726780 Bump version to 0.13.0 2021-10-28 10:44:56 -07:00
Steve Myers
8781599740 Switch back to rust-bitcoin/rust-bitcoincore-rpc 2021-10-27 13:53:58 -07:00
Steve Myers
ee8b992f8b Update dev-dependencies electrsd to 0.12 2021-10-27 13:42:01 -07:00
Mariusz Klochowicz
3d8efbf8bf Borrow instead of moving transaction when broadcasting
There's no need to take ownership of the transaction for a broadcast.
2021-10-27 21:51:55 +10:30
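  The API change can be illustrated with a stand-in type (not `bitcoin::Transaction`): broadcasting borrows the transaction, so the caller keeps ownership afterwards.

  ```rust
  // Stand-in type for illustration; bdk broadcasts bitcoin::Transaction.
  struct Transaction {
      raw: Vec<u8>,
  }

  // Borrowing instead of taking ownership: the broadcast only needs to
  // read (serialize and send) the transaction, never consume it.
  fn broadcast(tx: &Transaction) -> usize {
      // Stand-in for serialize-and-send: just report the payload size.
      tx.raw.len()
  }

  fn main() {
      let tx = Transaction { raw: vec![0u8; 100] };
      assert_eq!(broadcast(&tx), 100);
      // `tx` is still usable here because broadcast only borrowed it.
      assert_eq!(tx.raw.len(), 100);
  }
  ```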
Alekos Filini
a2e26f1b57 Pin version of ureq to maintain our MSRV
(cherry picked from commit d75d221540)
2021-10-26 16:15:13 -07:00
Alekos Filini
5f5744e897 Pin version of backtrace to maintain our MSRV
(cherry picked from commit 548e43d928)
2021-10-26 16:15:11 -07:00
Alekos Filini
e106136227 [ci] Update the stable version to 1.56
(cherry picked from commit a348dbdcfe)
2021-10-26 16:15:09 -07:00
Alekos Filini
d75d221540 Pin version of ureq to maintain our MSRV 2021-10-22 15:57:40 +02:00
Alekos Filini
548e43d928 Pin version of backtrace to maintain our MSRV 2021-10-22 15:57:36 +02:00
Alekos Filini
a348dbdcfe [ci] Update the stable version to 1.56 2021-10-22 15:57:27 +02:00
Steve Myers
b638039655 Fix CHANGELOG for Unreleased, v0.13.0 2021-10-20 20:14:10 -07:00
Steve Myers
7e085a86dd Bump version to 0.13.0-rc.1 2021-10-20 20:09:31 -07:00
Sudarsan Balaji
59f795f176 Make MemoryDatabase Send + Sync 2021-10-15 21:36:36 +05:30
Steve Myers
2da10382e7 Pin ahash version to 0.7.4 for sqlite feature
The `ahash` crate is used by the `sqlite` feature but the latest update (0.7.5)
breaks compatibility with our current MSRV 1.46.0. See also:
https://github.com/tkaitchuck/aHash/issues/99
2021-10-14 08:24:32 -07:00
Steve Myers
6d18502733 Merge commit 'refs/pull/443/head' of github.com:bitcoindevkit/bdk 2021-10-07 22:52:55 -07:00
Steve Myers
81b263f235 Merge commit 'refs/pull/445/head' of github.com:bitcoindevkit/bdk 2021-10-07 22:48:47 -07:00
rajarshimaitra
2f38d3e526 Update ChangeLog 2021-10-07 20:49:13 +05:30
rajarshimaitra
2ee125655b Expose get_tx() method from DB to Wallet 2021-10-07 20:49:07 +05:30
Steve Myers
22c39b7b78 Fix cargo doc warning and missing sqlite feature 2021-09-30 16:11:42 -07:00
Steve Myers
18f1107c41 Update DEVELOPMENT_CYCLE release instructions 2021-09-30 13:39:40 -07:00
Steve Myers
763bcc22ab Bump version to 0.12.1-dev 2021-09-30 13:39:39 -07:00
Steve Myers
9e4ca516a8 Bump version to 0.12.0 2021-09-30 11:42:21 -07:00
Steve Myers
b60465f31e Bump bdk-macros version to 0.6.0 2021-09-30 11:24:01 -07:00
Steve Myers
1469a3487a Downgrade tiny-bip39 to version < 0.8
This is required until BDK MSRV is changed to 1.51 or we replace
tiny-bip39 dependency.
2021-09-27 12:47:58 -07:00
Steve Myers
8c21bcf40a Downgrade tiny-bip39 to version < 0.8
This is required until BDK MSRV is changed to 1.51 or we replace
tiny-bip39 dependency.
2021-09-26 20:01:58 -07:00
Steve Myers
c9ed8bdf6c Bump version to 0.12.0-rc.1 2021-09-24 10:25:12 -07:00
Steve Myers
919522a456 Fix clippy warning 2021-09-23 18:57:55 -07:00
Steve Myers
678607e673 Move new CHANGELOG entries to Unreleased 2021-09-23 18:28:27 -07:00
John Cantrell
c06d9f1d33 implement sqlite database 2021-09-23 20:54:08 -04:00
Steve Myers
5a6a2cefdd Merge commit 'refs/pull/442/head' of github.com:bitcoindevkit/bdk 2021-09-23 15:28:57 -07:00
Alekos Filini
3fe2380d6c [esplora] Support proxies in EsploraBlockchain 2021-09-23 21:38:19 +02:00
Lucas Soriano del Pino
eea8b135a4 Activate miniscript/use-serde feature 2021-09-23 19:49:06 +10:00
Steve Myers
a685b22aa6 [ci] Change check-wasm job to use ubuntu-20.04 runner 2021-09-22 10:08:10 -07:00
LLFourn
c601ae3271 [fix-build] Fix version of zeroize_derive to 1.1.0 2021-09-22 11:01:37 +10:00
Riccardo Casatta
c23692824d [rpc] rescan in chunks of 10_000 blocks 2021-09-17 15:19:52 +02:00
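  Splitting a rescan into bounded chunks, as this commit describes, can be sketched like this (illustrative code, not bdk's RPC implementation): the block range is cut into half-open chunks of at most 10_000 blocks.

  ```rust
  // Sketch: split [from, to) into half-open chunks of at most `chunk` blocks.
  fn rescan_chunks(from: u64, to: u64, chunk: u64) -> Vec<(u64, u64)> {
      let mut ranges = Vec::new();
      let mut start = from;
      while start < to {
          let end = (start + chunk).min(to);
          ranges.push((start, end));
          start = end;
      }
      ranges
  }

  fn main() {
      let r = rescan_chunks(0, 25_000, 10_000);
      assert_eq!(r, vec![(0, 10_000), (10_000, 20_000), (20_000, 25_000)]);
  }
  ```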
Steve Myers
46f7b440f5 Merge commit 'refs/pull/438/head' of github.com:bitcoindevkit/bdk 2021-09-16 11:03:52 -07:00
Steve Myers
562fde7953 Merge commit 'refs/pull/434/head' of github.com:bitcoindevkit/bdk 2021-09-16 08:45:53 -07:00
rajarshimaitra
9e508748a3 Update CI blockchain tests
(cherry picked from commit 10b53a56d7)
2021-09-15 13:44:11 -07:00
rajarshimaitra
84b8579df5 Test refactor
- Fix esplora module level feature flag
- Move esplora blockchain tests to module, to cover for both variants

(cherry picked from commit 8d1d92e71e)
2021-09-15 13:44:09 -07:00
rajarshimaitra
7cb0116c44 Fix reqwest blockchain test
- add back await_or_block! to bdk-macros
- use await_or_block! in reqwest tests

(cherry picked from commit a41a0030dc)
2021-09-15 13:44:06 -07:00
rajarshimaitra
6e12468b12 Update Cargo.toml
- Changed to local bdk-macro
- Added back tokio
- Update esplora-reqwest and test-esplora feature guards

(cherry picked from commit 2459740f72)
2021-09-15 13:44:04 -07:00
Alekos Filini
326b64de3a [descriptor] Add a test for extract_policy() on pk_h() operands 2021-09-15 10:38:36 +02:00
Alekos Filini
5edf663f3d [descriptor] Add an alias for and_or()
The descriptor syntax encodes it with `andor()`, without the underscore
2021-09-15 10:37:35 +02:00
Alekos Filini
e3dd755396 [descriptor] Fix pk_h() in the descriptor!() macro
Instead of accepting just a `DescriptorPublicKey` it now accepts
anything that implements `IntoDescriptorKey` like `pk_k()` does.
2021-09-15 10:37:33 +02:00
Alekos Filini
b500cfe4e5 [descriptor] Fix extract_policy() for descriptors with pk_h() 2021-09-15 10:37:30 +02:00
rajarshimaitra
10b53a56d7 Update CI blockchain tests 2021-09-14 11:29:29 +05:30
rajarshimaitra
8d1d92e71e Test refactor
- Fix esplora module level feature flag
- Move esplora blockchain tests to module, to cover for both variants
2021-09-14 11:29:28 +05:30
rajarshimaitra
a41a0030dc Fix reqwest blockchain test
- add back await_or_block! to bdk-macros
- use await_or_block! in reqwest tests
2021-09-14 11:29:28 +05:30
rajarshimaitra
2459740f72 Update Cargo.toml
- Changed to local bdk-macro
- Added back tokio
- Update esplora-reqwest and test-esplora feature guards
2021-09-14 11:29:28 +05:30
Steve Myers
5694b98304 Bump version to 0.11.1-dev 2021-09-04 11:43:24 -07:00
Steve Myers
aa786fbb21 Bump version to 0.11.0 2021-09-04 10:46:03 -07:00
Steve Myers
8c570ae7eb Update version in src/lib.rs 2021-09-04 10:45:18 -07:00
Steve Myers
56a7bc9874 Update changelog 2021-09-04 10:44:44 -07:00
Steve Myers
dd4bd96f79 Merge commit 'refs/pull/428/head' of github.com:bitcoindevkit/bdk 2021-08-31 08:33:07 -07:00
rajarshimaitra
2caa590438 Use ureq with default features 2021-08-31 14:37:50 +05:30
Steve Myers
2a53cfc23f Merge commit 'refs/pull/426/head' of github.com:bitcoindevkit/bdk 2021-08-30 12:41:25 -07:00
Steve Myers
cf1815a1c0 Bump version to 0.11.0-rc.1 2021-08-30 10:27:24 -07:00
Lucas Soriano del Pino
acf157a99a Fix use statements in populate_test_db macro
- Use re-exported `bitcoin` so that users of the macro don't need to
depend on `bitcoin` directly.
- Add missing `use std::str::FromStr`.
2021-08-30 14:08:17 +10:00
Lucas Soriano del Pino
fb813427eb Use re-exported bitcoin and miniscript in testutils macro
Otherwise users of the macro must depend on `bitcoin` and `miniscript`
directly, which defeats the point of re-exporting these crates in the
first place.
2021-08-30 13:48:34 +10:00
Steve Myers
721748e98f Fix CHANGELOG after merging release/0.10.0 branch 2021-08-25 22:20:20 +02:00
Steve Myers
976e641ba6 Merge commit 'refs/pull/411/head' of github.com:bitcoindevkit/bdk 2021-08-25 21:55:43 +02:00
Thomas Eizinger
7117557dea Add deprecation policy to CONTRIBUTING.md 2021-08-25 17:43:06 +02:00
Richard Ulrich
fa013aeb83 moving get_funded_wallet out of the test section to make it available for bdk-reserves 2021-08-25 11:18:50 +02:00
Roman Zeyde
470d02c81c Fix a small typo in log_progress() description 2021-08-24 23:56:57 +03:00
Steve Myers
38d1d0b0e2 Merge branch 'release/0.10.0' 2021-08-19 19:55:24 +02:00
Steve Myers
582d2f3814 Remove unneeded cache paths for test-blockchains CI job 2021-08-19 18:17:20 +02:00
Steve Myers
5e0011e1a8 Change dependencies bitcoincore-rpc to core-rpc, update bitcoin to ^0.27 and miniscript to ^6.0 2021-08-19 18:16:40 +02:00
Steve Myers
39d2bd0d21 Update dev-dependencies electrsd to 0.10 2021-08-19 18:13:50 +02:00
Steve Myers
0e10952b80 Merge commit 'refs/pull/409/head' of github.com:bitcoindevkit/bdk 2021-08-19 14:08:05 +02:00
Steve Myers
19d74955e2 Update Database BatchOperations flush() documentation 2021-08-19 13:56:38 +02:00
Steve Myers
73a7faf144 Remove unneeded cache paths for test-blockchains CI job 2021-08-18 09:10:47 +02:00
Steve Myers
ea56a87b4b Change dependencies bitcoincore-rpc to core-rpc, update bitcoin to ^0.27 and miniscript to ^6.0 2021-08-17 22:52:17 +02:00
Steve Myers
67f5f45e07 Update dev-dependencies electrsd to 0.10 2021-08-17 22:51:40 +02:00
Alekos Filini
b8680b299d Bump version to 0.10.1-dev 2021-08-09 17:00:05 +02:00
Alekos Filini
a5d3a4d31a Bump version to 0.10.0 2021-08-09 14:58:32 +02:00
Alekos Filini
d03d3c0dbd Update bdk-macros 2021-08-09 14:57:58 +02:00
Alekos Filini
9aba3196ff Bump version of bdk-macros to v0.5.0 2021-08-09 14:57:06 +02:00
Alekos Filini
c8593ecf70 Update version in src/lib.rs 2021-08-09 14:56:22 +02:00
Alekos Filini
cbec0b0bcf Update changelog 2021-08-09 14:55:17 +02:00
Tobin Harding
e80be49d1e Disable default features for rocksdb
In an effort to reduce the build times of `rocksdb` we can set
`default-features` to false.

Please note, the build speed-up is minimal

With default features:
```
cargo check --features compact_filters  890.91s user 47.62s system 352% cpu 4:26.55 total
```

Without default features:
```
cargo check --features compact_filters  827.07s user 47.63s system 352% cpu 4:08.39 total
```

Enable `snappy` since it seems like this is the current default compression
algorithm, therefore this patch (hopefully) makes no changes to the usage of the
`rocksdb` library in `bdk`. From the `rocksdb` code:

```
    /// Sets the compression algorithm that will be used for compressing blocks.
    ///
    /// Default: `DBCompressionType::Snappy` (`DBCompressionType::None` if
    /// snappy feature is not enabled).
    ///
    /// # Examples
    ///
    /// ```
    /// use rocksdb::{Options, DBCompressionType};
    ///
    /// let mut opts = Options::default();
    /// opts.set_compression_type(DBCompressionType::Snappy);
    /// ```
    pub fn set_compression_type(&mut self, t: DBCompressionType) {
        ....
```
2021-08-04 10:22:08 +10:00
Riccardo Casatta
fe30716fa2 update CHANGELOG citing new flush method 2021-08-03 12:34:26 +02:00
Riccardo Casatta
e52550cfec Add flush method to Database trait 2021-08-03 12:33:31 +02:00
Riccardo Casatta
f57c0ca98e in tests enable daemons logging if log level is Debug 2021-08-03 12:15:16 +02:00
Alekos Filini
c54e1e9652 Bump version to 0.10.0-rc.1 2021-07-30 17:47:45 +02:00
Tobin Harding
5cdc5fb58a Move estimate -> fee rate logic to esplora module
Currently we have duplicate code for converting the fee estimate we get
back from esplora into a fee rate. This logic can be moved to a separate
function and live in the `esplora` module.
2021-07-29 10:12:19 +10:00
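  A shared helper of the kind this commit describes might look roughly like this. The map shape (`confirmation target -> sat/vB`) and the fallback rule are assumptions for illustration, not bdk's exact API.

  ```rust
  use std::collections::HashMap;

  // Illustrative sketch: pick a fee rate (sat/vB) for a confirmation
  // target from an esplora-style `{ target -> sat/vB }` estimate map,
  // choosing the largest target that does not exceed the requested one.
  fn fee_rate_for_target(estimates: &HashMap<usize, f64>, target: usize) -> Option<f64> {
      estimates
          .iter()
          .filter(|(k, _)| **k <= target)
          .max_by_key(|(k, _)| **k)
          .map(|(_, v)| *v)
  }

  fn main() {
      let mut est = HashMap::new();
      est.insert(1, 10.0);
      est.insert(6, 5.0);
      est.insert(144, 1.0);
      assert_eq!(fee_rate_for_target(&est, 6), Some(5.0));
      assert_eq!(fee_rate_for_target(&est, 3), Some(10.0));
      assert_eq!(fee_rate_for_target(&est, 1000), Some(1.0));
  }
  ```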
Tobin Harding
27cd9bbcd6 Improve feature combinations for ureq/reqwest
Our features are a bit convoluted, most annoyingly we cannot build with
`--all-features`. However we can make life for users a little easier.

Explicitly we want users to be able to:

- Use async-interface/WASM without using esplora (to implement their own blockchain)
- Use esplora in an ergonomic manner

Currently using esplora requires either reqwest or ureq. Instead of
making the user add all the features manually we can add features that
add the required feature sets, this makes it easier for users to
understand what is required and also makes usage easier.

With this patch applied we can do

- `cargo check --no-default-features --features=use-esplora-reqwest`
- `cargo check --no-default-features --features=use-esplora-ureq`
- `cargo check --features=use-esplora-ureq`
- `cargo check --no-default-features --features=async-trait`
2021-07-29 10:12:17 +10:00
Tobin Harding
f37e735b43 Add a ureq version of esplora module
The `Blockchain` implementation for connecting to an Esplora instance is
currently based on `reqwest`. Some users may not wish to use reqwest.

`ureq` is a simple HTTP client (no async) that is useful when `reqwest`
is not suitable.

- Move `esplora.rs` -> `esplora/reqwest.rs`
- Add an implementation based on the `reqwest` esplora code but using `ureq`
- Add feature flags and conditional includes to re-export everything to
  the `esplora` module so we don't affect the rest of the code base.
- Remove the forced dependency on `tokio`.
- Make esplora independent of async-interface
- Depend on local version of macros crate
2021-07-29 09:16:44 +10:00
codeShark149
adceafa40c Fix float subtraction error 2021-07-28 11:52:51 +02:00
Alekos Filini
2b0c4f0817 Merge commit 'refs/pull/407/head' of github.com:bitcoindevkit/bdk 2021-07-28 11:34:41 +02:00
Alekos Filini
e9428433a0 Merge commit 'refs/pull/408/head' of github.com:bitcoindevkit/bdk 2021-07-28 11:32:44 +02:00
Alekos Filini
63592f169f Merge commit 'refs/pull/392/head' of github.com:bitcoindevkit/bdk 2021-07-27 13:23:25 +02:00
Alekos Filini
27600f4a11 Merge commit 'refs/pull/398/head' of github.com:bitcoindevkit/bdk 2021-07-27 13:07:56 +02:00
Riccardo Casatta
77eae76459 add link to upstream PR 2021-07-27 12:17:12 +02:00
Riccardo Casatta
ad69702aa3 Update electrsd dep 2021-07-26 17:09:00 +02:00
Riccardo Casatta
fd254536d3 update CHANGELOG.md 2021-07-26 16:36:35 +02:00
Riccardo Casatta
c4d5dd14fa Use RPC backend in any 2021-07-26 16:36:32 +02:00
Riccardo Casatta
13bed2667a Create Auth struct proxy of the same upstream struct but serializable 2021-07-26 15:55:40 +02:00
Tobin Harding
2db24fb8c5 Add unit test required not enough
Add a unit test that passes a required utxo to the coin selection
algorithm that is less than the required spend. This tests both that the
utxo is included and that the rest of the coin selection algorithm still
executes (i.e., that we do not short circuit incorrectly).
2021-07-23 10:27:16 +10:00
Tobin Harding
d2d37fc06d Return early if required UTXOs already big enough
If the required UTXO set is already bigger (including fees) than the
amount required for the transaction we can return early, no need to go
through the BNB algorithm or random selection.
2021-07-23 09:48:22 +10:00
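The early return described in this commit can be sketched as follows; the function name and plain-sat types are simplified stand-ins, not BDK's actual coin-selection API:

```rust
// Illustrative sketch, under assumed names: if the mandatory UTXOs already
// cover target + fee, skip branch-and-bound / random selection entirely.
// Returns Some(selection) on the early-return path, None when the full
// selection algorithm would still have to run.
fn try_required_only(required: &[u64], target: u64, fee: u64) -> Option<Vec<u64>> {
    let required_total: u64 = required.iter().sum();
    if required_total >= target + fee {
        // Required set alone is big enough: return it as-is.
        return Some(required.to_vec());
    }
    None
}
```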
Tobin Harding
2986fce7c6 Fix vbytes and fee rate code
It was just pointed out that we are calculating the virtual bytes
incorrectly by forgetting to take the ceiling after division by 4 [1]

Add helper functions to encapsulate all weight unit -> virtual byte
calculations, including fee to and from fee rate. This makes the code
easier to read and write, and makes bugs like this one easier to spot.

As an added bonus we can also stop using f32 values for fee amount,
which is by definition an amount in sats so should be a u64. This
removes a bunch of casts and the need for epsilon comparisons and just
deep down feels nice :)

[1] https://github.com/bitcoindevkit/bdk/pull/386#discussion_r670882678
2021-07-23 09:43:12 +10:00
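The ceiling conversion this commit describes can be sketched like this; the helper names are illustrative, not BDK's exact API:

```rust
// Weight units -> virtual bytes, rounding *up* after division by 4.
// Plain integer division would under-count (561 wu / 4 = 140, not 141).
fn vbytes(weight_units: usize) -> usize {
    (weight_units + 3) / 4
}

// Fee is by definition an amount in sats, so it comes out as a u64
// rather than an f32, as the commit message argues.
fn fee_from_rate(weight_units: usize, sat_per_vb: f32) -> u64 {
    (vbytes(weight_units) as f32 * sat_per_vb).ceil() as u64
}
```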
Roman Zeyde
1dc648508c Fix a small typo in comments 2021-07-21 21:32:35 +03:00
Steve Myers
474620e6a5 [keys] limit version of zeroize to support rust 1.47+ 2021-07-19 14:35:16 -07:00
Steve Myers
a5919f4ab0 Remove stop_gap param from Blockchain trait setup and sync functions 2021-07-16 08:52:41 -07:00
Steve Myers
7e986fd904 Add stop_gap param to electrum and esplora blockchain configs 2021-07-16 08:50:36 -07:00
Alekos Filini
77379e9262 Merge commit 'refs/pull/371/head' of github.com:bitcoindevkit/bdk 2021-07-16 11:24:19 +02:00
Alekos Filini
ea699a6ec1 Merge commit 'refs/pull/393/head' of github.com:bitcoindevkit/bdk 2021-07-16 09:05:51 +02:00
Lloyd Fournier
81c1ccb185 Apply typo fixes from @tcharding
Co-authored-by: Tobin C. Harding <me@tobin.cc>
2021-07-14 16:43:02 +10:00
Steve Myers
4f4802b0f3 Merge commit 'refs/pull/388/head' of github.com:bitcoindevkit/bdk 2021-07-13 16:10:30 -07:00
Steve Myers
bab9d99a00 Merge commit 'refs/pull/375/head' of github.com:bitcoindevkit/bdk 2021-07-13 15:12:53 -07:00
Alekos Filini
22f4db0de1 Merge commit 'refs/pull/389/head' of github.com:bitcoindevkit/bdk 2021-07-12 14:26:05 +02:00
Riccardo Casatta
a6ce75fa2d [docs] clarify when the fee could be unknown 2021-07-12 10:06:08 +02:00
LLFourn
7597645ed6 Replace set_single_recipient with drain_to
What set_single_recipient does turns out to be useful with multiple
recipients.
Effectively, set_single_recipient was simply creating a change
output that was arbitrarily required to be the only output.
But what if you want to send excess funds to one address but still have
additional recipients who receive a fixed value?
Generalizing this to `drain_to` simplifies the logic and removes several
error cases while also allowing new use cases.

"maintain_single_recipient" is also replaced with "allow_shrinking"
which has more general semantics.
2021-07-12 16:38:42 +10:00
LLFourn
618e0d3700 Replace set_single_recipient with drain_to
What set_single_recipient does turns out to be useful with multiple
recipients.
Effectively, set_single_recipient was simply creating a change
output that was arbitrarily required to be the only output.
But what if you want to send excess funds to one address but still have
additional recipients who receive a fixed value?
Generalizing this to `drain_to` simplifies the logic and removes several
error cases while also allowing new use cases.

"maintain_single_recipient" is also replaced with "allow_shrinking"
which has more general semantics.
2021-07-12 16:21:53 +10:00
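The `drain_to` semantics described above can be sketched with simple numbers: fixed-value recipients are paid first, and whatever remains after the fee goes to the drain output. This is a conceptual sketch, not BDK's `TxBuilder` API:

```rust
// Returns the output values: the fixed recipients followed by the drain
// output, or None if the inputs cannot cover the fixed outputs plus fee.
fn split_outputs(total_in: u64, fixed: &[u64], fee: u64) -> Option<Vec<u64>> {
    let fixed_sum: u64 = fixed.iter().sum();
    // The drain output absorbs the excess; checked_sub guards underflow.
    let drain = total_in.checked_sub(fixed_sum + fee)?;
    let mut outputs = fixed.to_vec();
    outputs.push(drain);
    Some(outputs)
}
```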
Alekos Filini
44d0e8d07c [rpc] Show in the docs that the RPC APIs are feature-gated 2021-07-09 09:11:02 +02:00
Alekos Filini
7a9b691f68 Bump version to 0.9.1-dev 2021-07-08 15:20:28 +02:00
Alekos Filini
4e813e8869 Bump version to 0.9.0 2021-07-08 13:37:19 +02:00
Alekos Filini
53409ef3ae Update version in src/lib.rs 2021-07-08 13:37:05 +02:00
Alekos Filini
f8a6e1c3f4 Update CHANGELOG 2021-07-08 13:36:20 +02:00
Tobin Harding
c1077b95cf Add Vbytes trait
We convert weight units into vbytes in various places. Lets add a trait
to do it, this makes the code slightly cleaner.
2021-07-08 11:33:39 +10:00
Alekos Filini
fa5103b0eb Merge commit 'refs/pull/383/head' of github.com:bitcoindevkit/bdk into release/0.9.0 2021-07-06 09:58:40 +02:00
Alekos Filini
e5d4994329 Merge commit 'refs/pull/383/head' of github.com:bitcoindevkit/bdk 2021-07-06 09:58:22 +02:00
Alekos Filini
d1658a2eda Merge commit 'refs/pull/385/head' of github.com:bitcoindevkit/bdk into release/0.9.0 2021-07-06 09:57:22 +02:00
Evgenii P
879e5cf319 rustfmt 2021-07-03 14:08:38 +07:00
Evgenii P
928f9c6112 dsl: add regression test for and_or() descriptor 2021-07-03 13:52:05 +07:00
Evgenii P
814ab4c855 dsl: fix descriptor macro when and_or() used 2021-07-03 13:51:43 +07:00
Alekos Filini
58cf46050f Build the rpc feature on docs.rs 2021-07-02 10:09:58 +02:00
Alekos Filini
b6beef77e7 [rpc] Mark the RPC backend as experimental 2021-07-02 10:09:55 +02:00
Alekos Filini
7ed0676e44 Build the rpc feature on docs.rs 2021-07-02 10:09:09 +02:00
Alekos Filini
595e1bdbe1 [rpc] Mark the RPC backend as experimental 2021-07-02 10:07:44 +02:00
Alekos Filini
7555d3b430 Bump version to 0.9.0-rc.1 2021-07-02 10:06:31 +02:00
Alekos Filini
fbdee52f2f [verify] Build the verify feature on docs.rs 2021-07-01 16:37:03 +02:00
Alekos Filini
50597fd73f [verify] Use impl_error!() whenever possible 2021-07-01 16:37:00 +02:00
Alekos Filini
975905c8ea [verify] Add documentation 2021-07-01 16:36:56 +02:00
Alekos Filini
a67aca32c0 [verify] Cache txs to avoid multiple db/network lookups 2021-07-01 16:36:52 +02:00
Alekos Filini
7873dd5e40 [wallet] Verify unconfirmed transactions after syncing
Verify the unconfirmed transactions we download against the consensus
rules. This is currently exposed as an extra `verify` feature, since it
depends on a pre-release version of `bitcoinconsensus`.

Closes #352
2021-07-01 16:36:48 +02:00
Alekos Filini
a186d82f9a [wallet] Verify unconfirmed transactions after syncing
Verify the unconfirmed transactions we download against the consensus
rules. This is currently exposed as an extra `verify` feature, since it
depends on a pre-release version of `bitcoinconsensus`.

Closes #352
2021-07-01 16:36:42 +02:00
Riccardo Casatta
7109f7d9b4 fix readme 2021-06-29 11:35:02 +02:00
Riccardo Casatta
f52fda4b4b update github ci removing electrs download and fixing cache 2021-06-29 11:35:00 +02:00
Riccardo Casatta
a6be470fe4 use electrsd with feature to download the binary 2021-06-29 11:34:58 +02:00
Riccardo Casatta
8e41c4587d use bitcoind with feature to download the binary 2021-06-29 11:34:56 +02:00
Riccardo Casatta
2ecae348ea use cfg! instead of #[cfg] and use semver 2021-06-29 11:34:54 +02:00
Riccardo Casatta
f4ecfa0d49 Remove container and test blockchains downloading backends executables 2021-06-29 11:34:48 +02:00
Riccardo Casatta
696647b893 trigger electrs when polling 2021-06-29 11:32:30 +02:00
Riccardo Casatta
18dcda844f remove serial_test 2021-06-29 11:32:28 +02:00
Riccardo Casatta
6394c3e209 use bitcoind and electrsd crate to launch daemons 2021-06-29 11:32:26 +02:00
Riccardo Casatta
42adad7dbd bump bitcoind dep to 0.11.0 2021-06-29 11:32:24 +02:00
Alekos Filini
4498e0f7f8 [testutils] Allow the generated blockchain tests to access test_client 2021-06-29 11:32:20 +02:00
William Casarin
476fa3fd7d add Copy trait to Progress types 2021-06-23 08:31:55 -07:00
Alekos Filini
2755b09e7b Bump CI stable version to 1.53
Fixes #374
2021-06-21 12:16:54 +02:00
Alekos Filini
5e6286a493 Fix clippy warnings on 1.53
Fix `clippy::inconsistent_struct_constructor`: the constructor field
order was inconsistent with the struct declaration.
2021-06-21 12:16:45 +02:00
Alekos Filini
67714adc80 Fix CHANGELOG
The `Rpc` backend is not part of the release but it accidentally ended
up there during the merge
2021-06-21 09:07:15 +02:00
Alekos Filini
9ff86ea37c Merge commit 'refs/pull/370/head' of github.com:bitcoindevkit/bdk 2021-06-18 12:54:11 +02:00
Steve Myers
ceeb3a40cf [ci] Revert change to run_blockchain_tests.sh back to using container id 2021-06-15 15:57:14 -07:00
Steve Myers
e3316aee4c [ci] Change blockchain tests to use bitcoind rpc cookie authentication 2021-06-15 15:39:54 -07:00
Steve Myers
c2567b61aa Merge branch 'release/0.8.0' 2021-06-14 11:47:39 -07:00
Steve Myers
e1a77b87ab Fix CHANGELOG unreleased link 2021-06-14 11:43:48 -07:00
Steve Myers
5bf758b03a Add CHANGELOG v0.8.0 link 2021-06-14 11:40:50 -07:00
Riccardo Casatta
0bbfa5f989 make fee in TransactionDetails Option, add confirmation_time field as Option
confirmation_time contains both a block height and a block timestamp and is
Some only for confirmed transactions
2021-06-14 15:29:24 +02:00
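The shape this commit describes can be sketched as follows; the field types are simplified placeholders, not BDK's exact struct definitions:

```rust
// Height and timestamp travel together in one optional field.
struct BlockTime {
    height: u32,
    timestamp: u64,
}

struct TransactionDetails {
    txid: String,
    /// None when the fee cannot be determined.
    fee: Option<u64>,
    /// Some only for confirmed transactions.
    confirmation_time: Option<BlockTime>,
}
```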
Alekos Filini
18254110c6 Merge commit 'refs/pull/348/head' of github.com:bitcoindevkit/bdk 2021-06-11 11:41:23 +02:00
Alekos Filini
44217539e5 Bump version to 0.8.1-dev 2021-06-11 11:29:42 +02:00
Alekos Filini
33b45ebe82 Bump version to 0.8.0 2021-06-10 16:00:01 +02:00
Alekos Filini
2faed425ed Update CHANGELOG 2021-06-10 15:59:24 +02:00
Alekos Filini
2cc05c07a5 Bump version in src/lib.rs 2021-06-10 15:59:08 +02:00
Riccardo Casatta
fe371f9d92 Use bitcoin's base64 feature for Psbts 2021-06-10 15:50:44 +02:00
Tobin Harding
12de13b95c Remove redundant borrows
Clippy emits:

  warning: this expression borrows a reference

As suggested remove the borrows from the front of vars that are already references.
2021-06-10 13:16:07 +10:00
Alekos Filini
9205295332 Merge commit 'refs/pull/365/head' of github.com:bitcoindevkit/bdk into release/0.8.0 2021-06-09 16:05:16 +02:00
Tobin Harding
3b446c9e14 Use no_run instead of ignore
Rustdoc has a `no_run` attribute that builds but does not run example
code; this keeps the examples compiling as the codebase evolves.

Use `no_run` and fix the example code so it builds cleanly during test runs.

Some examples that require the `electrum` feature to be available have
been feature-gated to make sure they aren't accidentally compiled when
that feature is not enabled.

Co-authored-by: Alekos Filini <alekos.filini@gmail.com>
2021-06-09 11:29:57 +02:00
Alekos Filini
378167efca Remove explicit feature(external_doc)
It looks like this is now enabled by default as of `cargo 1.54.0-nightly (0cecbd673 2021-06-01)`
2021-06-09 11:27:25 +02:00
Alekos Filini
224be27aa8 Fix example/doctests format 2021-06-04 15:53:15 +02:00
Alekos Filini
4a23070cc8 [ci] Check fmt for examples/doctests 2021-06-04 15:07:02 +02:00
Riccardo Casatta
ba2e3042cc add details to TODO, format doc example 2021-06-04 15:05:35 +02:00
Alekos Filini
f8117c0f9f Bump version to 0.8.0-rc.1 2021-06-04 09:42:14 +02:00
Riccardo Casatta
1639984b56 move scan in setup 2021-06-03 15:26:47 +02:00
Riccardo Casatta
ab54a17eb7 update bitcoind dep 2021-06-03 11:07:39 +02:00
Riccardo Casatta
ae5aa06586 use storage address instead of satoshi's 2021-06-03 11:06:24 +02:00
Riccardo Casatta
ab98283159 always ask node for tx no matter capabilities 2021-06-03 10:56:02 +02:00
Riccardo Casatta
81851190f0 correctly initialize UTXO keychain kind 2021-06-03 10:56:02 +02:00
Riccardo Casatta
e1b037a921 change feature to execute sync from rpc to test-rpc 2021-06-03 10:56:01 +02:00
Riccardo Casatta
9b7ed08891 rename struct to CallResult 2021-06-03 10:56:01 +02:00
Riccardo Casatta
dffb753ce3 match also on signet 2021-06-03 10:56:00 +02:00
Riccardo Casatta
0b969657cd update changelog with rpc feature 2021-06-03 10:55:59 +02:00
Riccardo Casatta
bfef2e3cfe Implements RPC Backend 2021-06-03 10:55:58 +02:00
Paul Miller
0ec064ef13 Use AddressInfo in private methods 2021-05-27 17:11:16 -04:00
Paul Miller
6b60914ca1 return AddressInfo from get_address 2021-05-27 17:11:16 -04:00
Alekos Filini
881ca8d1e3 [signer] Add an option to explicitly allow using non-ALL sighashes
Instead of blindly using the `sighash_type` set in a psbt input, we
now only sign `SIGHASH_ALL` inputs by default, and require the user to
explicitly opt-in to using other sighashes if they desire to do so.

Fixes #350
2021-05-26 10:38:15 +02:00
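The opt-in behavior described in this commit can be sketched as follows; the `SignOptions` and field names here are assumptions for illustration, not necessarily BDK's exact API:

```rust
const SIGHASH_ALL: u32 = 0x01;

struct SignOptions {
    /// When false (the default), only SIGHASH_ALL inputs get signed.
    allow_all_sighashes: bool,
}

impl Default for SignOptions {
    fn default() -> Self {
        SignOptions { allow_all_sighashes: false }
    }
}

// Sign an input only if its sighash is ALL, or the user explicitly
// opted in to other sighash types.
fn should_sign_input(input_sighash: u32, opts: &SignOptions) -> bool {
    input_sighash == SIGHASH_ALL || opts.allow_all_sighashes
}
```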
Alekos Filini
5633475ce8 Merge commit 'refs/pull/347/head' of github.com:bitcoindevkit/bdk 2021-05-26 08:56:38 +02:00
LLFourn
ea8488b2a7 Initialize env_logger at start of blockchain tests 2021-05-21 13:21:59 +10:00
LLFourn
d2a981efee run_blockchain_tests.sh improvements 2021-05-21 13:21:41 +10:00
LLFourn
4c92daf517 Uppercase 'Test' so that github can see what's up
It is expecting something named 'Test electrum'
2021-05-20 14:33:02 +10:00
LLFourn
aba2a05d83 Add script for running the blockchain tests locally 2021-05-19 16:45:48 +10:00
LLFourn
5b194c268d Fix clippy warnings inside testutils macro
Now that it's inside the main repo clippy is having a go at me.
2021-05-19 16:45:48 +10:00
LLFourn
00bdf08f2a Remove testutils feature so doctests work again
I wanted to only conditionally compile testutils but it's needed in
doctests which we can't conditionally compile for:

https://github.com/rust-lang/rust/issues/67295
2021-05-19 16:45:48 +10:00
LLFourn
38b0470b14 Move blockchain related stuff to blockchain_tests 2021-05-19 16:45:48 +10:00
LLFourn
d60c5003bf Merge testutils crate into the main crate
This avoids having to keep the apis in sync between the macros and the
main project.
2021-05-19 16:45:48 +10:00
LLFourn
fcae5adabd Run blockchain tests on esplora
They were only being run on electrum before.
2021-05-19 15:47:44 +10:00
Steve Myers
9f04a9d82d Merge commit 'refs/pull/338/head' of github.com:bitcoindevkit/bdk 2021-05-18 16:41:45 -07:00
LLFourn
465ef6e674 Roll blockchain tests proc macro into normal macro
This means one less crate in the repo. Had to do a Default on TestClient
to satisfy clippy.
2021-05-18 20:02:33 +10:00
Steve Myers
aaa9943a5f Merge commit 'refs/pull/346/head' of github.com:bitcoindevkit/bdk 2021-05-14 10:49:30 -07:00
Alekos Filini
3897e29740 Fix changelog 2021-05-12 15:11:20 +02:00
Alekos Filini
8f06e45872 Bump version to 0.7.1-dev 2021-05-12 15:10:28 +02:00
Alekos Filini
766570abfd Bump version to 0.7.0 2021-05-12 14:20:58 +02:00
Alekos Filini
934ec366d9 Use the released testutils-macros 2021-05-12 14:20:23 +02:00
Alekos Filini
d0733e9496 Bump version in src/lib.rs 2021-05-12 14:19:58 +02:00
Alekos Filini
3c7a1f5918 Bump testutils-macros to v0.6.0 2021-05-12 14:19:00 +02:00
Alekos Filini
85aadaccd2 Update changelog in preparation of v0.7.0 2021-05-12 14:17:46 +02:00
Tobin Harding
fad0fe9f30 Update create transaction example code
The transaction builder changed a while ago; it looks like some of the
example code did not get updated.

Update the transaction creation code to use a mutable builder.
2021-05-12 14:13:23 +02:00
Tobin Harding
6546b77c08 Remove stale comments
The two fields this comment references are not `Option` type. This
comment seems to be stale.
2021-05-11 13:29:22 +10:00
Tobin Harding
e1066e955c Remove unneeded unit expression
Clippy emits:

  warning: unneeded unit expression

As suggested, remove the unneeded unit expression.
2021-05-11 10:52:08 +10:00
Tobin Harding
7f06dc3330 Clear clippy manual_map warning
The lint `manual_map` is new so we cannot explicitly allow it and
maintain backwards compatibility. Instead, allow all lints for
`get_utxo_for` with a comment explaining why.
2021-05-11 10:52:07 +10:00
Tobin Harding
de40351710 Use consistent field ordering
Clippy emits:

  warning: struct constructor field order is inconsistent with struct
  definition field order

As suggested, re-order the fields to be consistent with the struct
definition.
2021-05-11 10:51:44 +10:00
Tobin Harding
de811bea30 Use !any() instead of find()...is_none()
Clippy emits:

  warning: called `is_none()` after searching an `Iterator` with `find`

As suggested, use the construct: `!foo.iter().any(...)`
2021-05-11 10:51:44 +10:00
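The clippy suggestion above, in miniature: `find(...).is_none()` expresses "no element matches", which `!iter().any(...)` says directly. The function below is a made-up example, not code from the repo:

```rust
// Returns true when `idx` does not appear in `indices`.
fn is_unused(indices: &[u32], idx: u32) -> bool {
    // Before the fix: indices.iter().find(|&&i| i == idx).is_none()
    !indices.iter().any(|&i| i == idx)
}
```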
Tobin Harding
74cc80d127 Remove unnecessary clone
Clippy emits:

  warning: using `clone` on type `descriptor::policy::Condition` which
  implements the `Copy` trait

Remove the clone and rely on `Copy`.
2021-05-11 10:51:44 +10:00
Tobin Harding
009f68a06a Use assert!(foo) instead of assert_eq!(foo, true)
It is redundant to pass true/false to `assert_eq!` since `assert!`
already asserts true/false.

This may, however, be controversial if someone thinks that

```
    assert_eq!(foo, false);
```

is more clear than

```
    assert!(!foo);
```

Use `assert!` directly instead of `assert_eq!` with true/false argument.
2021-05-11 10:51:44 +10:00
Riccardo Casatta
47f26447da continue signing when finding already finalized inputs 2021-05-07 16:32:24 +02:00
Tobin Harding
12641b9e8f Use PsbtKey instead of PSBTKey
We recently converted uses of `PSBT` -> `Psbt` in line with idiomatic
Rust acronym identifiers. Do the same to `PSBTKey`.

Use `PsbtKey` instead of `PSBTKey` when aliasing the import of
`psbt::raw::Key` from `bitcoin` library.
2021-05-07 16:29:53 +02:00
Tobin Harding
aa3707b5b4 Use Psbt instead of PSBT
Idiomatic Rust uses lowercase for acronyms for all characters after the
first e.g. `std::net::TcpStream`. PSBT (Partially Signed Bitcoin
Transaction) should be rendered `Psbt` in Rust code if we want to write
idiomatic Rust.

Use `Psbt` instead of `PSBT` when aliasing the import of
`PartiallySignedTransaction` from `bitcoin` library.
2021-05-07 16:29:50 +02:00
Riccardo Casatta
f6631e35b8 continue signing when finding already finalized inputs 2021-05-07 13:52:20 +02:00
Alekos Filini
3608ff9f14 Merge commit 'refs/pull/341/head' of github.com:bitcoindevkit/bdk into release/0.7.0 2021-05-07 11:00:00 +02:00
Alekos Filini
7fdb98e147 Merge commit 'refs/pull/341/head' of github.com:bitcoindevkit/bdk 2021-05-07 10:59:32 +02:00
Tobin Harding
9aea90bd81 Use default: D mirroring Rust documentation
Currently we use `F: f` for the argument that is the default function
passed to `map_or_else` and pass a closure for the second argument. This
bent my brain while reading the documentation because the docs use
`default: D` for the first and `f: F` for the second. Although this is
totally trivial it makes deciphering the combinator chain easier if we
name the arguments the same way the Rust docs do.

Use `default: D` for the identifier of the default function passed into `map_or_else`.
2021-05-07 09:08:49 +10:00
Riccardo Casatta
898dfe6cf1 get psbt inputs with bounds check 2021-05-06 16:10:11 +02:00
Riccardo Casatta
7961ae7f8e Check index out of bound also for tx inputs not only for psbt inputs 2021-05-06 15:13:25 +02:00
Alekos Filini
8bf77c8f07 Bump version to 0.7.0-rc.1 2021-05-06 13:56:38 +02:00
Alekos Filini
3c7bae9ce9 Rewrite the non_witness_utxo check 2021-05-06 11:39:01 +02:00
Alekos Filini
17bcd8ed7d [signer] Replace force_non_witness_utxo with only_witness_utxo
Instead of providing an opt-in option to force the addition of the
`non_witness_utxo`, we will now add them by default and provide the
option to disable them when they aren't considered necessary.
2021-05-06 08:58:39 +02:00
Alekos Filini
b5e9589803 [signer] Adjust signing behavior with SignOptions 2021-05-06 08:58:38 +02:00
Alekos Filini
1d628d84b5 [signer] Fix error variant 2021-05-05 16:59:59 +02:00
Alekos Filini
b84fd6ea5c Fix import for FromStr 2021-05-05 16:59:57 +02:00
Alekos Filini
8fe4222c33 Merge commit 'refs/pull/336/head' of github.com:bitcoindevkit/bdk 2021-05-05 14:51:36 +02:00
codeShark149
e626f2e255 Added grcov based code coverage reporting in github action 2021-04-30 17:20:20 +05:30
LLFourn
5a0c150ff9 Make wallet methods take &mut psbt
Rather than consuming it because that is unergonomic.
2021-04-28 15:34:25 +10:00
Alekos Filini
00f07818f9 Merge commit 'refs/pull/321/head' of github.com:bitcoindevkit/bdk 2021-04-16 14:08:26 +02:00
Riccardo Casatta
136a4bddb2 Verify PSBT input satisfaction 2021-04-16 12:22:49 +02:00
Riccardo Casatta
ff7b74ec27 destructure tuple to improve clarity 2021-04-16 12:22:47 +02:00
Steve Myers
8c00326990 [ci] Revert fixed nightly-2021-03-23, use actual nightly 2021-04-15 10:15:13 -07:00
Riccardo Casatta
afcd26032d comment out println in tests, fix doc 2021-04-15 16:48:42 +02:00
Riccardo Casatta
8f422a1bf9 Add timelocks to policy satisfaction results
The signature logic has also been refactored to handle nested cases appropriately.
2021-04-15 15:57:35 +02:00
Alekos Filini
45983d2166 Merge commit 'refs/pull/322/head' of github.com:bitcoindevkit/bdk 2021-04-15 11:40:34 +02:00
Steve Myers
89cb4de7f6 [ci] Update 'test-readme-examples' job to use nightly-2021-03-23 2021-04-14 20:34:32 -07:00
Steve Myers
7ca0e0e2bd [ci] Update 'Tarpaulin to codecov.io' job to use nightly-2021-03-23 2021-04-14 20:33:42 -07:00
Alekos Filini
2bddd9baed Update CHANGELOG for v0.6.0 2021-04-14 18:49:52 +02:00
Alekos Filini
0135ba29c5 Bump version to 0.6.1-dev 2021-04-14 18:47:31 +02:00
Alekos Filini
549cd24812 Bump version to 0.6.0 2021-04-14 17:27:28 +02:00
Alekos Filini
a841b5d635 Use published bdk-testutils-macros 2021-04-14 17:26:40 +02:00
Alekos Filini
16ceb6cb30 Bump version of bdk-testutils-macros 2021-04-14 17:25:11 +02:00
Alekos Filini
edfd7d454c Merge commit 'refs/pull/325/head' of github.com:bitcoindevkit/bdk into release/0.6.0 2021-04-13 09:25:47 +02:00
Alekos Filini
1d874e50c2 Merge commit 'refs/pull/326/head' of github.com:bitcoindevkit/bdk into release/0.6.0 2021-04-13 09:25:27 +02:00
Richard Ulrich
98127cc5da Allow setting RBF when bumping the fee of a transaction. This makes it possible to bump the fee again later. 2021-04-13 09:18:46 +02:00
Richard Ulrich
e243107bb6 Adding tests to demonstrate that we can't keep RBF when bumping the fee of a transaction. 2021-04-13 09:18:43 +02:00
Steve Myers
237a8d4e69 [ci] Update 'Build docs' job to use nightly-2021-03-23 2021-04-12 10:33:54 -07:00
Steve Myers
7f4042ba1b Bump version to 0.6.0-rc.1 2021-04-09 15:30:34 -07:00
Steve Myers
3ed44ce8cf Remove unneeded script 2021-04-09 09:19:19 -07:00
Steve Myers
8e7d8312a9 [ci] Update 'build-test' job to clippy check all-targets 2021-04-08 14:44:35 -07:00
Steve Myers
4da7488dc4 Update 'cargo-check.sh' to not check +nightly 2021-04-08 14:36:07 -07:00
Steve Myers
e37680af96 Use .flatten() instead of .filter_map(|x| x), clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#filter_map_identity
2021-04-08 14:18:07 -07:00
Steve Myers
5f873ae500 Use .any() instead of .find().is_some(), clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#search_is_some
2021-04-08 14:18:07 -07:00
Steve Myers
2380634496 Use .get(0) instead of .iter().next(), clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#iter_next_slice
2021-04-08 14:18:07 -07:00
Steve Myers
af98b8da06 Compare float equality using error margin EPSILON, clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#float_cmp
2021-04-08 14:17:59 -07:00
Steve Myers
b68ec050e2 Remove redundant clone, clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#redundant_clone
2021-04-08 11:41:58 -07:00
Steve Myers
ac7df09200 Remove needlessly taken reference of both operands, clippy warning
https://rust-lang.github.io/rust-clippy/master/index.html#op_ref
2021-04-08 11:39:38 -07:00
Riccardo Casatta
192965413c Convert upper-case acronyms as suggested by CamelCase convention
see https://rust-lang.github.io/rust-clippy/master/index.html#upper_case_acronyms
2021-04-07 22:14:54 +02:00
Riccardo Casatta
745be7bea8 remove format! from assert! (will be an error in rust edition 2021) 2021-04-07 22:09:08 +02:00
Riccardo Casatta
b6007e05c1 upgrade CI rust version to 1.51.0 2021-04-07 22:08:56 +02:00
Steve Myers
f53654d9f4 Merge commit 'refs/pull/314/head' of github.com:bitcoindevkit/bdk 2021-04-06 10:21:07 -07:00
Daniel Karzel
e5ecc7f541 Avoid over-/underflow error in coin_select
Fixes edge cases involving small UTXOs (where value < fee) in which the
coin_select calculation would panic with overflow/underflow errors.
Bitcoin's supply is capped at 21 million BTC (2.1 * 10^15 satoshi), so any
Bitcoin amount fits into an i64.
2021-04-06 10:21:55 +10:00
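The fix described above can be sketched like this; the function name is illustrative, not the repo's actual code:

```rust
// A UTXO worth less than the fee needed to spend it makes `value - fee`
// underflow in u64. Every bitcoin amount fits in i64 (21M BTC is about
// 2.1 * 10^15 sats, well below i64::MAX), so the signed difference is
// always well-defined, even when negative.
fn effective_value(value_sats: u64, spend_fee_sats: u64) -> i64 {
    value_sats as i64 - spend_fee_sats as i64
}
```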
LLFourn
882a9c27cc Use tagged serialization for blockchain config
also make the config types Clone and PartialEq
2021-04-03 15:30:49 +11:00
Steve Myers
1e6b8e12b2 Merge commit 'refs/pull/310/head' of github.com:bitcoindevkit/bdk 2021-03-31 16:06:53 -07:00
Steve Myers
b226658977 [ci] update MSRV to 1.46.0 2021-03-29 11:17:50 -07:00
Alekos Filini
6d6776eb58 Merge branch 'release/0.5.1' 2021-03-29 19:48:00 +02:00
Alekos Filini
f1f844a5b6 Bump version to 0.5.2-dev 2021-03-29 19:10:47 +02:00
Alekos Filini
a3e45358de Bump version to 0.5.1 2021-03-29 18:28:06 +02:00
Alekos Filini
07e79f6e8a Update CHANGELOG.md 2021-03-29 18:28:04 +02:00
Steve Myers
d94b8f87a3 Pin hyper version to =0.14.4 2021-03-29 10:12:56 +02:00
Steve Myers
fdb895d26c Update DEVELOPMENT_CYCLE for unreleased dev-dependencies 2021-03-22 10:48:39 -07:00
Steve Myers
7041e96737 Fix new test to use new get_address() fn 2021-03-22 10:26:56 -07:00
Steve Myers
199f716ebb Fix bdk-testutils-macros version 2021-03-22 10:24:21 -07:00
Steve Myers
b12e358c1d Fix 0.5.1-dev CHANGELOG.md 2021-03-20 11:42:00 -07:00
Alekos Filini
f786f0e624 Merge branch 'release/0.5.0' of github.com:bitcoindevkit/bdk 2021-03-17 22:27:44 +01:00
Alekos Filini
71e0472dc9 Bump version to 0.5.1-dev 2021-03-17 20:58:23 +01:00
Alekos Filini
f7944e871b Bump version to 0.5.0 2021-03-17 15:21:37 +01:00
Alekos Filini
2fea1761c1 Bump deps version 2021-03-17 15:21:07 +01:00
Alekos Filini
fa27ae210f Update version in lib.rs 2021-03-17 15:14:35 +01:00
Alekos Filini
46fa41470e Update CHANGELOG with the new release tag 2021-03-17 15:13:46 +01:00
Alekos Filini
c456a252f8 Merge commit 'refs/pull/296/head' of github.com:bitcoindevkit/bdk 2021-03-17 11:30:31 +01:00
Riccardo Casatta
d837a762fc update changelog and fix docs 2021-03-17 11:24:48 +01:00
davemo88
e82dfa971e brevity 2021-03-16 10:20:07 -04:00
davemo88
cc17ac8859 update changelog 2021-03-15 21:58:03 -04:00
davemo88
3798b4d115 add get_psbt_input 2021-03-15 21:50:51 -04:00
Steve Myers
2d0f6c4ec5 [wallet] Add get_address(AddressIndex::Reset(u32)), update CHANGELOG 2021-03-15 09:13:23 -07:00
Steve Myers
f3b475ff0e [wallet] Refactor get_*_address() into get_address(AddressIndex), update CHANGELOG 2021-03-15 08:58:11 -07:00
Steve Myers
41ae202d02 [wallet] Add get_unused_address() function, update CHANGELOG 2021-03-15 08:58:09 -07:00
Steve Myers
fef6176275 [wallet] Add fetch_index() helper function 2021-03-15 08:58:07 -07:00
Alekos Filini
8ebe7f0ea5 Merge commit 'refs/pull/308/head' of github.com:bitcoindevkit/bdk into release/0.5.0 2021-03-15 10:53:49 +01:00
Alekos Filini
eb85390846 Merge commit 'refs/pull/309/head' of github.com:bitcoindevkit/bdk into release/0.5.0 2021-03-15 10:53:29 +01:00
davemo88
dc83db273a better derivation path building 2021-03-11 21:54:00 -05:00
davemo88
201bd6ee02 better derivation path building 2021-03-11 21:35:16 -05:00
davemo88
396ffb42f9 handle descriptor xkey origin 2021-03-11 17:39:02 -05:00
Steve Myers
9cf62ce874 [ci] Manually install libclang-common-10-dev to 'check-wasm' job 2021-03-11 11:10:10 -08:00
Alekos Filini
9c6b98d98b Bump version to 0.5.0-rc.1 2021-03-11 10:07:26 +01:00
Riccardo Casatta
14ae64e09d [policy] Populate satisfaction with signatures already present in a PSBT 2021-03-08 16:58:56 +01:00
Riccardo Casatta
48215675b0 [policy] uncomment and update 4 tests: 2 ignored and 2 restored 2021-03-08 16:51:43 +01:00
Riccardo Casatta
37fa35b24a [policy] pass existing context instead of new one 2021-03-08 16:51:42 +01:00
Riccardo Casatta
23ec9c3ba0 [policy] pass secp context to setup_keys 2021-03-08 16:51:40 +01:00
Steve Myers
e33a6a12c1 Update README license badge 2021-03-05 16:48:57 -08:00
Steve Myers
12ae1c3479 Update license to Apache 2.0 or MIT, copyright to Bitcoin Dev Kit Developers 2021-03-03 13:23:25 -08:00
Thomas Eizinger
fdde0e691e Make constructor functions on FeeRate const
This allows `FeeRate`s to be stored inside `const`s.

For example:

const MY_FEE_RATE: FeeRate = FeeRate::from_sat_per_vb(10.0);

Unfortunately, floating point maths inside const expressions is
still unstable, hence we cannot make `from_btc_per_kvb` const.
2021-03-01 11:04:39 +11:00
Alekos Filini
1cbd47b988 Merge commit 'refs/pull/285/head' of github.com:bitcoindevkit/bdk 2021-02-26 10:14:01 +01:00
Alekos Filini
e0183ed5c7 Merge commit 'refs/pull/279/head' of github.com:bitcoindevkit/bdk 2021-02-26 10:09:24 +01:00
Alekos Filini
dae900cc59 Merge commit 'refs/pull/297/head' of github.com:bitcoindevkit/bdk 2021-02-26 10:00:01 +01:00
Alekos Filini
4c2042ab01 [descriptor] Ensure that there are no duplicated keys 2021-02-26 09:46:38 +01:00
Thomas Eizinger
2f0ca206f3 Update electrum-client to 0.7 2021-02-26 14:09:46 +11:00
LLFourn
ac7c1bd97b Clean up add_foreign_utxo tests a bit
Noticed some suboptimal things while reviewing myself.
2021-02-26 13:33:52 +11:00
LLFourn
d9a102afa9 Improve docs of satisfaction_weight 2021-02-26 13:33:52 +11:00
Lloyd Fournier
7c1dcd8a72 Apply typo fixes from @tcharding
Co-authored-by: Tobin C. Harding <me@tobin.cc>
2021-02-26 13:33:52 +11:00
LLFourn
1fbfeabd77 Added add_foreign_utxo
To allow adding UTXOs external to the current wallet.
The caller must provide the psbt::Input so we can create a coherent PSBT
at the end and so this is compatible with existing PSBT workflows.

Main changes:

- There are now two types of UTXOs, local and foreign reflected in a
`Utxo` enum.
- `WeightedUtxo` now captures floating `(Utxo, usize)` tuples
- `CoinSelectionResult` now has methods on it for distinguishing between
local amount included vs total.
2021-02-26 13:33:52 +11:00
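The local/foreign split described above can be sketched as follows; the field types are simplified placeholders (String outpoints, plain sat values), not BDK's actual definitions:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Utxo {
    /// Owned by this wallet: we hold the keys that can spend it.
    Local { outpoint: String, value: u64 },
    /// External to the wallet: the caller supplied the psbt::Input data.
    Foreign { outpoint: String, value: u64 },
}

impl Utxo {
    fn value(&self) -> u64 {
        match self {
            Utxo::Local { value, .. } | Utxo::Foreign { value, .. } => *value,
        }
    }
}

// Distinguishes the local amount from the total, as the commit says
// CoinSelectionResult now does.
fn local_selected_amount(selected: &[Utxo]) -> u64 {
    selected
        .iter()
        .filter(|u| matches!(u, Utxo::Local { .. }))
        .map(Utxo::value)
        .sum()
}
```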
LLFourn
9a918f285d Make TxBuilder actually Clone
it derived Clone, but in practice it was never Clone because some of the
parameters were not Clone.
2021-02-26 13:33:52 +11:00
LLFourn
a7183f34ef s/UTXO/LocalUtxo/g
Since this struct has a "keychain" it is not a general "UTXO" but a
local wallet UTXO.
2021-02-26 13:33:52 +11:00
Tobin Harding
bda416df0a Use mixed order insertions
Currently we have a unit test to test that signers are sorted by
ordering. We call `add_external` to add them but currently we add them
in the same order we expect them to be in. This means if the
implementation happens to insert them simply in the order they are
added (i.e. insert to end of list) then this test will still pass.

Insert in a mixed order, including one lower followed by one higher -
this ensures we are not inserting at the front or at the back but are
actually sorting based on the `SignerOrdering`.
2021-02-24 13:39:36 +11:00
Tobin Harding
a838c2bacc Use id() for DummySigner comparison
If we give the `DummySigner` a valid identifier then we can use this to
do comparison.

Half the time we do the comparison we only have a `dyn Signer`, so we
cannot use `PartialEq`; add a helper function to check equality (this is
in test code so it's not too ugly).

Thanks @afilini for the suggestion.
2021-02-24 13:37:41 +11:00
Tobin Harding
d2a094aa4c Align multi-line string
Recently we shortened the first line of a multi-line string and failed
to re-align the rest of the lines.

Thanks @afilini
2021-02-24 13:30:49 +11:00
Tobin Harding
bdb2a53597 Add cargo check script
We build against various targets on CI, in order to not abuse CI its
nice to see if code is good before pushing.

Add a script that runs `cargo check` with various combinations of
features and targets in order to enable thorough checking of the project
source code during development.
2021-02-24 13:30:49 +11:00
Tobin Harding
97ad0f1b4f Remove unused macro_use
Found by Clippy, we don't need this `macro_use` statement.
2021-02-24 13:30:48 +11:00
Tobin Harding
2b5e177ab2 Use lazy_static
`lazy_static` is not imported; this error is hidden behind the
`compact_filters` feature flag.
2021-02-24 13:30:48 +11:00
Tobin Harding
bfe29c4ef6 Use map instead of and_then
As suggested by Clippy, use `map` instead of `and_then` with an inner
option.
2021-02-24 13:30:48 +11:00
Tobin Harding
e35601bb19 Use vec! instead of mut and push
As suggested by Clippy, use the `vec!` macro directly instead of
declaring a mutable vector and pushing elements onto it.
2021-02-24 13:30:48 +11:00
Tobin Harding
24df438607 Remove useless question mark operator
Clippy emits:

  warning: Question mark operator is useless here

No need to use the `?` operator inside an `Ok()` when returning; just
return the expression directly.
2021-02-24 13:30:48 +11:00
Tobin Harding
cb3b8cf21b Do not compare vtable
Clippy emits error:

 comparing trait object pointers compares a non-unique vtable address

The vtable is an implementation detail and may change in the future; we
should not compare vtable addresses for equality. Instead we can
get a pointer to the data field of the fat pointer and compare on that.
2021-02-24 13:30:48 +11:00
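The fix described above can be sketched like this (illustrative trait and types, not BDK's actual `Signer`):

```rust
trait Signer {
    fn sign(&self) -> u64;
}

struct Dummy(u64);

impl Signer for Dummy {
    fn sign(&self) -> u64 {
        self.0
    }
}

// Compare only the data half of the fat pointer: the vtable address is an
// implementation detail that may differ between codegen units or builds.
fn same_signer(a: &dyn Signer, b: &dyn Signer) -> bool {
    std::ptr::eq(a as *const dyn Signer as *const (), b as *const dyn Signer as *const ())
}

fn main() {
    let a = Dummy(1);
    let b = Dummy(1);
    assert!(same_signer(&a, &a)); // same instance
    assert!(!same_signer(&a, &b)); // equal contents, different instances
}
```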
Tobin Harding
0e6add0cfb Refactor db/batch matching
Remove the TODO; refactor matching to correctly handle conditionally
built `Sled` variants. Use `unreachable` instead of `unimplemented` with
a comment hinting that this is a bug, this makes it explicit, both at
runtime and when reading the code, that this match arm should not be hit.
2021-02-24 13:30:47 +11:00
Tobin Harding
343e97da0e Conditionally compile constructor
The `ChunksIterator` constructor is only used when either `electrum` or
`esplora` features are enabled. Conditionally build it so that we do not
get a clippy warning when building without these features.
2021-02-24 13:30:47 +11:00
Tobin Harding
ba8ce7233d Allow mutex_atomic
Clippy complains about use of a mutex, suggesting we use an
`AtomicUsize`. While the same functionality _could_ be achieved using an
`AtomicUsize` and a CAS loop, it makes the code harder to reason about
for little gain. Let's just quieten clippy with an allow attribute and
document why we did so.
2021-02-24 13:30:47 +11:00
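A minimal sketch of the approach (hypothetical counter, not the actual BDK code):

```rust
use std::sync::Mutex;

// The allow attribute (plus a comment) records why clippy's suggestion to
// use an AtomicUsize is declined: a CAS loop would be harder to reason about.
#[allow(clippy::mutex_atomic)]
fn make_counter() -> Mutex<usize> {
    Mutex::new(0)
}

fn main() {
    let counter = make_counter();
    *counter.lock().unwrap() += 1;
    assert_eq!(*counter.lock().unwrap(), 1);
}
```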
Tobin Harding
35184e6908 Use default pattern
Clippy emits warning:

  warning: field assignment outside of initializer for an instance
  created with Default::default()

Do as suggested by clippy and use the default init pattern.

```
    let foo = Foo {
        bar: ...,
        ..Default::default()
    };
```
2021-02-24 13:30:47 +11:00
Tobin Harding
824b00c9e0 Use next instead of nth(0)
As suggested by clippy we can use `.next()` on an iterator instead of
`nth(0)`. Although arguably no clearer, and certainly no worse, it keeps
clippy quiet and a clean lint is a good thing.
2021-02-24 13:30:47 +11:00
Tobin Harding
79cab93d49 Use count instead of collect and len
Clippy emits warning:

	warning: avoid using `collect()` when not needed

As suggested by clippy just use `count` directly on the iterator instead
of `collect` followed by `len`.
2021-02-24 13:30:47 +11:00
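For illustration, the lint boils down to this (toy iterator, not the original code):

```rust
fn main() {
    // Count matching items directly instead of collecting into a Vec and
    // calling len(), avoiding the intermediate allocation.
    let evens = (0..10).filter(|n| n % 2 == 0).count();
    assert_eq!(evens, 5);
}
```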
Tobin Harding
2afc9faa08 Remove needless explicit reference
Clippy emits warning:

	warning: needlessly taken reference of both operands

Remove the explicit references as suggested.
2021-02-24 13:30:46 +11:00
Tobin Harding
0e99d02fbe Remove redundant calls to clone
No need to clone copy types, found by clippy.
2021-02-24 13:30:46 +11:00
Tobin Harding
3a0a1e6d4a Remove static lifetime
Const str types do not need an explicit lifetime; remove it. Found by
clippy.
2021-02-24 13:30:46 +11:00
Tobin Harding
2057c35468 Use ! is_empty instead of len > 0
As directed by clippy use `!a.is_empty()` instead of `a.len() > 0`.
2021-02-24 13:30:46 +11:00
Tobin Harding
5eaa3b0916 Use unwrap_or_else
As directed by clippy use `unwrap_or_else` in order to take advantage of
lazy evaluation.
2021-02-24 13:30:46 +11:00
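The point of the lint, sketched with a hypothetical fallback function:

```rust
fn expensive_default() -> String {
    // Stand-in for a costly computation; with `unwrap_or_else` it only
    // runs when the Option is actually None.
    "fallback".to_string()
}

fn main() {
    let present: Option<String> = Some("value".to_string());
    let missing: Option<String> = None;
    assert_eq!(present.unwrap_or_else(expensive_default), "value");
    assert_eq!(missing.unwrap_or_else(expensive_default), "fallback");
}
```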
Steve Myers
4ad0f54c30 [ci] Rename MAGICAL_ env vars to BDK_, for tests use wallet name in RPC calls 2021-02-21 19:47:06 -08:00
Steve Myers
eeff3b5049 [ci] Update start-core.sh to create default wallet for bitcoind 0.21.0 2021-02-21 19:04:52 -08:00
Steve Myers
5e352489a0 Merge branch 'release/0.4.0' 2021-02-17 18:33:11 -08:00
Steve Myers
7ee262ef4b Fix CHANGELOG 'Unreleased' link 2021-02-17 18:30:18 -08:00
Steve Myers
2759231f7b Bump version to 0.4.1-dev 2021-02-17 16:32:23 -08:00
Steve Myers
e3f893dbd1 Bump version to 0.4.0 2021-02-17 12:08:43 -08:00
Steve Myers
3f5513a2d6 Update 'bdk-macros', 'bdk-testutils', 'bdk-testutils-macros' dep versions 2021-02-17 12:08:41 -08:00
Steve Myers
fcf5e971a6 Bump 'bdk-macros' version to 0.3.0 2021-02-17 12:08:39 -08:00
Steve Myers
cdf7b33104 Bump 'bdk-testutils' version to 0.3.0 2021-02-17 12:08:37 -08:00
Steve Myers
7bbff79d4b Bump 'bdk-testutils-macros' version to 0.3.0 2021-02-17 12:08:35 -08:00
Steve Myers
3a2b8bdb85 Small CHANGELOG cleanup 2021-02-17 12:08:33 -08:00
Alekos Filini
7843732e17 [descriptor] Perform additional checks before using a descriptor
Fixes #287
2021-02-17 12:08:31 -08:00
Alekos Filini
fa5a5c8c05 Merge commit 'refs/pull/290/head' of github.com:bitcoindevkit/bdk 2021-02-16 11:54:52 -05:00
Lloyd Fournier
6092c6e789 Don't fix tokio minor version
This is also what they give as an example in their docs: https://docs.rs/tokio/1.2.0/tokio/
2021-02-16 09:57:54 -05:00
Lloyd Fournier
7fe5a30424 Don't fix tokio minor version
This is also what they give as an example in their docs: https://docs.rs/tokio/1.2.0/tokio/
2021-02-16 16:31:55 +11:00
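Not pinning the minor version relies on Cargo's caret semantics; a sketch of what the relaxed spec looks like in `Cargo.toml` (the feature list is an assumption for illustration):

```toml
[dependencies]
# "1" means "^1": any release >= 1.0.0 and < 2.0.0 is accepted,
# so minor and patch updates are picked up automatically.
tokio = { version = "1", features = ["macros"] }
```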
Steve Myers
a82b2155e9 [ci] Manually set rust stable version in CI pipeline 2021-02-15 14:33:10 -08:00
Alekos Filini
b61427c07b [policy] Allow specifying a policy path for Multisig
While technically it's not required since there are no timelocks inside,
it's still less confusing for the end user if we allow this instead of
failing like we do currently.
2021-02-13 11:17:07 -05:00
Alekos Filini
fa2610538f [policy] Remove the TooManyItemsSelected error
The `TooManyItemsSelected` error has been removed, since it's not technically an
error but potentially more of an "over-constraint" on a tx: for instance,
given a `thresh(3,pk(a),pk(b),older(10),older(20))` descriptor one could create
a spending tx with the `[0,1,2]` items that would only be spendable after `10`
blocks, or a tx with the `[0,2,3]` items that would be spendable after `20`.

In this case specifying more items than the threshold would create a tx with
the maximum constraint possible, in this case the `20` blocks. This is not
necessarily an error, so we should allow it without failing.
2021-02-13 11:10:31 -05:00
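The behavior described can be illustrated with a toy calculation (a hypothetical helper, not BDK's policy code): for `thresh(3,pk(a),pk(b),older(10),older(20))`, the effective relative timelock of a selection is the maximum `older()` value among the chosen items.

```rust
// Hypothetical sketch: the effective CSV constraint is the max of the
// `older()` values among the selected policy items (None if no timelock).
fn effective_csv(selected_older_values: &[u32]) -> Option<u32> {
    selected_older_values.iter().copied().max()
}

fn main() {
    assert_eq!(effective_csv(&[10]), Some(10));     // items [0,1,2]: spendable after 10
    assert_eq!(effective_csv(&[10, 20]), Some(20)); // items [0,2,3]: over-constrained, not an error
    assert_eq!(effective_csv(&[]), None);           // no timelock selected
}
```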
Alekos Filini
d0ffcdd009 Merge branch 'master' into release/0.4.0
Merging in fixes for the CI after Rust 1.50.0
2021-02-13 11:08:03 -05:00
Steve Myers
1c6864aee8 Rename ToDescriptorKey to IntoDescriptorKey 2021-02-12 23:23:20 -08:00
Steve Myers
d638da2f10 Rename ToWalletDescriptor to IntoWalletDescriptor 2021-02-12 23:23:20 -08:00
Steve Myers
2f7513753c Update CHANGELOG for rust 1.50.0 clippy changes 2021-02-12 23:23:09 -08:00
Steve Myers
c90a1f70a6 Fix clippy warn on compact_filters peer::_recv() 2021-02-12 22:23:48 -08:00
Steve Myers
04348d0090 Fix clippy warning 'wrong_self_convention' 2021-02-12 22:23:48 -08:00
Steve Myers
eda23491c0 Fix clippy warning 'unnecessary_wraps' 2021-02-12 22:23:29 -08:00
Alekos Filini
dccf09861c Update version in the examples 2021-02-11 09:29:44 -05:00
Alekos Filini
02b9eda6fa Update CHANGELOG for release v0.4.0 2021-02-11 09:29:27 -05:00
Alekos Filini
6611ef0e5f Bump version to 0.4.0-rc.1 2021-02-11 09:27:34 -05:00
Riccardo Casatta
db5e663f05 compact filters balance example 2021-02-10 12:38:07 +01:00
Alekos Filini
c4f21799a6 Merge commit 'refs/pull/278/head' of github.com:bitcoindevkit/bdk 2021-02-05 17:22:52 -05:00
Alekos Filini
fedd92c022 Properly handle the Signet network
Closes #62
2021-02-05 16:51:48 -05:00
Alekos Filini
19eca4e2d1 [compact_filters] Use the new rust-bitcoin API 2021-02-05 16:51:46 -05:00
Alekos Filini
023dabd9b2 Update changelog 2021-02-05 16:51:44 -05:00
Alekos Filini
b44d1f7a92 Update bitcoin, miniscript, electrum-client 2021-02-05 16:51:41 -05:00
Alekos Filini
3d9d6fee07 Update bitcoin, miniscript, electrum-client 2021-02-05 09:11:27 -05:00
Alekos Filini
4c36020e95 Merge commit 'refs/pull/274/head' of github.com:bitcoindevkit/bdk 2021-02-03 10:14:16 -05:00
Alekos Filini
6d01c51c63 Un-pin the version of cc
Fixes #183
2021-02-03 09:57:12 -05:00
Lucas Soriano del Pino
693fb24e02 Emit specific compile error if incompatible features are enabled
This is motivated by the feature `electrum` being part of the
`default` features of this crate. It is easy to naively enable
`esplora` and `async-interface` and forget that `electrum` is enabled
by default, running into not so obvious compile errors.
2021-02-03 18:16:13 +11:00
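A sketch of the technique (feature names taken from the commit message; the error text is illustrative): an invalid feature combination becomes a clear, early compile error.

```rust
// When both mutually exclusive features are on, compilation stops here
// with a readable message instead of obscure downstream errors.
#[cfg(all(feature = "electrum", feature = "async-interface"))]
compile_error!(
    "features `electrum` (enabled by default) and `async-interface` cannot be \
     combined; build with `--no-default-features`"
);

fn main() {
    // With a valid feature set the item above expands to nothing.
    println!("feature combination ok");
}
```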
LLFourn
6689384c8a Merge branch 'master' into make_txbuilder_take_ref_to_wallet 2021-01-30 13:12:13 +11:00
LLFourn
35a61f5759 Fix whitespace and curse emacs 2021-01-30 13:05:23 +11:00
LLFourn
d0ffd5606a Fix CHANGELOG to mention s/utxos/add_utxos/ 2021-01-30 12:58:05 +11:00
Alekos Filini
c2b1268675 Merge commit 'refs/pull/270/head' of github.com:bitcoindevkit/bdk 2021-01-29 15:24:39 -05:00
Alekos Filini
ccbbad3e9e [keys] Improve the API of DerivableKey
A new `ExtendedKey` type has been added, which is basically an enum of
`bip32::ExtendedPubKey` and `bip32::ExtendedPrivKey`, with some extra metadata
regarding the `ScriptContext`.

This type has some methods that make it very easy to extract its content as
either an `xprv` or `xpub`.

The `DerivableKey` trait has been updated so that the user now only has to
implement a method (`DerivableKey::into_extended_key()`) to perform the
conversion into an `ExtendedKey`.

The method that was previously called `add_metadata()` has now been renamed
to `into_descriptor_key()`, and it has
a blanket implementation.
2021-01-29 15:21:36 -05:00
LLFourn
dbf8cf7674 Make maintain_single_recipient return a Result
preferable to panicking.
2021-01-29 12:33:07 +11:00
Steve Myers
eb96ac374b [ci] Update rust toolchains 2021-01-27 14:53:48 -08:00
Alekos Filini
c431a60171 [signer] Add Signer::id()
Closes #261
2021-01-27 11:43:28 -05:00
Alekos Filini
2e0ca4fe05 Fix the crate version in src/lib.rs 2021-01-26 09:34:14 -05:00
Alekos Filini
df32c849bb Add a function to return the version of BDK at runtime 2021-01-25 15:14:54 -05:00
Lloyd Fournier
33426d4c3a Merge branch 'master' into make_txbuilder_take_ref_to_wallet 2021-01-23 17:36:01 +11:00
Alekos Filini
03e6e8126d Merge commit 'refs/pull/174/head' of github.com:bitcoindevkit/bdk 2021-01-22 10:38:17 -05:00
LLFourn
ff10aa5ceb Add "add_utxos" method on TxBuilder
To replace the previously existing ".utxos"
2021-01-22 15:08:31 +11:00
LLFourn
21d382315a Remove not very useful comment
Thanks @tcharding.
2021-01-22 15:08:30 +11:00
LLFourn
6fe3be0243 Derive Clone + Debug for TxBuilder
And make Wallet Debug while I'm at it.
2021-01-22 15:08:30 +11:00
LLFourn
10fcba9439 Revert back to Vec to hold utxos in builder
Due to brain malfunction I made utxos into a BTree. This made a test
pass but is incorrect. The test itself was incorrect as per comment in

https://github.com/bitcoindevkit/bdk/pull/258#issuecomment-758370380

So I (1) reverted utxos back to a Vec, (2) fixed the test and expanded
the comment in the test.
2021-01-22 15:08:30 +11:00
LLFourn
890d6191a1 Remove Option trickery from TxBuilder API
see: https://github.com/bitcoindevkit/bdk/pull/258#issuecomment-754685962
2021-01-22 15:08:30 +11:00
LLFourn
735db02850 Assert that .finish() hasn't been called already in coin_selection 2021-01-22 14:33:37 +11:00
LLFourn
7bf46c7d71 Add comment explaining why params and coin_selection are Options 2021-01-22 14:33:37 +11:00
LLFourn
8319b32466 Fix wrong doc links 2021-01-22 14:33:37 +11:00
LLFourn
0faca43744 Make testutils dependency path relative 2021-01-22 14:33:37 +11:00
LLFourn
6f66de3d16 Update Changelog for Tx creation overhaul 2021-01-22 14:33:37 +11:00
LLFourn
5fb7fdffe1 [wallet] Use doctest_wallet!() to remove some no_runs from doctests
...and improve the fee bumping example while trying to make it
no_run (but failed).
2021-01-22 14:33:37 +11:00
LLFourn
7553b905c4 [wallet] Overhaul TxBuilder internals and externals
Fixes #251

TxBuilders are now not created directly but are created through the
wallet with `build_tx` and `build_fee_bump`.
The advantages of this realised in this commit are:

1. Normal tx creation and fee bumping use the code internally. The only
difference between normal tx and fee bump is how the builder is created.
2. The TxBuilder now has a reference to the wallet and can therefore
look up things as methods are called on it. `add_utxo` now uses this to
look up UTXO data when it is called (rather than having to do it and
possibly error later on).

To support these changes `get_utxo` and `get_descriptor_for_keychain`
public methods have been added to Wallet. I could have kept them
pub(crate) but they seem like fine APIs to have publicly.
2021-01-22 14:33:37 +11:00
LLFourn
f74f17e227 Change "received_tx" into "populate_test_db" macro
A `#[cfg(test)]` function is not as helpful as a macro since it can't be
called in the context of a doctest.

Also adds doctest_wallet macro which can be used to create a wallet in a
doctest.
2021-01-22 14:23:36 +11:00
Alekos Filini
7566904926 Merge branch 'release/0.3.0' 2021-01-20 10:59:09 -05:00
Alekos Filini
1420cf8d0f Bump version to 0.3.1-dev 2021-01-20 10:58:04 -05:00
Alekos Filini
bddd418c8e Bump version to 0.3.0 2021-01-20 10:39:24 -05:00
Alekos Filini
49db898acb Update CHANGELOG.md in preparation of tag v0.3.0 2021-01-20 10:27:28 -05:00
Tobin Harding
01585227c5 Use contains combinator
As suggested by clippy, use the `contains` combinator instead of doing
manual range check on floats.
2021-01-18 11:39:37 -08:00
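The lint in question, shown on a toy value (not the original fee-rate code):

```rust
fn main() {
    let fee_rate = 1.5_f32;
    // `(lo..hi).contains(&x)` replaces a manual `x >= lo && x < hi` check.
    assert!((1.0..5.0).contains(&fee_rate));
    assert!(!(1.0..5.0).contains(&5.0_f32));
}
```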
Tobin Harding
03b7c1b46b Use contains combinator
As suggested by clippy, use the `contains` combinator instead of doing
manual range check on floats.
2021-01-18 10:46:12 -08:00
Tobin Harding
4686ebb420 Add full stops to list items
Super anal patch to make list items uniform, add full stop to the items
where it is missing.
2021-01-18 10:46:10 -08:00
Tobin Harding
082db351c0 Remove unexplainable newlines
It seems the documentation of this project uses arbitrarily long
lines (i.e. no set column width) along with the occasional newline
before some sentences (within a paragraph). When to split a sentence
onto a newline does not seem to follow any discernible pattern.

There are a few instances of newline characters appearing randomly in
the middle of a sentence and since, as observed above, there is no
fixed column width in use, these newlines are out of place.

Remove them so the documentation is slightly more uniform and nice to
read in an editor.

This patch is whitespace only, no other textual changes.
2021-01-18 10:46:08 -08:00
Tobin Harding
84db6ce453 Do minor grammar fix 2021-01-18 10:46:06 -08:00
Justin Moon
52b45c5b89 [wallet] Add "needed" and "available" metadata to Error::InsufficientFunds 2021-01-18 11:15:10 +01:00
Justin Moon
5c82789e57 Update CHANGELOG 2021-01-13 23:04:23 -06:00
Justin Moon
7bc8c3c380 [wallet] Add "needed" and "available" metadata to Error::InsufficientFunds 2021-01-13 23:00:37 -06:00
Justin Moon
813c1ddcd0 [blockchain] Upgrade tokio
- Also upgrade reqwest
- Switch to `tokio::runtime::Builder::new_single_thread()` because
`tokio::runtime::Runtime::new()` changed its behavior to create a
multithreaded runtime.
- `enable_all` enables time and io resource drivers as explained
[here](https://docs.rs/tokio/0.2.24/tokio/runtime/index.html#resource-drivers)
2021-01-13 22:58:02 -06:00
Alekos Filini
733355a6ae Bump version to 0.3.0-rc.1 2021-01-12 21:41:30 +01:00
Alekos Filini
6955a7776d Merge commit 'refs/pull/264/head' of github.com:bitcoindevkit/bdk 2021-01-12 14:02:41 +01:00
Alekos Filini
bf04a2cf69 descriptor: Use DescriptorError instead of Error when reasonable
Change the return type of the `descriptor!()` macro and `ToWalletDescriptor` to
avoid having to map errors.

Also introduce more checks to validate descriptors built using the macro.
2021-01-12 12:21:22 +01:00
Riccardo Casatta
2b669afd3e Allow not setting a timeout in ElectrumBlockchainConfig
This permits using socks5, which requires a `None` timeout
2021-01-11 14:06:56 +01:00
Steve Myers
8510b2b86e Fix crates.io license info 2021-01-09 10:41:48 -08:00
Alekos Filini
a95a9f754c Merge commit 'refs/pull/260/head' of github.com:bitcoindevkit/bdk 2021-01-05 16:06:32 +01:00
Alekos Filini
3980b90bff Merge commit 'refs/pull/248/head' of github.com:bitcoindevkit/bdk 2021-01-05 16:04:53 +01:00
Alekos Filini
b2bd1b5831 Merge commit 'refs/pull/257/head' of github.com:bitcoindevkit/bdk 2021-01-05 16:01:15 +01:00
Steve Myers
aa31c96821 [ci] Fail 'Build docs' job if warnings 2021-01-04 16:39:11 -08:00
Steve Myers
f74bfdd493 Remove 'cli.rs' module, 'cli-utils' feature and 'repl.rs' example 2020-12-31 09:44:30 -08:00
Steve Myers
5034ca2267 Fix clippy warnings for compact_filters feature 2020-12-30 19:23:35 -08:00
Steve Myers
8094263028 [ci] Fix clippy step to check matrix features 2020-12-30 19:23:00 -08:00
LLFourn
0c9c0716a4 [wallet] Fix details.fees being wrong when change is dust 2020-12-29 16:36:35 +11:00
Alekos Filini
c2b2da7601 Merge commit 'refs/pull/252/head' of github.com:bitcoindevkit/bdk 2020-12-23 18:39:05 +01:00
Alekos Filini
407f14add9 Merge commit 'refs/pull/250/head' of github.com:bitcoindevkit/bdk 2020-12-23 17:48:59 +01:00
LLFourn
656c9c9da8 Use () to indicate a missing blockchain
So that:
1. There are no runtime errors
2. Fewer type annotations are needed
3. Fewer traits and things to document
2020-12-23 14:52:29 +11:00
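A sketch of the pattern (illustrative types, not BDK's actual API): using `()` to mark "no blockchain" means an offline wallet simply has no `sync` method, so misuse is caught at compile time rather than at runtime.

```rust
trait Blockchain {
    fn sync(&self) -> &'static str;
}

struct Electrum;
impl Blockchain for Electrum {
    fn sync(&self) -> &'static str {
        "synced"
    }
}

struct Wallet<B> {
    blockchain: B,
}

impl<B> Wallet<B> {
    fn new(blockchain: B) -> Self {
        Wallet { blockchain }
    }
}

// `sync` exists only when a real backend is present.
impl<B: Blockchain> Wallet<B> {
    fn sync(&self) -> &'static str {
        self.blockchain.sync()
    }
}

fn main() {
    let online = Wallet::new(Electrum);
    assert_eq!(online.sync(), "synced");
    let _offline = Wallet::new(()); // `_offline.sync()` would not even compile
}
```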
LLFourn
a578d20282 Fix incredibly annoying cargo-fmt problem
I must have a newer version of cargo-fmt which stops me from making
commits every time because of this.
2020-12-22 14:37:53 +11:00
Steve Myers
2e222c7ad9 [docs] Add badges for crates.io, mit license. Fix docs.rs badge and link 2020-12-21 20:14:25 +01:00
Alekos Filini
7d6cd6d4f5 Fix the changelog after release v0.2.0 2020-12-21 20:14:23 +01:00
Alekos Filini
e31bd812ed Bump version to 0.2.1-dev 2020-12-21 14:51:49 +01:00
Alekos Filini
76b5273040 Bump version to 0.2.0 2020-12-21 14:16:14 +01:00
Alekos Filini
c910668ce3 Add metadata to Cargo.toml, remove local deps 2020-12-21 14:03:32 +01:00
Alekos Filini
2c7a28337d Add metadata for bdk-testutils and bdk-testutils-macros, bump their version 2020-12-21 13:16:41 +01:00
Alekos Filini
7be193faa5 [testutils-macros] Fix deps features 2020-12-21 13:16:39 +01:00
Alekos Filini
a5f914b56d Add metadata for bdk-macros, bump its version 2020-12-21 12:15:10 +01:00
Alekos Filini
c68716481b Document the development cycle 2020-12-21 12:06:17 +01:00
Alekos Filini
a217494bb1 Bump version to 0.2.0-rc.1 2020-12-18 10:52:10 +01:00
Alekos Filini
63aabe203f Merge commit 'refs/pull/235/head' of github.com:bitcoindevkit/bdk 2020-12-18 10:41:37 +01:00
Steve Myers
b8c6732c74 [ci] Remove unneeded skip step conditionals in CI 2020-12-17 09:52:48 -08:00
Steve Myers
baa919c96a Fix empty checkboxes in PR template 2020-12-17 09:52:47 -08:00
Steve Myers
2325a1fcc2 [docs] Format code in docs with '--config format_code_in_doc_comments=true' 2020-12-16 15:12:51 -08:00
Steve Myers
fb5c70fc64 [docs] Replace all 'allow(missing_docs)' with basic docs 2020-12-16 15:12:49 -08:00
Steve Myers
8cfbf1f0a2 [docs] Add more docs to 'types.rs' 2020-12-16 15:12:47 -08:00
Alekos Filini
713411ea5d [keys] impl ToDescriptorKey for &str 2020-12-16 19:06:02 +01:00
Alekos Filini
7e90657ee1 [descriptor] Make the syntax of descriptor!() more consistent
The syntax now is pretty much the same as the normal descriptor syntax,
with the only difference that modifiers cannot be grouped together (i.e.
`sdv:older(144)` must be turned into `s:d:v:older(144)`).
2020-12-16 19:00:55 +01:00
Riccardo Casatta
635d98c069 [docs] use only sled instead of crate::sled 2020-12-16 12:11:49 +01:00
Riccardo Casatta
680aa2aaf4 [docs] fix NetworkMessage::Ping docs link 2020-12-16 12:11:26 +01:00
Alekos Filini
5f373180ff Merge commit 'refs/pull/223/head' of github.com:bitcoindevkit/bdk 2020-12-16 11:11:38 +01:00
Alekos Filini
931a110e4e Merge commit 'refs/pull/229/head' of github.com:bitcoindevkit/bdk 2020-12-16 10:48:10 +01:00
Riccardo Casatta
d2aac4848c always build docs and create artifacts, publish only on master 2020-12-16 10:16:45 +01:00
Steve Myers
148e8c6088 [docs] Add docs to the 'wallet' module 2020-12-15 15:12:32 -08:00
Steve Myers
1d1d539154 [ci] Fix publishing coverage to codecov.io 2020-12-15 13:36:36 -08:00
Evgenii P
09730c0898 Take ID into account in SignersContainerKey's PartialEq impl 2020-12-15 22:40:07 +07:00
Alekos Filini
6d9472793c Merge commit 'refs/pull/228/head' of github.com:bitcoindevkit/bdk 2020-12-15 14:33:59 +01:00
Alekos Filini
eadf50042c [wallet] Add tests for check_nsequence_rbf and check_nlocktime 2020-12-15 12:01:44 +01:00
Alekos Filini
322122afc8 [wallet] Set the correct nSequence when RBF and OP_CSV are used
This commit also fixes the timelock comparing logic in the policy module, since
the rules are different for absolute (OP_CLTV) and relative (OP_CSV) timelocks.

Fixes #215
2020-12-15 12:01:41 +01:00
Evgenii P
5315c3ef25 rustfmt 2020-12-15 11:36:26 +07:00
Evgenii P
c58236fcd7 Fix SignersContainer::find to filter out incorrect IDs 2020-12-15 11:36:26 +07:00
Evgenii P
2658a9b05a Fix SignersContainerKey PartialOrd to respect the ID 2020-12-15 11:36:26 +07:00
Evgenii P
c075183a7b Revert replacing BTreeMap with HashMap in SignersContainer 2020-12-15 11:35:34 +07:00
LLFourn
9b31ae9153 Fix doc comment fallout from s/script type/keychain 2020-12-15 08:39:19 +11:00
Alekos Filini
1713d621d4 Rename ScriptType to KeychainKind
This avoids confusion with the "type of script".
2020-12-14 17:14:24 +01:00
Alekos Filini
7adaaf227c [ci] Ignore empty nightly docs commits instead of failing 2020-12-14 15:16:38 +01:00
Alekos Filini
4ede4a4ad0 Merge commit 'refs/pull/222/head' of github.com:bitcoindevkit/bdk 2020-12-14 11:44:06 +01:00
Alekos Filini
c83cec3777 Merge commit 'refs/pull/221/head' of github.com:bitcoindevkit/bdk 2020-12-14 11:27:51 +01:00
Alekos Filini
0ef0b45745 Merge commit 'refs/pull/224/head' of github.com:bitcoindevkit/bdk 2020-12-14 11:18:51 +01:00
Evgenii P
351b656a82 Use unstable sort by key for performance 2020-12-14 16:27:54 +07:00
Alekos Filini
6c768e5388 Add the pull request template 2020-12-14 10:21:22 +01:00
Steve Myers
f8d3cdca9f [docs] Add experimental warning to compact_filters and policy modules 2020-12-13 21:04:17 -08:00
Steve Myers
0f2dc05c08 [docs] Add docs to the 'descriptor' module 2020-12-13 20:57:28 -08:00
Steve Myers
4e771d6546 [docs] Add docs to the 'template' module 2020-12-13 20:41:32 -08:00
Steve Myers
60e5cf1f8a [docs] Add docs to the 'policy' module 2020-12-13 20:40:23 -08:00
Evgenii P
641d9554b1 Ignore broken tests. (#225) 2020-12-14 10:17:12 +07:00
Evgenii P
95af38a01d rustfmt 2020-12-14 01:03:14 +07:00
Evgenii P
3ceaa33de0 Add unit tests for SignersContainer 2020-12-14 01:03:14 +07:00
Evgenii P
5d190aa87d Remove debug output 2020-12-14 01:02:48 +07:00
Evgenii P
20e0a4d421 Replace BTreeMap with a HashMap 2020-12-13 18:37:27 +07:00
Alekos Filini
010b7eed97 Merge commit 'refs/pull/218/head' of github.com:bitcoindevkit/bdk 2020-12-11 16:31:17 +01:00
Evgenii P
c9a05c0deb Fix the REPL example to have optional esplora 2020-12-11 22:19:08 +07:00
Evgenii P
7d7b78534a Remove unused macro imports 2020-12-11 22:19:08 +07:00
Evgenii P
ff7ba04180 Make "esplora" feature optional for REPL binary 2020-12-11 22:19:07 +07:00
Alekos Filini
c0a92bd084 [keys] Replace (Fingerprint, DerivationPath) with KeySource 2020-12-11 11:16:41 +01:00
Alekos Filini
1a90832f3a [docs] Add the docs to the keys module 2020-12-11 11:16:39 +01:00
Alekos Filini
9bafdfe2d4 [docs] Various fixes to the docs 2020-12-11 11:16:38 +01:00
Alekos Filini
a1db9f633b [ci] Test the examples in README.md 2020-12-11 11:16:36 +01:00
Steve Myers
8d6f67c764 Add warn and TODOs for missing_docs and add lib.rs docs 2020-12-08 15:57:31 -08:00
Steve Myers
602ae3d63a Add TODOs for missing_docs 2020-12-07 18:25:16 -08:00
Steve Myers
3491bfbf30 Fix README.md examples 2020-12-07 18:19:54 -08:00
Steve Myers
400b4a85f3 Fix unused import warning and docs link warning 2020-12-07 11:28:25 -08:00
Alekos Filini
aed2414cad Merge commit 'refs/pull/214/head' of github.com:bitcoindevkit/bdk 2020-12-07 11:57:32 +01:00
Alekos Filini
592c37897e Merge commit 'refs/pull/213/head' of github.com:bitcoindevkit/bdk 2020-12-07 11:57:03 +01:00
Alekos Filini
eef59e463d Merge commit 'refs/pull/210/head' of github.com:bitcoindevkit/bdk 2020-12-07 11:21:21 +01:00
Alekos Filini
8d9365099e Merge commit 'refs/pull/208/head' of github.com:bitcoindevkit/bdk 2020-12-07 11:09:40 +01:00
Riccardo Casatta
46092a200a [docs] database/any.rs 2020-12-05 13:26:00 +01:00
Riccardo Casatta
95bfe7c983 [docs] types.rs 2020-12-05 13:25:58 +01:00
Riccardo Casatta
8b1a9d2518 [docs] descriptor/error.rs 2020-12-05 13:25:58 +01:00
Riccardo Casatta
9028d2a16a [docs] compact_filters/mod.rs 2020-12-05 13:25:57 +01:00
Riccardo Casatta
87eebe466f [docs] error.rs 2020-12-05 13:25:47 +01:00
Alekos Filini
ee854b9d73 Merge commit 'refs/pull/209/head' of github.com:bitcoindevkit/bdk 2020-12-04 11:57:44 +01:00
Riccardo Casatta
81519555cf generalize the impl_error! macro so it can be used for other error types 2020-12-04 11:23:01 +01:00
Riccardo Casatta
586b874a19 Remove EsploraHeader json in favor of raw hex block header 2020-12-04 11:04:31 +01:00
Steve Myers
364b47bfcb Update cli module to use StructOpt and add docs 2020-12-03 16:18:47 -08:00
LLFourn
8dcb75dfa4 Replace UTXO::is_internal with script_type
This means less conversion and logic mapping from bool to ScriptType and
back again.
2020-12-04 10:46:25 +11:00
Alekos Filini
4aac833073 [ci] Build and publish nightly docs 2020-12-03 19:05:06 +01:00
Steve Myers
2e7f98a371 Fix docs 2020-12-02 16:57:59 -08:00
Riccardo Casatta
a89dd85833 allow missing docs on self-explanatory variants 2020-12-02 14:19:46 -08:00
Riccardo Casatta
a766441fe0 missing docs for esplora.rs (also remove useless pubs) 2020-12-02 14:19:41 -08:00
Riccardo Casatta
68db07b2e3 Missing docs for electrum.rs 2020-12-02 14:19:35 -08:00
Alekos Filini
6b5c3bca82 [changelog] Update CHANGELOG.md to document PSBT_GLOBAL_XPUB
Log the changes made in PR #200
2020-12-01 16:43:41 +01:00
Alekos Filini
5d352ecb63 [wallet] Add tests for TxBuilder::add_global_xpubs() 2020-12-01 16:43:40 +01:00
Alekos Filini
ebfe5db0c3 [wallet] Add a flag to fill-in PSBT_GLOBAL_XPUB 2020-12-01 16:43:38 +01:00
Alekos Filini
e1a59336f8 [cli] Add a flag to build PSBTs for offline signers
The `--offline_signer` flag forces the addition of `non_witness_utxo` and the full
witness and redeem script for every output, which makes it easier for the signer
to identify the change output.

Closes #199
2020-12-01 14:53:00 +01:00
Alekos Filini
59482f795b [blockchain] Fix clippy warnings 2020-12-01 14:41:59 +01:00
LLFourn
67957a93b9 [wallet] Add wallet.network() 2020-12-01 13:29:20 +11:00
Alekos Filini
9073f761d8 Merge commit 'refs/pull/189/head' of github.com:bitcoindevkit/bdk 2020-11-30 15:38:09 +01:00
Alekos Filini
d6ac752b65 Merge commit 'refs/pull/191/head' of github.com:bitcoindevkit/bdk 2020-11-30 15:17:09 +01:00
Riccardo Casatta
6d1d5d5f57 use updated electrum-client 2020-11-30 13:25:23 +01:00
Alekos Filini
7425985850 Merge commit 'refs/pull/188/head' of github.com:bitcoindevkit/bdk 2020-11-24 11:14:47 +01:00
Alekos Filini
93afdc599c Switch to miniscript from crates.io 2020-11-24 10:07:37 +01:00
Alekos Filini
4f6e3a4f68 Update tiny-bip39 to v0.8
Fixes #185
2020-11-24 10:01:42 +01:00
Steve Myers
6eac2ca4cf Fix typo in CONTRIBUTING.md 2020-11-23 21:40:40 -08:00
Steve Myers
790fd52abe Add CHANGELOG.md 2020-11-23 20:57:23 -08:00
LLFourn
dd35903660 Remove trait bounds on Wallet struct
see: https://github.com/rust-lang/api-guidelines/issues/6
2020-11-24 12:40:58 +11:00
LLFourn
acc0ae14ec [wallet] Eagerly finalize inputs
If we know the final witness/scriptsig for an input we should add it
right away to the PSBT. Before, if we couldn't finalize any of them we
finalized none of them.
2020-11-23 16:07:50 +11:00
LLFourn
d2490d9ce3 Fix to at least bitcoin ^0.25.2
And fix the fallout.
2020-11-23 15:06:13 +11:00
Alekos Filini
196c2f5450 Merge commit 'refs/pull/172/head' of github.com:bitcoindevkit/bdk 2020-11-20 12:06:41 +01:00
Alekos Filini
8eaf377d2f Merge commit 'refs/pull/184/head' of github.com:bitcoindevkit/bdk 2020-11-20 11:58:31 +01:00
Riccardo Casatta
73326068f8 Use dirs-next instead of dirs since the latter is unmaintained 2020-11-19 17:57:59 +01:00
Justin Moon
9e2b2d04ba More consistent references with 'signers' variables 2020-11-19 10:27:34 -06:00
Justin Moon
b1b2f2abd6 [wallet] Don't wrap SignersContainer arguments in Arc 2020-11-19 10:27:33 -06:00
Alekos Filini
fc3b6ad0b9 Merge commit 'refs/pull/169/head' of github.com:bitcoindevkit/bdk 2020-11-19 15:41:17 +01:00
Riccardo Casatta
dbfa0506db Add scheduled audit check in CI 2020-11-19 15:18:04 +01:00
Alekos Filini
0edcc83c13 [ci] Generate a different cache key for every job 2020-11-19 13:59:19 +01:00
Riccardo Casatta
25bde82048 pin cc version because the latest breaks the rocksdb build 2020-11-19 13:11:27 +01:00
Justin Moon
f9d3467397 [wallet] Add witness and redeem scripts to PSBT outputs 2020-11-18 11:40:34 -06:00
Riccardo Casatta
c9079a7292 Allow to set concurrency in Esplora config and optionally pass it in repl 2020-11-18 11:55:20 +01:00
Riccardo Casatta
4c59809f8e Make esplora call in parallel 2020-11-18 11:08:19 +01:00
Alekos Filini
fe7ecd3dd2 Merge commit 'refs/pull/167/head' of github.com:bitcoindevkit/bdk 2020-11-18 10:44:54 +01:00
Alekos Filini
a601337e0c Merge commit 'refs/pull/166/head' of github.com:bitcoindevkit/bdk 2020-11-18 10:31:51 +01:00
Riccardo Casatta
ae16c8b602 fix typo 2020-11-18 09:27:01 +01:00
Alekos Filini
6f4d2846d3 [descriptor] Add support for sortedmulti in descriptor! 2020-11-17 23:57:33 +01:00
Alekos Filini
7a42c5e095 Switch to "mainline" rust-miniscript 2020-11-17 23:57:28 +01:00
Riccardo Casatta
b79fa27aa4 Remove unused variant HeaderParseFail 2020-11-17 18:54:34 +01:00
Riccardo Casatta
8dfbbf2763 Require esplora feature for repl example 2020-11-17 16:47:58 +01:00
Riccardo Casatta
42480ea37b Bring less data around 2020-11-17 16:38:19 +01:00
Riccardo Casatta
02c0ad2fca eagerly unwrap height option, save one collect 2020-11-17 16:37:10 +01:00
Riccardo Casatta
16fde66c6a use flatten instead of unwrap_or 2020-11-17 15:24:26 +01:00
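For illustration, the substitution looks like this on a toy value:

```rust
fn main() {
    let nested: Option<Option<u32>> = Some(Some(3));
    // `flatten` removes one level of nesting, replacing patterns like
    // `nested.unwrap_or(None)`.
    assert_eq!(nested.flatten(), Some(3));
    let inner_none: Option<Option<u32>> = Some(None);
    assert_eq!(inner_none.flatten(), None);
}
```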
Riccardo Casatta
2844ddec63 avoid a max() call by checking minus or equal 2020-11-17 15:20:33 +01:00
Riccardo Casatta
7a58d3dd7a Use filter_map instead of filter and map 2020-11-17 15:16:18 +01:00
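The same kind of change, sketched on toy data (not the original sync code):

```rust
fn main() {
    let inputs = ["1", "not a number", "3"];
    // One pass with `filter_map` instead of `.filter(...).map(...)`:
    // parse each item and keep only the successes.
    let parsed: Vec<u32> = inputs.iter().filter_map(|s| s.parse().ok()).collect();
    assert_eq!(parsed, vec![1, 3]);
}
```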
Riccardo Casatta
4d1617f4e0 use proper type for EsploraHeader, make conversion to BlockHeader infallible 2020-11-17 15:08:04 +01:00
Riccardo Casatta
3c8b8e4fca conditionally remove cli args according to enabled feature 2020-11-17 15:00:18 +01:00
Riccardo Casatta
2f39a19b01 Use our Instant struct to be compatible with wasm 2020-11-17 14:25:27 +01:00
Riccardo Casatta
d9985c4bbb [examples] support esplora blockchain source in repl 2020-11-17 09:39:44 +01:00
Riccardo Casatta
c5dba115a0 [sync] Improve sync
Make every request in a batch, to save round-trip times.
Fetch the block header timestamp to populate the timestamp field in transactions.
Remove listunspent requests because we can compute the unspent set from our history.
2020-11-17 09:39:43 +01:00
LLFourn
35579cb216 [wallet] Build output lookup inside complete transaction
To avoid the caller having to do it.
2020-11-17 15:11:47 +11:00
LLFourn
fcc408f346 [wallet] Add test that shwpkh populates witness_utxo 2020-11-17 15:11:47 +11:00
LLFourn
004f81b0a8 [wallet] Make coin_select return UTXOs instead of TxIns
- We want to keep the metadata in the UTXO around for things later
- It is easier to turn a UTXO into a TxIn outside
2020-11-17 15:11:47 +11:00
Steve Myers
13c1170304 [ci] Remove actions-rs, cleanup names 2020-11-16 18:46:16 -08:00
Alekos Filini
a30ad49f63 [wallet] Use the branch-and-bound cs by default
Keep the `LargestFirst` coin selection for the tests, to make them more
predictable.
2020-11-16 14:08:04 +01:00
Riccardo Casatta
755d76bf54 remove unneeded pub modifier 2020-11-16 12:11:37 +01:00
Riccardo Casatta
25da54d5ec ignore .idea 2020-11-16 12:09:14 +01:00
Riccardo Casatta
4f99c77abe [sync] check last derivation in cache to avoid recomputation 2020-11-16 12:06:48 +01:00
Alekos Filini
ac18fb119f [keys] Add a shortcut to generate keys with the default options 2020-11-13 17:43:57 +01:00
Alekos Filini
f2edee0e2e [keys] impl ToDescriptorKey for GeneratedKey 2020-11-13 17:29:01 +01:00
Alekos Filini
f4affbd039 [keys] impl GeneratableKey for bitcoin::PrivateKey 2020-11-13 17:27:19 +01:00
Alekos Filini
d269c9e0b2 [cli] Split the internal and external policy paths 2020-11-13 15:00:22 +01:00
Alekos Filini
7c80aec454 [wallet] Take both spending policies into account in create_tx
This allows specifying different "policy paths" for the internal and external
descriptors, and adds additional checks to make sure they are compatible (i.e.
the timelocks are expressed in the same unit).

It's still suboptimal, since the `n_sequence`s are per-input and not per-transaction,
so it should be possible to spend different inputs with different, otherwise
incompatible, `CSV` timelocks, but that requires a larger refactor that
can be done in a future patch.

This commit also tries to clarify how the "policy path" should be used by adding
a fairly detailed example to the docs.
2020-11-13 12:55:42 +01:00
Daniela Brozzoni
9f31ad1bc8 [wallet] Replace must_use with required in coin selection 2020-11-13 12:42:07 +01:00
Daniela Brozzoni
c43f201e35 [wallet] Add tests for BranchAndBoundCoinSelection::single_random_draw 2020-11-13 12:42:06 +01:00
Daniela Brozzoni
23824321ba [wallet] Add tests for BranchAndBoundCoinSelection::bnb 2020-11-13 12:42:06 +01:00
Daniela Brozzoni
be91997d84 [wallet] Add tests for BranchAndBoundCoinSelection::coin_select 2020-11-13 12:42:06 +01:00
Daniela Brozzoni
99060c5627 [wallet] Add Branch and Bound coin selection 2020-11-13 12:42:06 +01:00
Daniela Brozzoni
a86706d1a6 [wallet] Use TXIN_DEFAULT_WEIGHT constant in coin selection
Replace all the occurrences of `serialize(&txin)`
with TXIN_DEFAULT_WEIGHT.
2020-11-13 12:42:06 +01:00
Alekos Filini
36c5a4dc0c [wallet] Split send_all into set_single_recipient and drain_wallet
Previously `send_all` was particularly confusing, because when used on a
`create_tx` it implied two things:
- spend everything that's in the wallet (if no utxos are specified)
- don't create a change output

But when used on a `bump_fee` it only meant to not add a change output
and instead reduce the only existing output to increase the fee.

This has now been split into two separate options that should hopefully
make it more clear to use, as described in #142.

Additionally, `TxBuilder` now has a "context" that makes some flags available
only when they are actually meaningful, either for `create_tx` or for
`bump_fee`.

Closes #142.
2020-11-05 12:06:43 +01:00
Alekos Filini
f67bfe7bfc Merge commit 'refs/pull/156/head' of github.com:bitcoindevkit/bdk 2020-11-05 11:44:29 +01:00
LLFourn
796f9f5a70 Make Signer and AddressValidator Send and Sync 2020-11-03 16:16:32 +11:00
LLFourn
3b3659fc0c Remove redundant Box around signers 2020-11-03 16:06:43 +11:00
LLFourn
5784a95e48 Remove redundant Box around address validators 2020-11-03 16:06:43 +11:00
Steve Myers
f7499cb65d [ci] test with all features enabled in single run 2020-11-02 19:06:41 -08:00
Steve Myers
40bf9f8b79 [ci] Add code coverage github actions workflow 2020-11-02 13:18:52 -08:00
Riccardo Casatta
30f1ff5ab5 [repl] add max_addresses param in sync 2020-10-30 15:04:09 +01:00
Alekos Filini
e6c2823a36 Merge commit 'refs/pull/146/head' of github.com:bitcoindevkit/bdk 2020-10-29 11:53:22 +01:00
Steve Myers
4a75f96d35 [ci] Enable clippy for stable and tests by default 2020-10-28 21:48:40 -07:00
Steve Myers
4f7355ec82 [ci] Fix all-keys and cli-utils tests 2020-10-28 21:34:04 -07:00
Steve Myers
7b9df5bbe5 [ci] Enable clippy and test for optional features 2020-10-28 17:51:03 -07:00
Steve Myers
8d04128c74 [ci] Fix or ignore clippy warnings for all optional features except compact_filters 2020-10-28 17:50:12 -07:00
Murch
457e70e70f Rename get_must_may_use_utxos to preselect_utxos 2020-10-27 23:24:03 -04:00
Murch
84aee3baab Rename may_use_utxos to optional_utxos 2020-10-27 23:24:03 -04:00
Alekos Filini
297e92a829 Merge commit 'refs/pull/115/head' of github.com:bitcoindevkit/bdk 2020-10-27 11:04:00 +01:00
Steve Myers
8927d68a69 [descriptor] Comment out incomplete ExtractPolicy trait tests 2020-10-26 12:48:31 -07:00
Steve Myers
3a80e87ccb [descriptor] Fix compile errors after rebase 2020-10-26 12:48:27 -07:00
Steve Myers
e31f5306d2 [descriptor] Add descriptor macro tests 2020-10-26 12:48:23 -07:00
Steve Myers
9fa9a304b9 [descriptor] Add get_checksum tests, cleanup tests 2020-10-26 12:48:19 -07:00
Steve Myers
bc0e9c9831 [descriptor] Add ExtractPolicy trait tests 2020-10-26 12:48:15 -07:00
Murch
43a51a1ec3 Rename must_use_utxos to required_utxos 2020-10-26 14:40:44 -04:00
Murch
b2ec6e3683 Rename DumbCS to LargestFirstCoinSelection 2020-10-26 14:20:44 -04:00
LLFourn
8d65581825 Incorporate RBF rules into utxo selection function 2020-10-23 13:54:59 +11:00
LLFourn
a6b70af2fb [wallet] Stop implicitly enforcing manual selection by .add_utxo
This makes it possible to choose a UTXO manually without having to
choose them *all* manually. I introduced the `manually_selected_only`
option to enforce that only manually selected utxos can be used.

To keep the CLI semantics unchanged, `utxos` keeps the old
behaviour by calling `manually_selected_only`.
2020-10-23 13:54:59 +11:00
LLFourn
b87c7c5dc7 [wallet] Make 'unspendable' into a HashSet
to avoid awkwardness later on.
2020-10-23 13:54:59 +11:00
LLFourn
c549281ace [wallet] Replace ChangeSpendPolicy::filter_utxos with a predicate
To make composing it with other filtering conditions easier.
2020-10-23 13:54:59 +11:00
Richard Ulrich
365a91f805 Merging two match expressions for fee calculation 2020-10-22 13:41:26 +02:00
Richard Ulrich
49894ffa6d Implementing review suggestions from afilini 2020-10-22 09:11:58 +02:00
Richard Ulrich
759f6eac43 complying with clippy from the github CI 2020-10-20 18:22:37 +02:00
Richard Ulrich
27890cfcff Allow defining static fees for transactions. Fixes #137 2020-10-20 18:10:59 +02:00
Alekos Filini
872d55cb4c [wallet] Default to SIGHASH_ALL if not specified
Closes #133
2020-10-16 15:40:30 +02:00
Alekos Filini
12635e603f [wallet] Refactor Wallet::bump_fee() 2020-10-16 14:49:05 +02:00
Alekos Filini
a5713a8348 [wallet] Improve CoinSelectionAlgorithm
Implement the improvements described in issue #121.

Closes #121, closes #131.
2020-10-16 14:30:44 +02:00
LLFourn
17f7294c8e [wallet] Make coin_select take may/must use utxo lists
so that in the future you can add a UTXO that you *must* spend and let
the coin selection fill in the rest.

This partially addresses #121
2020-10-16 14:28:22 +02:00
LLFourn
64b4cfe308 Use collect to avoid iter unwrapping Options 2020-10-15 13:41:36 +11:00
Alekos Filini
0caad5f3d9 [blockchain] Fix receiving a coinbase using Electrum/Esplora
Closes #107
2020-10-13 11:56:59 +02:00
Alekos Filini
848b52c50e [keys]: Re-export tiny-bip39
Closes #104
2020-10-13 10:57:40 +02:00
Alekos Filini
100f0aaa0a Bump rust-bitcoin to 0.25, fix Cargo dependencies
Closes #112, closes #113, closes #124
2020-10-13 10:39:48 +02:00
Steve Myers
69ef56cfed [ci] Remove travis.yml 2020-10-12 09:30:20 -07:00
Steve Myers
070d481849 [ci] Fix clippy warnings for 1.47.0 2020-10-10 10:31:08 -07:00
Steve Myers
98803b2573 [ci] Use bitcoindevkit/electrs base image for electrum tests 2020-10-10 10:31:08 -07:00
Steve Myers
aea9abff8a [ci] Fix clippy warnings, enable clippy checks 2020-10-10 10:31:07 -07:00
Steve Myers
6402fd07c2 [ci] Consolidate build, test, clippy jobs 2020-10-10 10:31:07 -07:00
Alekos Filini
8e7b195e93 Add a Discord badge to the README 2020-10-07 10:00:06 +02:00
Steve Myers
56bcbc4aff [ci] add CI github actions 2020-10-05 09:35:54 -07:00
Alekos Filini
1faf0ed0a0 Fix the recovery of a descriptor given a PSBT
This commit upgrades `rust-miniscript` with a fix to only return the prefix that
matches a `hd_keypath` instead of the full derivation path, and then adapts the
signer code accordingly.

This commit closes #108 and #109.
2020-10-02 17:52:11 +02:00
LLFourn
490c88934e [keys] Less convoluted entropy generation
Since const generics aren't in Rust yet, some awkward workarounds
are needed. This improves the workaround for specifying the entropy length.
2020-09-30 20:05:17 +10:00
Steve Myers
eae15563d8 [descriptor] add ToWalletDescriptor trait tests 2020-09-25 22:21:11 -07:00
Alekos Filini
82251a8de4 [keys] Fix entropy generation 2020-09-24 15:59:46 +02:00
Alekos Filini
b294b11c54 [keys] Add a trait for keys that can be generated 2020-09-24 09:53:56 +02:00
Alekos Filini
c93cd1414a [descriptor] Add descriptor templates, add DerivableKey 2020-09-24 09:53:54 +02:00
Alekos Filini
c51ba4a99f [keys] Add a way to restrict the networks in which keys are valid
Thanks to the `ToWalletDescriptor` trait we can also very easily validate the checksum
for descriptors that are loaded from strings, if they contain one. Fixes #20.
2020-09-24 09:53:51 +02:00
Alekos Filini
bc8acaf088 [keys] Take ScriptContext into account when converting keys 2020-09-24 09:53:48 +02:00
Alekos Filini
ab9d964868 [keys] Add BIP39 support 2020-09-24 09:53:46 +02:00
Alekos Filini
751a553925 [descriptor] Improve the descriptor macro, add traits for key and descriptor types 2020-09-24 09:53:42 +02:00
Alekos Filini
9832ecb660 [descriptor] Add a macro to write descriptors from code 2020-09-24 09:53:41 +02:00
willcl-ark
4970d1e522 Add CONTRIBUTING.md
Add a CONTRIBUTING.md file based on a template taken from the
rust-lightning project.
2020-09-23 10:19:53 +01:00
willcl-ark
844820dcfa Prettify README examples on github 2020-09-21 15:32:38 +01:00
Alekos Filini
33a5ba6cd2 [signer] Fix signing for ShWpkh inputs 2020-09-16 17:50:54 +02:00
Alekos Filini
cf2a8bccac [cargo] Add the required rand features for wasm32 2020-09-16 17:30:11 +02:00
Alekos Filini
57ea653f1c [database] Add AnyDatabase and ConfigurableDatabase
This is related to #43
2020-09-15 15:39:15 +02:00
Alekos Filini
5b0fd3bba0 [blockchain] Document AnyBlockchain and ConfigurableBlockchain 2020-09-15 15:38:59 +02:00
Alekos Filini
e5cc8d9529 [blockchain] Add an AnyBlockchain enum to allow switching at runtime
This is related to #43
2020-09-15 12:03:07 +02:00
Alekos Filini
5eee18bed2 [blockchain] Add a trait to create Blockchains from a configuration
This is the first set of changes for #42
2020-09-15 12:03:04 +02:00
Alekos Filini
6094656a54 Update the README 2020-09-14 15:13:43 -07:00
Alekos Filini
10ab293e18 [cargo] Remove the magic alias for repl 2020-09-14 15:13:43 -07:00
Alekos Filini
d7ee38cc52 Rename the library to bdk 2020-09-14 15:13:43 -07:00
Alekos Filini
efdd11762c [blockchain] Simplify the architecture of blockchain traits
Instead of having two traits, `Blockchain` and `OnlineBlockchain` that need
to be implemented by the user, only the relevant one (`OnlineBlockchain`, here
renamed to `Blockchain`) will need to be implemented, since we provide a
blanket implementation for the "marker" trait (previously `Blockchain`, here
renamed to `BlockchainMarker`).

Users of the library will probably never need to implement `BlockchainMarker`
by itself, since we expose the `OfflineBlockchain` type that already does
that and should be good for any "offline" wallet. Still, it's exposed since
they might need to import it to define types with generics.
2020-09-10 10:45:07 +02:00
Alekos Filini
24fcb38565 [repl] Revert back the repl example to use Electrum 2020-09-09 17:06:35 +02:00
Alekos Filini
5d977bc617 Bump version to 0.1.0-beta.1 2020-09-08 15:26:44 +02:00
Alekos Filini
cc07c61b47 Change docs link while we can't publish the crate 2020-09-08 15:24:44 +02:00
Alekos Filini
c4f4f20d8b Improve the README, add examples 2020-09-07 16:33:08 +02:00
LLFourn
6a2d0db674 Remove assumed "/api" prefix from esplora
An Esplora instance is not necessarily hosted under a /api prefix
2020-09-05 14:00:50 +10:00
Alekos Filini
7b58a4ad6f Fix the last_derivation_index calculation
It should be set to `0` if no transactions are found during sync.

Closes #44
2020-09-04 21:53:31 +02:00
Alekos Filini
43cb0331bf Rename the crate to just "magical" 2020-09-04 17:01:33 +02:00
Alekos Filini
ac06e35c49 Add docs for Wallet 2020-09-04 16:29:25 +02:00
Alekos Filini
eee75219e0 Write more docs, make TxBuilder::with_recipients take Scripts 2020-09-04 16:07:41 +02:00
Alekos Filini
7065c1fed6 Write more docs 2020-09-04 11:44:49 +02:00
Alekos Filini
6b9c363937 Write the docs for blockchain::* 2020-09-03 11:36:07 +02:00
Alekos Filini
c0867a6adc General cleanup for the docs 2020-08-31 15:04:27 +02:00
Alekos Filini
d61e974dbe Add the license to every file 2020-08-31 11:48:25 +02:00
Alekos Filini
7a127d0275 [wallet] Add tests for Wallet::sign() 2020-08-30 20:38:24 +02:00
Alekos Filini
ff50087de5 [wallet] Support signing the whole tx instead of individual inputs 2020-08-30 20:38:22 +02:00
Alekos Filini
991db28170 [wallet] Add explicit ordering for the signers 2020-08-30 20:38:20 +02:00
Alekos Filini
f54243fd18 [error] implement std::error::Error 2020-08-30 20:38:17 +02:00
Alekos Filini
895c6b0808 [blockchain] impl OnlineBlockchain for types wrapped in Arc 2020-08-30 20:38:14 +02:00
Alekos Filini
557f7ef8c9 [wallet] Add AddressValidators 2020-08-30 20:36:25 +02:00
Alekos Filini
37a7547e9c [descriptor] Tests for DescriptorMeta::derive_from_psbt_input() 2020-08-30 20:36:23 +02:00
Alekos Filini
21318eb940 [cli] Make the REPL return JSON 2020-08-30 20:36:21 +02:00
Alekos Filini
5777431135 Use miniscript::DescriptorPublicKey
This allows us to remove our custom `ExtendedDescriptor` implementation, since
that is now built directly into miniscript.
2020-08-30 20:36:19 +02:00
Alekos Filini
ddc2bded99 [compact_filters] Add support for Tor 2020-08-30 17:24:04 +02:00
Alekos Filini
77c95b93ac Compact Filters blockchain implementation 2020-08-30 17:23:33 +02:00
Alekos Filini
c12aa3d327 Implement RBF and add a few tests 2020-08-14 12:48:07 +02:00
Alekos Filini
8f8c393f6f [tests] Run Bitcoin Core and Electrs on Travis 2020-08-11 11:31:19 +02:00
Alekos Filini
53b5f23fb2 [tests] Add tests for Wallet::create_tx() 2020-08-11 11:31:11 +02:00
Alekos Filini
9e5023670e [tests] Add a proc macro to generate tests for OnlineBlockchain types 2020-08-10 17:18:17 +02:00
Alekos Filini
c90c752f21 [wallet] Add force_non_witness_utxo() to TxBuilder 2020-08-10 17:18:15 +02:00
Alekos Filini
8d9ccf8d0b [wallet] Allow limiting the use of internal utxos in TxBuilder 2020-08-10 17:18:13 +02:00
Alekos Filini
85090a28eb [wallet] Add RBF and custom versions in TxBuilder 2020-08-10 17:18:11 +02:00
Alekos Filini
0665c9e854 [wallet] TxOrdering, shuffle/bip69 support 2020-08-10 17:18:09 +02:00
Alekos Filini
7005a26fc5 [wallet] Nicer interface for WalletExport 2020-08-10 13:20:48 +02:00
Alekos Filini
f7f99172fe Add a feature to enable the async interface on non-wasm32 platforms
Follow-up to: #28
2020-08-10 11:41:19 +02:00
Alekos Filini
f0a1e670df [examples] Use MemoryDatabase in the compiler example 2020-08-08 09:37:25 +02:00
Alekos Filini
a5188209b2 Update .travis.ci to test the miniscriptc example 2020-08-08 09:27:52 +02:00
Dominik Spicher
462d413b02 [examples] Fix Miniscript variants issue in compiler example
miniscript has extended the `Miniscript` struct to be generic
over a `ScriptContext`. This context is different for the `Sh`
variant (`Legacy`) than for the `Wsh` and `ShWsh` variants
(`Segwitv0`). Therefore, Rust is not happy with the single
`compiled` variable if it is used as an argument for all three
variants.
2020-08-07 16:08:23 +02:00
Dominik Spicher
a581457ba8 [examples] Add missing dependency for compiler example 2020-08-07 16:02:32 +02:00
Dominik Spicher
796a3a5c91 [examples] Fix renamed thresh_m descriptor
In miniscript 1.0, `thresh_m` has been renamed to `multi`
2020-08-07 15:17:59 +02:00
Alekos Filini
08792b2fcd [wallet] Add a type convert fee units, add Wallet::estimate_fee() 2020-08-07 11:23:46 +02:00
Alekos Filini
5f80950971 [export] Implement the wallet import/export format from FullyNoded
This commit closes #31
2020-08-07 10:19:06 +02:00
Alekos Filini
82c7e11bd5 Improve .travis.ci 2020-08-06 19:30:17 +02:00
Alekos Filini
b67bbeb202 [wallet] Refill the address pool whenever necessary 2020-08-06 18:11:07 +02:00
Alekos Filini
7a23b2b558 [wallet] Abstract coin selection in a separate trait 2020-08-06 16:56:41 +02:00
Alekos Filini
499e579824 [wallet] Add a TxBuilder struct to simplify create_tx()'s interface 2020-08-06 14:28:22 +02:00
Alekos Filini
927c2f37b9 [wallet] Abstract, multi-platform datetime utils 2020-08-06 14:28:20 +02:00
Alekos Filini
0954049df0 [wallet] Cleanup, remove unnecessary mutable references 2020-08-06 14:28:12 +02:00
Alekos Filini
5683a83288 [repl] Expose list_transactions() in the REPL 2020-07-21 18:37:15 +02:00
Alekos Filini
4fcf7ac89e Make the blockchain interface async again on wasm32-unknown-unknown
The procedural macro `#[maybe_async]` makes a method or every method of a trait
"async" whenever the target_arch is `wasm32`, and leaves them untouched on
every other platform.

The macro `maybe_await!($e:expr)` can be used to call `maybe_async` methods on
multi-platform code: it expands to `$e` on non-wasm32 platforms and to
`$e.await` on wasm32.

The macro `await_or_block!($e:expr)` can be used to contain async code as much
as possible: it expands to `tokio::runtime::Runtime::new().unwrap().block_on($e)`
on non-wasm32 platforms, and to `$e.await` on wasm32.
2020-07-20 20:02:24 +02:00
Alekos Filini
4a51d50e1f Update miniscript to version 1.0 2020-07-19 19:31:40 +02:00
Alekos Filini
123984e99d Remove async, upgrade electrum-client 2020-07-17 09:44:01 +02:00
Alekos Filini
c3923b66f8 Remove unused file 2020-06-30 17:21:56 +02:00
79 changed files with 23426 additions and 4285 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!-- Steps or code to reproduce the behavior. -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Build environment**
- BDK tag/commit: <!-- e.g. v0.13.0, 3a07614 -->
- OS+version: <!-- e.g. ubuntu 20.04.01, macOS 12.0.1, windows -->
- Rust/Cargo version: <!-- e.g. 1.56.0 -->
- Rust/Cargo target: <!-- e.g. x86_64-apple-darwin, x86_64-unknown-linux-gnu, etc. -->
**Additional context**
<!-- Add any other context about the problem here. -->


@@ -0,0 +1,77 @@
---
name: Summer of Bitcoin Project
about: Template to suggest a new https://www.summerofbitcoin.org/ project.
title: ''
labels: 'summer-of-bitcoin'
assignees: ''
---
<!--
## Overview
Project ideas are scoped for a university-level student with a basic background in CS and bitcoin
fundamentals - achievable over 12-weeks. Below are just a few types of ideas:
- Low-hanging fruit: Relatively short projects with clear goals; requires basic technical knowledge
and minimal familiarity with the codebase.
- Core development: These projects derive from the ongoing work from the core of your development
team. The list of features and bugs is never-ending, and help is always welcome.
- Risky/Exploratory: These projects push the scope boundaries of your development effort. They
might require expertise in an area not covered by your current development team. They might take
advantage of a new technology. There is a reasonable chance that the project might be less
successful, but the potential rewards make it worth the attempt.
- Infrastructure/Automation: These projects are the code that your organization uses to get its
development work done; for example, projects that improve the automation of releases, regression
tests and automated builds. This is a category where a Summer of Bitcoin student can be really
helpful, doing work that the development team has been putting off while they focus on core
development.
- Quality Assurance/Testing: Projects that work on and test your project's software development
process. Additionally, projects that involve a thorough test and review of individual PRs.
- Fun/Peripheral: These projects might not be related to the current core development focus, but
create new innovations and new perspectives for your project.
-->
**Description**
<!-- Description: 3-7 sentences describing the project background and tasks to be done. -->
**Expected Outcomes**
<!-- Short bullet list describing what is to be accomplished -->
**Resources**
<!-- 2-3 reading materials for candidate to learn about the repo, project, scope etc -->
<!-- Recommended reading such as a developer/contributor guide -->
<!-- [Another example a paper citation](https://arxiv.org/pdf/1802.08091.pdf) -->
<!-- [Another example an existing issue](https://github.com/opencv/opencv/issues/11013) -->
<!-- [An existing related module](https://github.com/opencv/opencv_contrib/tree/master/modules/optflow) -->
**Skills Required**
<!-- 3-4 technical skills that the candidate should know -->
<!-- hands on experience with git -->
<!-- mastery plus experience coding in C++ -->
<!-- basic knowledge in matrix and tensor computations, college course work in cryptography -->
<!-- strong mathematical background -->
<!-- Bonus - has experience with React Native. Best if you have also worked with OSSFuzz -->
**Mentor(s)**
<!-- names of mentor(s) for this project go here -->
**Difficulty**
<!-- Easy, Medium, Hard -->
**Competency Test (optional)**
<!-- 2-3 technical tasks related to the project idea or repository you'd like a candidate to
perform in order to demonstrate competency, good first bugs, warm-up exercises -->
<!-- ex. Read the instructions here to get Bitcoin core running on your machine -->
<!-- ex. pick an issue labeled as “newcomer” in the repository, and send a merge request to the
repository. You can also suggest some other improvement that we did not think of yet, or
something that you find interesting or useful -->
<!-- ex. fixes for coding style are usually easy to do, and are good issues for first time
contributions for those learning how to interact with the project. After you are done with the
coding style issue, try making a different contribution. -->
<!-- ex. setup a full Debian packaging development environment and learn the basics of Debian
packaging. Then identify and package the missing dependencies to package Specter Desktop -->
<!-- ex. write a pull parser for CSV files. You'll be judged by the decisions to store the parser
state and how flexible it is to wrap this parser in other scenarios. -->
<!-- ex. Stretch Goal: Implement some basic metaprogram/app to prove you're very familiar with BDK.
Be prepared to make adjustments as we judge your solution. -->

.github/pull_request_template.md

@@ -0,0 +1,30 @@
<!-- You can erase any parts of this template not applicable to your Pull Request. -->
### Description
<!-- Describe the purpose of this PR, what's being adding and/or fixed -->
### Notes to the reviewers
<!-- In this section you can include notes directed to the reviewers, like explaining why some parts
of the PR were done in a specific way -->
### Checklists
#### All Submissions:
* [ ] I've signed all my commits
* [ ] I followed the [contribution guidelines](https://github.com/bitcoindevkit/bdk/blob/master/CONTRIBUTING.md)
* [ ] I ran `cargo fmt` and `cargo clippy` before committing
#### New Features:
* [ ] I've added tests for the new feature
* [ ] I've added docs for the new feature
* [ ] I've updated `CHANGELOG.md`
#### Bugfixes:
* [ ] This pull request breaks the existing API
* [ ] I've added tests to reproduce the issue which are now passing
* [ ] I'm linking the issue being fixed by this PR

.github/workflows/audit.yml

@@ -0,0 +1,19 @@
name: Audit
on:
push:
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
schedule:
- cron: '0 0 * * 0' # Once per week
jobs:
security_audit:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions-rs/audit-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/code_coverage.yml

@@ -0,0 +1,37 @@
on: [push]
name: Code Coverage
jobs:
Codecov:
name: Code Coverage
runs-on: ubuntu-latest
env:
CARGO_INCREMENTAL: '0'
RUSTFLAGS: '-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off'
RUSTDOCFLAGS: '-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off'
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install rustup
run: curl https://sh.rustup.rs -sSf | sh -s -- -y
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Test
run: cargo test --features all-keys,compiler,esplora,ureq,compact_filters --no-default-features
- id: coverage
name: Generate coverage
uses: actions-rs/grcov@v0.1.5
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
file: ${{ steps.coverage.outputs.report }}
directory: ./coverage/reports/

.github/workflows/cont_integration.yml

@@ -0,0 +1,166 @@
on: [push, pull_request]
name: CI
jobs:
build-test:
name: Build and test
runs-on: ubuntu-latest
strategy:
matrix:
rust:
- version: 1.56.0 # STABLE
clippy: true
- version: 1.46.0 # MSRV
features:
- default
- minimal
- all-keys
- minimal,use-esplora-ureq
- key-value-db
- electrum
- compact_filters
- esplora,ureq,key-value-db,electrum
- compiler
- rpc
- verify
- async-interface
- use-esplora-reqwest
- sqlite
steps:
- name: checkout
uses: actions/checkout@v2
- name: Generate cache key
run: echo "${{ matrix.rust.version }} ${{ matrix.features }}" | tee .cache_key
- name: cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('.cache_key') }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default ${{ matrix.rust.version }}
- name: Set profile
run: rustup set profile minimal
- name: Add clippy
if: ${{ matrix.rust.clippy }}
run: rustup component add clippy
- name: Update toolchain
run: rustup update
- name: Build
run: cargo build --features ${{ matrix.features }} --no-default-features
- name: Clippy
if: ${{ matrix.rust.clippy }}
run: cargo clippy --all-targets --features ${{ matrix.features }} --no-default-features -- -D warnings
- name: Test
run: cargo test --features ${{ matrix.features }} --no-default-features
test-readme-examples:
name: Test README.md examples
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v2
- name: cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-test-md-docs-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Test
run: cargo test --features test-md-docs --no-default-features -- doctest::ReadmeDoctests
test-blockchains:
name: Blockchain ${{ matrix.blockchain.features }}
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
blockchain:
- name: electrum
features: test-electrum
- name: rpc
features: test-rpc
- name: esplora
features: test-esplora,use-esplora-reqwest
- name: esplora
features: test-esplora,use-esplora-ureq
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Setup rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- name: Test
run: cargo test --no-default-features --features ${{ matrix.blockchain.features }} ${{ matrix.blockchain.name }}::bdk_blockchain_tests
check-wasm:
name: Check WASM
runs-on: ubuntu-20.04
env:
CC: clang-10
CFLAGS: -I/usr/include
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ github.job }}-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
# Install a recent version of clang that supports wasm32
- run: wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add - || exit 1
- run: sudo apt-add-repository "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-10 main" || exit 1
- run: sudo apt-get update || exit 1
- run: sudo apt-get install -y libclang-common-10-dev clang-10 libc6-dev-i386 || exit 1
- name: Set default toolchain
run: rustup default 1.56.0 # STABLE
- name: Set profile
run: rustup set profile minimal
- name: Add target wasm32
run: rustup target add wasm32-unknown-unknown
- name: Update toolchain
run: rustup update
- name: Check
run: cargo check --target wasm32-unknown-unknown --features use-esplora-reqwest --no-default-features
fmt:
name: Rust fmt
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Add rustfmt
run: rustup component add rustfmt
- name: Update toolchain
run: rustup update
- name: Check fmt
run: cargo fmt --all -- --config format_code_in_doc_comments=true --check

.github/workflows/nightly_docs.yml

@@ -0,0 +1,61 @@
name: Publish Nightly Docs
on: [push, pull_request]
jobs:
build_docs:
name: Build docs
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Setup cache
uses: actions/cache@v2
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: nightly-docs-${{ hashFiles('**/Cargo.toml','**/Cargo.lock') }}
- name: Set default toolchain
run: rustup default nightly
- name: Set profile
run: rustup set profile minimal
- name: Update toolchain
run: rustup update
- name: Build docs
run: cargo rustdoc --verbose --features=compiler,electrum,esplora,ureq,compact_filters,key-value-db,all-keys,sqlite -- --cfg docsrs -Dwarnings
- name: Upload artifact
uses: actions/upload-artifact@v2
with:
name: built-docs
path: ./target/doc/*
publish_docs:
name: 'Publish docs'
if: github.ref == 'refs/heads/master'
needs: [build_docs]
runs-on: ubuntu-latest
steps:
- name: Checkout `bitcoindevkit.org`
uses: actions/checkout@v2
with:
ssh-key: ${{ secrets.DOCS_PUSH_SSH_KEY }}
repository: bitcoindevkit/bitcoindevkit.org
ref: master
- name: Create directories
run: mkdir -p ./static/docs-rs/bdk/nightly
- name: Remove old latest
run: rm -rf ./static/docs-rs/bdk/nightly/latest
- name: Download built docs
uses: actions/download-artifact@v1
with:
name: built-docs
path: ./static/docs-rs/bdk/nightly/latest
- name: Configure git
run: git config user.email "github-actions@github.com" && git config user.name "github-actions"
- name: Commit
continue-on-error: true # If there's nothing to commit this step fails, but it's fine
run: git add ./static && git commit -m "Publish autogenerated nightly docs"
- name: Push
run: git push origin master

.gitignore

@@ -2,3 +2,4 @@
Cargo.lock
*.swp
.idea

.travis.yml

@@ -1,22 +0,0 @@
language: rust
rust:
- stable
# - 1.31.0
# - 1.22.0
before_script:
- rustup component add rustfmt
script:
- cargo fmt -- --check --verbose
- cargo test --verbose --all
- cargo build --verbose --all
- cargo build --verbose --no-default-features --features=minimal
- cargo build --verbose --no-default-features --features=minimal,esplora
- cargo build --verbose --no-default-features --features=key-value-db
- cargo build --verbose --no-default-features --features=electrum
notifications:
email: false
before_cache:
- rm -rf "$TRAVIS_HOME/.cargo/registry/src"
cache: cargo

CHANGELOG.md

@@ -0,0 +1,406 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [v0.15.0] - [v0.14.0]
- Overhauled sync logic for electrum and esplora.
- Unify ureq and reqwest esplora backends to have the same configuration parameters. This means reqwest now has a timeout parameter and ureq has a concurrency parameter.
- Fixed esplora fee estimation.
## [v0.14.0] - [v0.13.0]
- Changed the BIP39 implementation dependency in `keys::bip39` from tiny-bip39 to rust-bip39.
- Add new method on the `TxBuilder` to embed data in the transaction via `OP_RETURN`. To allow this, the dust check was fixed to apply only to spendable outputs.
- Update the `Database` trait to store the last sync timestamp and block height
- Rename `ConfirmationTime` to `BlockTime`
## [v0.13.0] - [v0.12.0]
- Exposed `get_tx()` method from `Database` to `Wallet`.
## [v0.12.0] - [v0.11.0]
- Activate `miniscript/use-serde` feature to allow consumers of the library to access it via the re-exported `miniscript` crate.
- Add support for proxies in `EsploraBlockchain`
- Added `SqliteDatabase` that implements `Database` backed by a sqlite database using `rusqlite` crate.
## [v0.11.0] - [v0.10.0]
- Added `flush` method to the `Database` trait to explicitly flush the latest database changes to disk.
## [v0.10.0] - [v0.9.0]
- Added `RpcBlockchain` to the `AnyBlockchain` struct to allow using the RPC backend wherever `AnyBlockchain` is used (e.g. `bdk-cli`)
- Removed hard dependency on `tokio`.
### Wallet
- Removed and replaced `set_single_recipient` with more general `drain_to` and replaced `maintain_single_recipient` with `allow_shrinking`.
### Blockchain
- Removed `stop_gap` from the `Blockchain` trait and added it only to the `ElectrumBlockchain` and `EsploraBlockchain` structs.
- Added a `ureq` backend for use when the `async-interface` feature is disabled and the target is not WASM. `ureq` is a blocking HTTP client.
## [v0.9.0] - [v0.8.0]
### Wallet
- Added Bitcoin Core RPC as a blockchain backend
- Added a `verify` feature that can be enabled to verify the unconfirmed txs we download against the consensus rules
## [v0.8.0] - [v0.7.0]
### Wallet
- Added an option that must be explicitly enabled to allow signing using non-`SIGHASH_ALL` sighashes (#350)
#### Changed
`get_address` now returns an `AddressInfo` struct that includes the index and derefs to `Address`.
## [v0.7.0] - [v0.6.0]
### Policy
#### Changed
Removed `fill_satisfaction` method in favor of enum parameter in `extract_policy` method
#### Added
Timelocks are considered (optionally) in building the `satisfaction` field
### Wallet
- Changed `Wallet::{sign, finalize_psbt}` now take a `&mut psbt` rather than consuming it.
- Require and validate `non_witness_utxo` for SegWit signatures by default, can be adjusted with `SignOptions`
- Replace the opt-in builder option `force_non_witness_utxo` with the opposite `only_witness_utxo`. From now on we will provide the `non_witness_utxo`, unless explicitly asked not to.
## [v0.6.0] - [v0.5.1]
### Misc
#### Changed
- New minimum supported rust version is 1.46.0
- Changed `AnyBlockchainConfig` to use serde tagged representation.
### Descriptor
#### Added
- Added ability to analyze a `PSBT` to check which and how many signatures are already available
### Wallet
#### Changed
- `get_new_address()` refactored to `get_address(AddressIndex::New)` to support different `get_address()` index selection strategies
#### Added
- Added `get_address(AddressIndex::LastUnused)` which returns the last derived address if it has not been used, or a new address if it has already received a transaction
- Added `get_address(AddressIndex::Peek(u32))` which returns a derived address for a specified descriptor index but does not change the current index
- Added `get_address(AddressIndex::Reset(u32))` which returns a derived address for a specified descriptor index and resets current index to the given value
- Added `get_psbt_input` to create the corresponding psbt input for a local utxo.
#### Fixed
- Fixed `coin_select` calculation for UTXOs where `value < fee` that caused over-/underflow errors.
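The `value < fee` underflow above comes from computing a UTXO's "effective value" (its value minus the fee required to spend it) in unsigned arithmetic. A minimal sketch of the signed-arithmetic approach, with hypothetical names rather than bdk's actual internals:

```rust
/// Effective value of a UTXO: its value minus the fee required to spend it.
/// Done in signed arithmetic so that `value < fee` yields a negative number
/// instead of a u64 underflow (illustrative only; names are hypothetical).
pub fn effective_value(value_sats: u64, spend_fee_sats: u64) -> i64 {
    value_sats as i64 - spend_fee_sats as i64
}

/// A UTXO is only worth selecting if spending it adds value on net.
pub fn is_economical(value_sats: u64, spend_fee_sats: u64) -> bool {
    effective_value(value_sats, spend_fee_sats) > 0
}
```

A 1 000-sat UTXO that costs 1 500 sats to spend now produces a negative effective value and is simply skipped, rather than wrapping around to a huge unsigned number.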
## [v0.5.1] - [v0.5.0]
### Misc
#### Changed
- Pin `hyper` to `=0.14.4` to make it compile on Rust 1.45
## [v0.5.0] - [v0.4.0]
### Misc
#### Changed
- Updated `electrum-client` to version `0.7`
### Wallet
#### Changed
- `FeeRate` constructors `from_sat_per_vb` and `default_min_relay_fee` are now `const` functions
## [v0.4.0] - [v0.3.0]
### Keys
#### Changed
- Renamed `DerivableKey::add_metadata()` to `DerivableKey::into_descriptor_key()`
- Renamed `ToDescriptorKey::to_descriptor_key()` to `IntoDescriptorKey::into_descriptor_key()`
#### Added
- Added an `ExtendedKey` type that is an enum of `bip32::ExtendedPubKey` and `bip32::ExtendedPrivKey`
- Added `DerivableKey::into_extended_key()` as the only method that needs to be implemented
### Misc
#### Removed
- Removed the `parse_descriptor` example, since it wasn't demonstrating any bdk-specific API anymore.
#### Changed
- Updated `bitcoin` to `0.26`, `miniscript` to `5.1` and `electrum-client` to `0.6`
#### Added
- Added support for the `signet` network (issue #62)
- Added a function to get the version of BDK at runtime
### Wallet
#### Changed
- Removed the explicit `id` argument from `Wallet::add_signer()` since that's now part of `Signer` itself
- Renamed `ToWalletDescriptor::to_wallet_descriptor()` to `IntoWalletDescriptor::into_wallet_descriptor()`
### Policy
#### Changed
- Removed unneeded `Result<(), PolicyError>` return type for `Satisfaction::finalize()`
- Removed the `TooManyItemsSelected` policy error (see commit message for more details)
## [v0.3.0] - [v0.2.0]
### Descriptor
#### Changed
- Added an alias `DescriptorError` for `descriptor::error::Error`
- Changed the error returned by `descriptor!()` and `fragment!()` to `DescriptorError`
- Changed the error type in `ToWalletDescriptor` to `DescriptorError`
- Improved checks on descriptors built using the macros
### Blockchain
#### Changed
- Remove `BlockchainMarker`, `OfflineClient` and `OfflineWallet` in favor of just using the unit
type to mark for a missing client.
- Upgrade `tokio` to `1.0`.
### Transaction Creation Overhaul
The `TxBuilder` is now created from the `build_tx` or `build_fee_bump` functions on wallet and the
final transaction is created by calling `finish` on the builder.
- Removed `TxBuilder::utxos` in favor of `TxBuilder::add_utxos`
- Added `Wallet::build_tx` to replace `Wallet::create_tx`
- Added `Wallet::build_fee_bump` to replace `Wallet::bump_fee`
- Added `Wallet::get_utxo`
- Added `Wallet::get_descriptor_for_keychain`
### `add_foreign_utxo`
- Renamed `UTXO` to `LocalUtxo`
- Added `WeightedUtxo` to replace floating `(UTXO, usize)`.
- Added `Utxo` enum to incorporate both local utxos and foreign utxos
- Added `TxBuilder::add_foreign_utxo` which allows adding a utxo external to the wallet.
### CLI
#### Changed
- Remove `cli.rs` module, `cli-utils` feature and `repl.rs` example; moved to new [`bdk-cli`](https://github.com/bitcoindevkit/bdk-cli) repository
## [v0.2.0] - [0.1.0-beta.1]
### Project
#### Added
- Add CONTRIBUTING.md
- Add a Discord badge to the README
- Add code coverage github actions workflow
- Add scheduled audit check in CI
- Add CHANGELOG.md
#### Changed
- Rename the library to `bdk`
- Rename `ScriptType` to `KeychainKind`
- Prettify README examples on github
- Change CI to github actions
- Bump rust-bitcoin to 0.25, fix Cargo dependencies
- Enable clippy for stable and tests by default
- Switch to "mainline" rust-miniscript
- Generate a different cache key for every CI job
- Fix to at least bitcoin ^0.25.2
#### Fixed
- Fix or ignore clippy warnings for all optional features except compact_filters
- Pin cc version because last breaks rocksdb build
### Blockchain
#### Added
- Add a trait to create `Blockchain`s from a configuration
- Add an `AnyBlockchain` enum to allow switching at runtime
- Document `AnyBlockchain` and `ConfigurableBlockchain`
- Use our Instant struct to be compatible with wasm
- Make esplora call in parallel
- Allow to set concurrency in Esplora config and optionally pass it in repl
#### Fixed
- Fix receiving a coinbase using Electrum/Esplora
- Use proper type for EsploraHeader, make conversion to BlockHeader infallible
- Eagerly unwrap height option, save one collect
#### Changed
- Simplify the architecture of blockchain traits
- Improve sync
- Remove unused variant HeaderParseFail
### CLI
#### Added
- Conditionally remove cli args according to enabled feature
#### Changed
- Add max_addresses param in sync
- Split the internal and external policy paths
### Database
#### Added
- Add `AnyDatabase` and `ConfigurableDatabase` traits
### Descriptor
#### Added
- Add a macro to write descriptors from code
- Add descriptor templates, add `DerivableKey`
- Add ToWalletDescriptor trait tests
- Add support for `sortedmulti` in `descriptor!`
- Add ExtractPolicy trait tests
- Add get_checksum tests, cleanup tests
- Add descriptor macro tests
#### Changed
- Improve the descriptor macro, add traits for key and descriptor types
#### Fixed
- Fix the recovery of a descriptor given a PSBT
### Keys
#### Added
- Add BIP39 support
- Take `ScriptContext` into account when converting keys
- Add a way to restrict the networks in which keys are valid
- Add a trait for keys that can be generated
- Fix entropy generation
- Less convoluted entropy generation
- Re-export tiny-bip39
- Implement `GeneratableKey` trait for `bitcoin::PrivateKey`
- Implement `ToDescriptorKey` trait for `GeneratedKey`
- Add a shortcut to generate keys with the default options
#### Fixed
- Fix all-keys and cli-utils tests
### Wallet
#### Added
- Allow defining static fees for transactions (fixes #137)
- Merging two match expressions for fee calculation
- Incorporate RBF rules into utxo selection function
- Add Branch and Bound coin selection
- Add tests for BranchAndBoundCoinSelection::coin_select
- Add tests for BranchAndBoundCoinSelection::bnb
- Add tests for BranchAndBoundCoinSelection::single_random_draw
- Add test that shwpkh populates witness_utxo
- Add witness and redeem scripts to PSBT outputs
- Add an option to include `PSBT_GLOBAL_XPUB`s in PSBTs
- Eagerly finalize inputs
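The branch-and-bound selector listed above searches for a subset of UTXOs whose summed value lands inside a window [target, target + cost_of_change], so that no change output is needed. A self-contained sketch of the core search over plain `u64` values (bdk's real `BranchAndBoundCoinSelection` also accounts for fees and weights, and falls back to single random draw when no exact match exists):

```rust
/// Branch-and-bound coin selection sketch: find indices of `values` summing
/// into [target, target + cost_of_change] so no change output is needed.
/// Illustrative only; not bdk's actual implementation.
pub fn bnb_select(values: &[u64], target: u64, cost_of_change: u64) -> Option<Vec<usize>> {
    fn search(
        values: &[u64],
        i: usize,
        sum: u64,
        remaining: u64, // total value still available at indices >= i
        target: u64,
        upper: u64,
        picked: &mut Vec<usize>,
    ) -> Option<Vec<usize>> {
        if sum >= target && sum <= upper {
            return Some(picked.clone()); // inside the window: solution found
        }
        if sum > upper || sum + remaining < target || i == values.len() {
            return None; // prune: overshot, or target is no longer reachable
        }
        // Branch 1: include values[i]
        picked.push(i);
        if let Some(sol) = search(values, i + 1, sum + values[i],
                                  remaining - values[i], target, upper, picked) {
            return Some(sol);
        }
        picked.pop();
        // Branch 2: omit values[i]
        search(values, i + 1, sum, remaining - values[i], target, upper, picked)
    }

    let total: u64 = values.iter().sum();
    search(values, 0, 0, total, target, target + cost_of_change, &mut Vec::new())
}
```

For example, with UTXOs of 4 000, 3 000 and 2 000 sats and a 5 000-sat target, the search settles on the {3 000, 2 000} pair rather than overshooting with 4 000 + 3 000.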
#### Changed
- Use collect to avoid iter unwrapping Options
- Make coin_select take may/must use utxo lists
- Improve `CoinSelectionAlgorithm`
- Refactor `Wallet::bump_fee()`
- Default to SIGHASH_ALL if not specified
- Replace ChangeSpendPolicy::filter_utxos with a predicate
- Make 'unspendable' into a HashSet
- Stop implicitly enforcing manual selection by .add_utxo
- Rename DumbCS to LargestFirstCoinSelection
- Rename must_use_utxos to required_utxos
- Rename may_use_utxos to optional_utxos
- Rename get_must_may_use_utxos to preselect_utxos
- Remove redundant Box around address validators
- Remove redundant Box around signers
- Make Signer and AddressValidator Send and Sync
- Split `send_all` into `set_single_recipient` and `drain_wallet`
- Use TXIN_DEFAULT_WEIGHT constant in coin selection
- Replace `must_use` with `required` in coin selection
- Take both spending policies into account in create_tx
- Check last derivation in cache to avoid recomputation
- Use the branch-and-bound cs by default
- Make coin_select return UTXOs instead of TxIns
- Build output lookup inside complete transaction
- Don't wrap SignersContainer arguments in Arc
- More consistent references with 'signers' variables
#### Fixed
- Fix signing for `ShWpkh` inputs
- Fix the recovery of a descriptor given a PSBT
### Examples
#### Added
- Support esplora blockchain source in repl
#### Changed
- Revert back the REPL example to use Electrum
- Remove the `magic` alias for `repl`
- Require esplora feature for repl example
#### Security
- Use dirs-next instead of dirs since the latter is unmaintained
## [0.1.0-beta.1] - 2020-09-08
### Blockchain
#### Added
- Lightweight Electrum client with SSL/SOCKS5 support
- Add a generalized "Blockchain" interface
- Add Error::OfflineClient
- Add the Esplora backend
- Use async I/O in the various blockchain impls
- Compact Filters blockchain implementation
- Add support for Tor
- Impl OnlineBlockchain for types wrapped in Arc
### Database
#### Added
- Add a generalized database trait and a Sled-based implementation
- Add an in-memory database
### Descriptor
#### Added
- Wrap Miniscript descriptors to support xpubs
- Policy and contribution
- Transform a descriptor into its "public" version
- Use `miniscript::DescriptorPublicKey`
### Macros
#### Added
- Add a feature to enable the async interface on non-wasm32 platforms
### Wallet
#### Added
- Wallet logic
- Add `assume_height_reached` in PSBTSatisfier
- Add an option to change the assumed current height
- Specify the policy branch with a map
- Add a few commands to handle psbts
- Add hd_keypaths to outputs
- Add a `TxBuilder` struct to simplify `create_tx()`'s interface
- Abstract coin selection in a separate trait
- Refill the address pool whenever necessary
- Implement the wallet import/export format from FullyNoded
- Add a type convert fee units, add `Wallet::estimate_fee()`
- TxOrdering, shuffle/bip69 support
- Add RBF and custom versions in TxBuilder
- Allow limiting the use of internal utxos in TxBuilder
- Add `force_non_witness_utxo()` to TxBuilder
- RBF and add a few tests
- Add AddressValidators
- Add explicit ordering for the signers
- Support signing the whole tx instead of individual inputs
- Create a PSBT signer from an ExtendedDescriptor
### Examples
#### Added
- Add REPL broadcast command
- Add a miniscript compiler CLI
- Expose list_transactions() in the REPL
- Use `MemoryDatabase` in the compiler example
- Make the REPL return JSON
[unreleased]: https://github.com/bitcoindevkit/bdk/compare/v0.15.0...HEAD
[0.1.0-beta.1]: https://github.com/bitcoindevkit/bdk/compare/96c87ea5...0.1.0-beta.1
[v0.2.0]: https://github.com/bitcoindevkit/bdk/compare/0.1.0-beta.1...v0.2.0
[v0.3.0]: https://github.com/bitcoindevkit/bdk/compare/v0.2.0...v0.3.0
[v0.4.0]: https://github.com/bitcoindevkit/bdk/compare/v0.3.0...v0.4.0
[v0.5.0]: https://github.com/bitcoindevkit/bdk/compare/v0.4.0...v0.5.0
[v0.5.1]: https://github.com/bitcoindevkit/bdk/compare/v0.5.0...v0.5.1
[v0.6.0]: https://github.com/bitcoindevkit/bdk/compare/v0.5.1...v0.6.0
[v0.7.0]: https://github.com/bitcoindevkit/bdk/compare/v0.6.0...v0.7.0
[v0.8.0]: https://github.com/bitcoindevkit/bdk/compare/v0.7.0...v0.8.0
[v0.9.0]: https://github.com/bitcoindevkit/bdk/compare/v0.8.0...v0.9.0
[v0.10.0]: https://github.com/bitcoindevkit/bdk/compare/v0.9.0...v0.10.0
[v0.11.0]: https://github.com/bitcoindevkit/bdk/compare/v0.10.0...v0.11.0
[v0.12.0]: https://github.com/bitcoindevkit/bdk/compare/v0.11.0...v0.12.0
[v0.13.0]: https://github.com/bitcoindevkit/bdk/compare/v0.12.0...v0.13.0
[v0.14.0]: https://github.com/bitcoindevkit/bdk/compare/v0.13.0...v0.14.0
[v0.15.0]: https://github.com/bitcoindevkit/bdk/compare/v0.14.0...v0.15.0

CONTRIBUTING.md

@@ -0,0 +1,118 @@
Contributing to BDK
==============================
The BDK project operates an open contributor model where anyone is welcome to
contribute towards development in the form of peer review, documentation,
testing and patches.
Anyone is invited to contribute without regard to technical experience,
"expertise", OSS experience, age, or other concern. However, the development of
cryptocurrencies demands a high-level of rigor, adversarial thinking, thorough
testing and risk-minimization.
Any bug may cost users real money. That being said, we deeply welcome people
contributing for the first time to an open source project or picking up Rust while
contributing. Don't be shy, you'll learn.
Communications Channels
-----------------------
Communication about BDK happens primarily on the [BDK Discord](https://discord.gg/dstn4dQ).
Discussion about code base improvements happens in GitHub [issues](https://github.com/bitcoindevkit/bdk/issues) and
on [pull requests](https://github.com/bitcoindevkit/bdk/pulls).
Contribution Workflow
---------------------
The codebase is maintained using the "contributor workflow" where everyone
without exception contributes patch proposals using "pull requests". This
facilitates social contribution, easy testing and peer review.
To contribute a patch, the workflow is as follows:
1. Fork Repository
2. Create topic branch
3. Commit patches
In general commits should be atomic and diffs should be easy to read.
For this reason do not mix any formatting fixes or code moves with actual code
changes. Further, each commit, individually, should compile and pass tests, in
order to ensure git bisect and other automated tools function properly.
When adding a new feature, thought must be given to the long term technical
debt.
Every new feature should be covered by functional tests where possible.
When refactoring, structure your PR to make it easy to review and don't
hesitate to split it into multiple small, focused PRs.
The Minimal Supported Rust Version is 1.46 (enforced by our CI).
Commits should cover both the issue fixed and the solution's rationale.
These [guidelines](https://chris.beams.io/posts/git-commit/) should be kept in mind.
To facilitate communication with other contributors, the project is making use
of GitHub's "assignee" field. First check that no one is assigned and then
comment suggesting that you're working on it. If someone is already assigned,
don't hesitate to ask whether the assigned party or previous commenters are
still working on it if it has been a while.
Deprecation policy
------------------
Where possible, breaking existing APIs should be avoided. Instead, add new APIs and
use [`#[deprecated]`](https://github.com/rust-lang/rfcs/blob/master/text/1270-deprecation.md)
to discourage use of the old one.
Deprecated APIs are typically maintained for one release cycle. In other words, an
API that has been deprecated with the 0.10 release can be expected to be removed in the
0.11 release. This allows for smoother upgrades without incurring too much technical
debt inside this library.
If you deprecated an API as part of a contribution, we encourage you to "own" that API
and send a follow-up to remove it as part of the next release cycle.
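The pattern described above can be sketched with Rust's built-in attribute; the free functions here are hypothetical stand-ins for a renamed API:

```rust
/// The replacement API, introduced alongside the deprecation.
pub fn into_descriptor_key() -> &'static str {
    "descriptor key"
}

/// The old API stays for one release cycle, forwarding to the new one and
/// pointing callers at the replacement via the `#[deprecated]` attribute.
#[deprecated(since = "0.10.0", note = "use `into_descriptor_key` instead")]
pub fn to_descriptor_key() -> &'static str {
    into_descriptor_key()
}
```

Callers of the old name keep compiling but see a deprecation warning naming the replacement, which gives downstream users a full release cycle to migrate.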
Peer review
-----------
Anyone may participate in peer review which is expressed by comments in the
pull request. Typically reviewers will review the code for obvious errors, as
well as test out the patch set and opine on the technical merits of the patch.
PRs should be reviewed first on the conceptual level before focusing on code
style or grammar fixes.
Coding Conventions
------------------
This codebase uses spaces, not tabs.
Use `cargo fmt` with the default settings to format code before committing.
This is also enforced by the CI.
Security
--------
Security is a high priority of BDK; disclosure of security vulnerabilities helps
prevent user loss of funds.
Note that BDK is currently considered "pre-production"; during this time, there
is no special handling of security issues. Please simply open an issue on
GitHub.
Testing
-------
Related to the security aspect, BDK developers take testing very seriously.
Due to the modular nature of the project, writing new functional tests is easy
and good test coverage of the codebase is an important goal.
Refactoring the project to enable fine-grained unit testing is also an ongoing
effort.
Going further
-------------
You may be interested in Jon Atack's guides on [How to review Bitcoin Core PRs](https://github.com/jonatack/bitcoin-development/blob/master/how-to-review-bitcoin-core-prs.md)
and [How to make Bitcoin Core PRs](https://github.com/jonatack/bitcoin-development/blob/master/how-to-make-bitcoin-core-prs.md).
While there are differences between the projects in terms of context and
maturity, many of the suggestions offered apply to this project.
Overall, have fun :)

Cargo.toml

@@ -1,58 +1,115 @@
[package]
name = "magical-bitcoin-wallet"
version = "0.1.0"
name = "bdk"
version = "0.15.1-dev"
edition = "2018"
authors = ["Riccardo Casatta <riccardo@casatta.it>", "Alekos Filini <alekos.filini@gmail.com>"]
authors = ["Alekos Filini <alekos.filini@gmail.com>", "Riccardo Casatta <riccardo@casatta.it>"]
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk"
description = "A modern, lightweight, descriptor-based wallet library"
keywords = ["bitcoin", "wallet", "descriptor", "psbt"]
readme = "README.md"
license = "MIT OR Apache-2.0"
[dependencies]
bdk-macros = "^0.6"
log = "^0.4"
bitcoin = { version = "0.23", features = ["use-serde"] }
miniscript = { version = "0.12" }
miniscript = { version = "^6.0", features = ["use-serde"] }
bitcoin = { version = "^0.27", features = ["use-serde", "base64"] }
serde = { version = "^1.0", features = ["derive"] }
serde_json = { version = "^1.0" }
base64 = "^0.11"
async-trait = "0.1"
rand = "^0.7"
# Optional dependencies
sled = { version = "0.31.0", optional = true }
electrum-client = { git = "https://github.com/MagicalBitcoin/rust-electrum-client.git", optional = true }
reqwest = { version = "0.10", optional = true, features = ["json"] }
sled = { version = "0.34", optional = true }
electrum-client = { version = "0.8", optional = true }
rusqlite = { version = "0.25.3", optional = true }
ahash = { version = "=0.7.4", optional = true }
reqwest = { version = "0.11", optional = true, features = ["json"] }
ureq = { version = "~2.2.0", features = ["json"], optional = true }
futures = { version = "0.3", optional = true }
clap = { version = "2.33", optional = true }
async-trait = { version = "0.1", optional = true }
rocksdb = { version = "0.14", default-features = false, features = ["snappy"], optional = true }
cc = { version = ">=1.0.64", optional = true }
socks = { version = "0.3", optional = true }
lazy_static = { version = "1.4", optional = true }
bip39 = { version = "1.0.1", optional = true }
bitcoinconsensus = { version = "0.19.0-3", optional = true }
# Needed by bdk_blockchain_tests macro
bitcoincore-rpc = { version = "0.14", optional = true }
# Platform-specific dependencies
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
tokio = { version = "1", features = ["rt"] }
[target.'cfg(target_arch = "wasm32")'.dependencies]
async-trait = "0.1"
js-sys = "0.3"
rand = { version = "^0.7", features = ["wasm-bindgen"] }
[features]
minimal = []
compiler = ["miniscript/compiler"]
verify = ["bitcoinconsensus"]
default = ["key-value-db", "electrum"]
electrum = ["electrum-client"]
esplora = ["reqwest", "futures"]
sqlite = ["rusqlite", "ahash"]
compact_filters = ["rocksdb", "socks", "lazy_static", "cc"]
key-value-db = ["sled"]
cli-utils = ["clap"]
all-keys = ["keys-bip39"]
keys-bip39 = ["bip39"]
rpc = ["bitcoincore-rpc"]
# We currently provide multiple implementations of `Blockchain`, all are
# blocking except for the `EsploraBlockchain` which can be either async or
# blocking, depending on the HTTP client in use.
#
# - Users wanting asynchronous HTTP calls should enable `async-interface` to get
# access to the asynchronous method implementations. Then, if Esplora is wanted,
# enable `esplora` AND `reqwest` (`--features=use-esplora-reqwest`).
# - Users wanting blocking HTTP calls can use any of the other blockchain
# implementations (`compact_filters`, `electrum`, or `esplora`). Users wanting to
# use Esplora should enable `esplora` AND `ureq` (`--features=use-esplora-ureq`).
#
# WARNING: Please take care with the features below, various combinations will
# fail to build. We cannot currently build `bdk` with `--all-features`.
async-interface = ["async-trait"]
electrum = ["electrum-client"]
# MUST ALSO USE `--no-default-features`.
use-esplora-reqwest = ["esplora", "reqwest", "reqwest/socks", "futures"]
use-esplora-ureq = ["esplora", "ureq", "ureq/socks"]
# Typical configurations will not need to use `esplora` feature directly.
esplora = []
# Debug/Test features
test-blockchains = ["bitcoincore-rpc", "electrum-client"]
test-electrum = ["electrum", "electrsd/electrs_0_8_10", "test-blockchains"]
test-rpc = ["rpc", "electrsd/electrs_0_8_10", "test-blockchains"]
test-esplora = ["electrsd/legacy", "electrsd/esplora_a33e97e1", "test-blockchains"]
test-md-docs = ["electrum"]
[dev-dependencies]
tokio = { version = "0.2", features = ["macros"] }
lazy_static = "1.4"
rustyline = "6.0"
dirs = "2.0"
env_logger = "0.7"
rand = "0.7"
clap = "2.33"
electrsd = { version= "0.13", features = ["trigger", "bitcoind_22_0"] }
[[example]]
name = "repl"
required-features = ["cli-utils"]
name = "address_validator"
[[example]]
name = "psbt"
[[example]]
name = "parse_descriptor"
name = "compact_filters_balance"
required-features = ["compact_filters"]
[[example]]
name = "miniscriptc"
path = "examples/compiler.rs"
required-features = ["compiler"]
# Provide a more user-friendly alias for the REPL
[[example]]
name = "magic"
path = "examples/repl.rs"
required-features = ["cli-utils"]
[workspace]
members = ["macros"]
[package.metadata.docs.rs]
features = ["compiler", "electrum", "esplora", "ureq", "compact_filters", "rpc", "key-value-db", "sqlite", "all-keys", "verify"]
# defines the configuration attribute `docsrs`
rustdoc-args = ["--cfg", "docsrs"]

DEVELOPMENT_CYCLE.md

@@ -0,0 +1,46 @@
# Development Cycle
This project follows a regular releasing schedule similar to the one [used by the Rust language](https://doc.rust-lang.org/book/appendix-07-nightly-rust.html). In short, this means that a new release is made at a regular cadence, with all the feature/bugfixes that made it to `master` in time. This ensures that we don't keep delaying releases waiting for "just one more little thing".
We decided to maintain a faster release cycle while the library is still in "beta", i.e. before release `1.0.0`: since we are constantly adding new features and, even more importantly, fixing issues, we want developers to have access to those updates as fast as possible. For this reason we will make a release **every 4 weeks**.
Once the project has reached a more mature state (>= `1.0.0`), we will very likely switch to longer release cycles of **6 weeks**.
The "feature freeze" will happen **one week before the release date**. This means a new branch will be created originating from the `master` tip at that time, and in that branch we will stop adding new features and only focus on ensuring the ones we've added are working properly.
```
master:        - - - - * - - - * - - - - - - * - - - * ...
                       |      /              |      |
release/0.x.0:         * - - #               |      |
                                             |     /
release/0.y.0:                               * - - #
```
As soon as the release is tagged and published, the `release` branch will be merged back into `master` to apply the new `Cargo.toml` version and all the other fixes made during the feature-freeze window.
## Making the Release
What follows are notes and procedures that maintainers can refer to when making releases. All the commits and tags must be signed and, ideally, also [timestamped](https://github.com/opentimestamps/opentimestamps-client/blob/master/doc/git-integration.md).
Pre-`v1.0.0` our "major" releases only affect the "minor" semver value. Accordingly, our "minor" releases will only affect the "patch" value.
1. Create a new branch called `release/x.y.z` from `master`. Double check that your local `master` is up-to-date with the upstream repo before doing so.
2. Make a commit on the release branch to bump the version to `x.y.z-rc.1`. The message should be "Bump version to x.y.z-rc.1".
3. Push the new branch to `bitcoindevkit/bdk` on GitHub.
4. During the one week of feature freeze run additional tests on the release branch.
5. If a bug is found:
- If it's a minor issue you can just fix it in the release branch, since it will be merged back to `master` eventually
- For bigger issues you can fix them on `master` and then *cherry-pick* the commit to the release branch
6. Update the changelog with the new release version.
7. Update `src/lib.rs` with the new version (line ~43)
8. On release day, make a commit on the release branch to bump the version to `x.y.z`. The message should be "Bump version to x.y.z".
9. Add a tag to this commit. The tag name should be `vx.y.z` (for example `v0.5.0`), and the message "Release x.y.z". Make sure the tag is signed, for extra safety use the explicit `--sign` flag.
10. Push the new commits to the upstream release branch, wait for the CI to finish one last time.
11. Publish **all** the updated crates to crates.io.
12. Make a new commit to bump the version value to `x.y.(z+1)-dev`. The message should be "Bump version to x.y.(z+1)-dev".
13. Merge the release branch back into `master`.
14. If the `master` branch contains any unreleased changes to the `bdk-macros` crate, change the `bdk` Cargo.toml `[dependencies]` to point to the local path (i.e. `bdk-macros = { path = "./macros" }`)
15. Create the release on GitHub: go to "tags", click on the dots on the right and select "Create Release". Then set the title to `vx.y.z` and write down some brief release notes.
16. Make sure the new release shows up on crates.io and that the docs are built correctly on docs.rs.
17. Announce the release on Twitter, Discord and Telegram.
18. Celebrate :tada:

LICENSE

@@ -1,21 +1,14 @@
-MIT License
-
-Copyright (c) 2020 Magical Bitcoin
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+This software is licensed under [Apache 2.0](LICENSE-APACHE) or
+[MIT](LICENSE-MIT), at your option.
+
+Some files retain their own copyright notice, however, for full authorship
+information, see version control history.
+
+Except as otherwise noted in individual files, all files in this repository are
+licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
+http://www.apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
+http://opensource.org/licenses/MIT>, at your option.
+
+You may not use, copy, modify, merge, publish, distribute, sublicense, and/or
+sell copies of this software or any files in this repository except in
+accordance with one or both of these licenses.

LICENSE-APACHE

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

LICENSE-MIT (new file, +16 lines)

@@ -0,0 +1,16 @@
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md (189 lines changed)

@@ -1,7 +1,188 @@
# Magical Bitcoin Wallet
<div align="center">
<h1>BDK</h1>
A modern, lightweight, descriptor-based wallet written in Rust!
<img src="./static/bdk.svg" width="220" />
## Getting Started
<p>
<strong>A modern, lightweight, descriptor-based wallet library written in Rust!</strong>
</p>
See the documentation at [magicalbitcoin.org](https://magicalbitcoin.org)
<p>
<a href="https://crates.io/crates/bdk"><img alt="Crate Info" src="https://img.shields.io/crates/v/bdk.svg"/></a>
<a href="https://github.com/bitcoindevkit/bdk/blob/master/LICENSE"><img alt="MIT or Apache-2.0 Licensed" src="https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg"/></a>
<a href="https://github.com/bitcoindevkit/bdk/actions?query=workflow%3ACI"><img alt="CI Status" src="https://github.com/bitcoindevkit/bdk/workflows/CI/badge.svg"></a>
<a href="https://codecov.io/gh/bitcoindevkit/bdk"><img src="https://codecov.io/gh/bitcoindevkit/bdk/branch/master/graph/badge.svg"/></a>
<a href="https://docs.rs/bdk"><img alt="API Docs" src="https://img.shields.io/badge/docs.rs-bdk-green"/></a>
<a href="https://blog.rust-lang.org/2020/08/27/Rust-1.46.0.html"><img alt="Rustc Version 1.46+" src="https://img.shields.io/badge/rustc-1.46%2B-lightgrey.svg"/></a>
<a href="https://discord.gg/d7NkDKm"><img alt="Chat on Discord" src="https://img.shields.io/discord/753336465005608961?logo=discord"></a>
</p>
<h4>
<a href="https://bitcoindevkit.org">Project Homepage</a>
<span> | </span>
<a href="https://docs.rs/bdk">Documentation</a>
</h4>
</div>
## About
The `bdk` library aims to be the core building block for Bitcoin wallets of any kind.
* It uses [Miniscript](https://github.com/rust-bitcoin/rust-miniscript) to support descriptors with generalized conditions. This exact same library can be used to build
single-sig wallets, multisigs, timelocked contracts and more.
* It supports multiple blockchain backends and databases, allowing developers to choose exactly what's right for their projects.
* It's built to be cross-platform: the core logic works on desktop, mobile, and even WebAssembly.
* It's very easy to extend: developers can implement customized logic for blockchain backends, databases, signers, coin selection, and more, without having to fork and modify this library.
## Examples
### Sync the balance of a descriptor
```rust,no_run
use bdk::Wallet;
use bdk::database::MemoryDatabase;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;
fn main() -> Result<(), bdk::Error> {
let client = Client::new("ssl://electrum.blockstream.info:60002")?;
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;
wallet.sync(noop_progress(), None)?;
println!("Descriptor balance: {} SAT", wallet.get_balance()?);
Ok(())
}
```
### Generate a few addresses
```rust
use bdk::{Wallet, database::MemoryDatabase};
use bdk::wallet::AddressIndex::New;
fn main() -> Result<(), bdk::Error> {
let wallet = Wallet::new_offline(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
)?;
println!("Address #0: {}", wallet.get_address(New)?);
println!("Address #1: {}", wallet.get_address(New)?);
println!("Address #2: {}", wallet.get_address(New)?);
Ok(())
}
```
### Create a transaction
```rust,no_run
use bdk::{FeeRate, Wallet};
use bdk::database::MemoryDatabase;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;
use bdk::wallet::AddressIndex::New;
use bitcoin::consensus::serialize;
fn main() -> Result<(), bdk::Error> {
let client = Client::new("ssl://electrum.blockstream.info:60002")?;
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;
wallet.sync(noop_progress(), None)?;
let send_to = wallet.get_address(New)?;
let (psbt, details) = {
let mut builder = wallet.build_tx();
builder
.add_recipient(send_to.script_pubkey(), 50_000)
.enable_rbf()
.do_not_spend_change()
.fee_rate(FeeRate::from_sat_per_vb(5.0));
builder.finish()?
};
println!("Transaction details: {:#?}", details);
println!("Unsigned PSBT: {}", base64::encode(&serialize(&psbt)));
Ok(())
}
```
### Sign a transaction
```rust,no_run
use bdk::{Wallet, SignOptions, database::MemoryDatabase};
use bitcoin::consensus::deserialize;
fn main() -> Result<(), bdk::Error> {
let wallet = Wallet::new_offline(
"wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
)?;
let psbt = "...";
let mut psbt = deserialize(&base64::decode(psbt).unwrap())?;
let finalized = wallet.sign(&mut psbt, SignOptions::default())?;
Ok(())
}
```
## Testing
### Unit testing
```
cargo test
```
### Integration testing
Integration tests require enabling one of the testing features, for example:
```
cargo test --features test-electrum
```
The other options are `test-esplora` or `test-rpc`.
Note that the `electrs` and `bitcoind` binaries are automatically downloaded (on macOS and Linux). To use binaries you already have installed, build with `--no-default-features` and provide their paths via the `BITCOIND_EXE` and `ELECTRS_EXE` environment variables.
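For instance, the Electrum-backed integration tests can be pointed at locally installed binaries with something like the following (the lookup via `command -v` is illustrative; the paths depend on your local installation):

```shell
# Illustrative sketch: point the test harness at existing binaries instead
# of letting it download them. Adjust paths for your system.
export BITCOIND_EXE="$(command -v bitcoind)"
export ELECTRS_EXE="$(command -v electrs)"
cargo test --no-default-features --features test-electrum
```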
## License
Licensed under either of
* Apache License, Version 2.0
([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license
([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.

ci/start-core.sh (new executable file, +14 lines)

@@ -0,0 +1,14 @@
#!/usr/bin/env sh
echo "Starting bitcoin node."
mkdir $GITHUB_WORKSPACE/.bitcoin
/root/bitcoind -regtest -server -daemon -datadir=$GITHUB_WORKSPACE/.bitcoin -fallbackfee=0.0002 -rpcallowip=0.0.0.0/0 -rpcbind=0.0.0.0 -blockfilterindex=1 -peerblockfilters=1
echo "Waiting for bitcoin node."
until /root/bitcoin-cli -regtest -datadir=$GITHUB_WORKSPACE/.bitcoin getblockchaininfo; do
sleep 1
done
/root/bitcoin-cli -regtest -datadir=$GITHUB_WORKSPACE/.bitcoin createwallet $BDK_RPC_WALLET
echo "Generating 150 bitcoin blocks."
ADDR=$(/root/bitcoin-cli -regtest -datadir=$GITHUB_WORKSPACE/.bitcoin -rpcwallet=$BDK_RPC_WALLET getnewaddress)
/root/bitcoin-cli -regtest -datadir=$GITHUB_WORKSPACE/.bitcoin generatetoaddress 150 $ADDR

codecov.yaml (new file, +13 lines)

@@ -0,0 +1,13 @@
coverage:
status:
project:
default:
target: auto
threshold: 1%
base: auto
informational: false
patch:
default:
target: auto
threshold: 100%
base: auto


@@ -0,0 +1,61 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::sync::Arc;
use bdk::bitcoin;
use bdk::database::MemoryDatabase;
use bdk::descriptor::HdKeyPaths;
use bdk::wallet::address_validator::{AddressValidator, AddressValidatorError};
use bdk::KeychainKind;
use bdk::Wallet;
use bdk::wallet::AddressIndex::New;
use bitcoin::hashes::hex::FromHex;
use bitcoin::util::bip32::Fingerprint;
use bitcoin::{Network, Script};
#[derive(Debug)]
struct DummyValidator;
impl AddressValidator for DummyValidator {
fn validate(
&self,
keychain: KeychainKind,
hd_keypaths: &HdKeyPaths,
script: &Script,
) -> Result<(), AddressValidatorError> {
let (_, path) = hd_keypaths
.values()
.find(|(fing, _)| fing == &Fingerprint::from_hex("bc123c3e").unwrap())
.ok_or(AddressValidatorError::InvalidScript)?;
println!(
"Validating `{:?}` {} address, script: {}",
keychain, path, script
);
Ok(())
}
}
fn main() -> Result<(), bdk::Error> {
let descriptor = "sh(and_v(v:pk(tpubDDpWvmUrPZrhSPmUzCMBHffvC3HyMAPnWDSAQNBTnj1iZeJa7BZQEttFiP4DS4GCcXQHezdXhn86Hj6LHX5EDstXPWrMaSneRWM8yUf6NFd/*),after(630000)))";
let mut wallet =
Wallet::new_offline(descriptor, None, Network::Regtest, MemoryDatabase::new())?;
wallet.add_address_validator(Arc::new(DummyValidator));
wallet.get_address(New)?;
wallet.get_address(New)?;
wallet.get_address(New)?;
Ok(())
}


@@ -0,0 +1,43 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bdk::blockchain::compact_filters::*;
use bdk::blockchain::noop_progress;
use bdk::database::MemoryDatabase;
use bdk::*;
use bitcoin::*;
use blockchain::compact_filters::CompactFiltersBlockchain;
use blockchain::compact_filters::CompactFiltersError;
use log::info;
use std::sync::Arc;
/// This example returns the wallet balance using compact filters.
/// It requires a synced local Bitcoin Core 0.21 node running on testnet with `blockfilterindex=1` and `peerblockfilters=1`.
fn main() -> Result<(), CompactFiltersError> {
env_logger::init();
info!("start");
let num_threads = 4;
let mempool = Arc::new(Mempool::default());
let peers = (0..num_threads)
.map(|_| Peer::connect("localhost:18333", Arc::clone(&mempool), Network::Testnet))
.collect::<Result<_, _>>()?;
let blockchain = CompactFiltersBlockchain::new(peers, "./wallet-filters", Some(500_000))?;
info!("done {:?}", blockchain);
let descriptor = "wpkh(tpubD6NzVbkrYhZ4X2yy78HWrr1M9NT8dKeWfzNiQqDdMqqa9UmmGztGGz6TaLFGsLfdft5iu32gxq1T4eMNxExNNWzVCpf9Y6JZi5TnqoC9wJq/*)";
let database = MemoryDatabase::default();
let wallet =
Arc::new(Wallet::new(descriptor, None, Network::Testnet, database, blockchain).unwrap());
wallet.sync(noop_progress(), None).unwrap();
info!("balance: {}", wallet.get_balance()?);
Ok(())
}


@@ -1,29 +1,37 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
extern crate bdk;
extern crate bitcoin;
extern crate clap;
extern crate log;
extern crate magical_bitcoin_wallet;
extern crate miniscript;
extern crate rand;
extern crate serde_json;
extern crate sled;
use std::error::Error;
use std::str::FromStr;
use log::info;
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use clap::{App, Arg};
use bitcoin::Network;
use miniscript::policy::Concrete;
use miniscript::Descriptor;
use magical_bitcoin_wallet::types::ScriptType;
use magical_bitcoin_wallet::{OfflineWallet, Wallet};
use bdk::database::memory::MemoryDatabase;
use bdk::wallet::AddressIndex::New;
use bdk::{KeychainKind, Wallet};
fn main() {
fn main() -> Result<(), Box<dyn Error>> {
env_logger::init_from_env(
env_logger::Env::default().filter_or(env_logger::DEFAULT_FILTER_ENV, "info"),
);
@@ -55,56 +63,43 @@ fn main() {
.help("Sets the network")
.takes_value(true)
.default_value("testnet")
.possible_values(&["testnet", "regtest"]),
.possible_values(&["testnet", "regtest", "bitcoin", "signet"]),
)
.get_matches();
let policy_str = matches.value_of("POLICY").unwrap();
info!("Compiling policy: {}", policy_str);
let policy = Concrete::<String>::from_str(&policy_str).unwrap();
let compiled = policy.compile().unwrap();
let policy = Concrete::<String>::from_str(policy_str)?;
let descriptor = match matches.value_of("TYPE").unwrap() {
"sh" => Descriptor::Sh(compiled),
"wsh" => Descriptor::Wsh(compiled),
"sh-wsh" => Descriptor::ShWsh(compiled),
"sh" => Descriptor::new_sh(policy.compile()?)?,
"wsh" => Descriptor::new_wsh(policy.compile()?)?,
"sh-wsh" => Descriptor::new_sh_wsh(policy.compile()?)?,
_ => panic!("Invalid type"),
};
info!("... Descriptor: {}", descriptor);
let temp_db = {
let mut temp_db = std::env::temp_dir();
let rand_string: String = thread_rng().sample_iter(&Alphanumeric).take(15).collect();
temp_db.push(rand_string);
let database = MemoryDatabase::new();
let database = sled::open(&temp_db).unwrap();
let network = matches
.value_of("network")
.map(|n| Network::from_str(n))
.transpose()
.unwrap()
.unwrap_or(Network::Testnet);
let wallet = Wallet::new_offline(&format!("{}", descriptor), None, network, database)?;
let network = match matches.value_of("network") {
Some("regtest") => Network::Regtest,
Some("testnet") | _ => Network::Testnet,
};
let wallet: OfflineWallet<_> = Wallet::new_offline(
&format!("{}", descriptor),
None,
network,
database.open_tree("").unwrap(),
)
.unwrap();
info!("... First address: {}", wallet.get_address(New)?);
info!("... First address: {}", wallet.get_new_address().unwrap());
if matches.is_present("parsed_policy") {
let spending_policy = wallet.policies(KeychainKind::External)?;
info!(
"... Spending policy:\n{}",
serde_json::to_string_pretty(&spending_policy)?
);
}
if matches.is_present("parsed_policy") {
let spending_policy = wallet.policies(ScriptType::External).unwrap();
info!(
"... Spending policy:\n{}",
serde_json::to_string_pretty(&spending_policy).unwrap()
);
}
temp_db
};
std::fs::remove_dir_all(temp_db).unwrap();
Ok(())
}


@@ -1,31 +0,0 @@
extern crate magical_bitcoin_wallet;
extern crate serde_json;
use std::str::FromStr;
use magical_bitcoin_wallet::bitcoin::*;
use magical_bitcoin_wallet::descriptor::*;
fn main() {
let desc = "wsh(or_d(\
thresh_m(\
2,[d34db33f/44'/0'/0']xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/*,tprv8ZgxMBicQKsPduL5QnGihpprdHyypMGi4DhimjtzYemu7se5YQNcZfAPLqXRuGHb5ZX2eTQj62oNqMnyxJ7B7wz54Uzswqw8fFqMVdcmVF7/1/*\
),\
and_v(vc:pk_h(cVt4o7BGAig1UXywgGSmARhxMdzP5qvQsxKkSsc1XEkw3tDTQFpy),older(1000))\
))";
let extended_desc = ExtendedDescriptor::from_str(desc).unwrap();
println!("{:?}", extended_desc);
let policy = extended_desc.extract_policy().unwrap();
println!("policy: {}", serde_json::to_string(&policy).unwrap());
let derived_desc = extended_desc.derive(42).unwrap();
println!("{:?}", derived_desc);
let addr = derived_desc.address(Network::Testnet).unwrap();
println!("{}", addr);
let script = derived_desc.witness_script();
println!("{:?}", script);
}


@@ -1,50 +0,0 @@
extern crate base64;
extern crate magical_bitcoin_wallet;
use std::str::FromStr;
use magical_bitcoin_wallet::bitcoin;
use magical_bitcoin_wallet::descriptor::*;
use magical_bitcoin_wallet::psbt::*;
use magical_bitcoin_wallet::signer::Signer;
use bitcoin::consensus::encode::{deserialize, serialize};
use bitcoin::util::psbt::PartiallySignedTransaction;
use bitcoin::SigHashType;
fn main() {
let desc = "pkh(tprv8ZgxMBicQKsPd7Uf69XL1XwhmjHopUGep8GuEiJDZmbQz6o58LninorQAfcKZWARbtRtfnLcJ5MQ2AtHcQJCCRUcMRvmDUjyEmNUWwx8UbK/*)";
let extended_desc = ExtendedDescriptor::from_str(desc).unwrap();
let psbt_str = "cHNidP8BAFMCAAAAAd9SiQfxXZ+CKjgjRNonWXsnlA84aLvjxtwCmMfRc0ZbAQAAAAD+////ASjS9QUAAAAAF6kUYJR3oB0lS1M0W1RRMMiENSX45IuHAAAAAAABAPUCAAAAA9I7/OqeFeOFdr5VTLnj3UI/CNRw2eWmMPf7qDv6uIF6AAAAABcWABTG+kgr0g44V0sK9/9FN9oG/CxMK/7///+d0ffphPcV6FE9J/3ZPKWu17YxBnWWTJQyRJs3HUo1gwEAAAAA/v///835mYd9DmnjVnUKd2421MDoZmIxvB4XyJluN3SPUV9hAAAAABcWABRfvwFGp+x/yWdXeNgFs9v0duyeS/7///8CFbH+AAAAAAAXqRSEnTOAjJN/X6ZgR9ftKmwisNSZx4cA4fUFAAAAABl2qRTs6pS4x17MSQ4yNs/1GPsfdlv2NIisAAAAACIGApVE9PPtkcqp8Da43yrXGv4nLOotZdyxwJoTWQxuLxIuCAxfmh4JAAAAAAA=";
let psbt_buf = base64::decode(psbt_str).unwrap();
let mut psbt: PartiallySignedTransaction = deserialize(&psbt_buf).unwrap();
let signer = PSBTSigner::from_descriptor(&psbt.global.unsigned_tx, &extended_desc).unwrap();
for (index, input) in psbt.inputs.iter_mut().enumerate() {
for (pubkey, (fing, path)) in &input.hd_keypaths {
let sighash = input.sighash_type.unwrap_or(SigHashType::All);
// Ignore the "witness_utxo" case because we know this psbt is a legacy tx
if let Some(non_wit_utxo) = &input.non_witness_utxo {
let prev_script = &non_wit_utxo.output
[psbt.global.unsigned_tx.input[index].previous_output.vout as usize]
.script_pubkey;
let (signature, sighash) = signer
.sig_legacy_from_fingerprint(index, sighash, fing, path, prev_script)
.unwrap()
.unwrap();
let mut concat_sig = Vec::new();
concat_sig.extend_from_slice(&signature.serialize_der());
concat_sig.extend_from_slice(&[sighash as u8]);
input.partial_sigs.insert(*pubkey, concat_sig);
}
}
}
println!("signed: {}", base64::encode(&serialize(&psbt)));
}


@@ -1,126 +0,0 @@
use std::fs;
use std::path::PathBuf;
use std::sync::Arc;
use rustyline::error::ReadlineError;
use rustyline::Editor;
use clap::AppSettings;
#[allow(unused_imports)]
use log::{debug, error, info, trace, LevelFilter};
use bitcoin::Network;
use magical_bitcoin_wallet::bitcoin;
use magical_bitcoin_wallet::blockchain::ElectrumBlockchain;
use magical_bitcoin_wallet::cli;
use magical_bitcoin_wallet::sled;
use magical_bitcoin_wallet::{Client, Wallet};
fn prepare_home_dir() -> PathBuf {
let mut dir = PathBuf::new();
dir.push(&dirs::home_dir().unwrap());
dir.push(".magical-bitcoin");
if !dir.exists() {
info!("Creating home directory {}", dir.as_path().display());
fs::create_dir(&dir).unwrap();
}
dir.push("database.sled");
dir
}
#[tokio::main]
async fn main() {
env_logger::init();
let app = cli::make_cli_subcommands();
let mut repl_app = app.clone().setting(AppSettings::NoBinaryName);
let app = cli::add_global_flags(app);
let matches = app.get_matches();
// TODO
// let level = match matches.occurrences_of("v") {
// 0 => LevelFilter::Info,
// 1 => LevelFilter::Debug,
// _ => LevelFilter::Trace,
// };
let network = match matches.value_of("network") {
Some("regtest") => Network::Regtest,
Some("testnet") | _ => Network::Testnet,
};
let descriptor = matches.value_of("descriptor").unwrap();
let change_descriptor = matches.value_of("change_descriptor");
debug!("descriptors: {:?} {:?}", descriptor, change_descriptor);
let database = sled::open(prepare_home_dir().to_str().unwrap()).unwrap();
let tree = database
.open_tree(matches.value_of("wallet").unwrap())
.unwrap();
debug!("database opened successfully");
let client = Client::new(matches.value_of("server").unwrap())
.await
.unwrap();
let wallet = Wallet::new(
descriptor,
change_descriptor,
network,
tree,
ElectrumBlockchain::from(client),
)
.await
.unwrap();
let wallet = Arc::new(wallet);
if let Some(_sub_matches) = matches.subcommand_matches("repl") {
let mut rl = Editor::<()>::new();
// if rl.load_history("history.txt").is_err() {
// println!("No previous history.");
// }
loop {
let readline = rl.readline(">> ");
match readline {
Ok(line) => {
if line.trim() == "" {
continue;
}
rl.add_history_entry(line.as_str());
let matches = repl_app.get_matches_from_safe_borrow(line.split(" "));
if let Err(err) = matches {
println!("{}", err.message);
continue;
}
if let Some(s) = cli::handle_matches(&Arc::clone(&wallet), matches.unwrap())
.await
.unwrap()
{
println!("{}", s);
}
}
Err(ReadlineError::Interrupted) => continue,
Err(ReadlineError::Eof) => break,
Err(err) => {
println!("{:?}", err);
break;
}
}
}
// rl.save_history("history.txt").unwrap();
} else {
if let Some(s) = cli::handle_matches(&wallet, matches).await.unwrap() {
println!("{}", s);
}
}
}

macros/Cargo.toml (new file, +24 lines)

@@ -0,0 +1,24 @@
[package]
name = "bdk-macros"
version = "0.6.0"
authors = ["Alekos Filini <alekos.filini@gmail.com>"]
edition = "2018"
homepage = "https://bitcoindevkit.org"
repository = "https://github.com/bitcoindevkit/bdk"
documentation = "https://docs.rs/bdk-macros"
description = "Supporting macros for `bdk`"
keywords = ["bdk"]
license = "MIT OR Apache-2.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
syn = { version = "1.0", features = ["parsing", "full"] }
proc-macro2 = "1.0"
quote = "1.0"
[features]
debug = ["syn/extra-traits"]
[lib]
proc-macro = true

macros/src/lib.rs (new file, +146 lines)

@@ -0,0 +1,146 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
#[macro_use]
extern crate quote;
use proc_macro::TokenStream;
use syn::spanned::Spanned;
use syn::{parse, ImplItemMethod, ItemImpl, ItemTrait, Token};
fn add_async_trait(mut parsed: ItemTrait) -> TokenStream {
let output = quote! {
#[cfg(all(not(target_arch = "wasm32"), not(feature = "async-interface")))]
#parsed
};
for mut item in &mut parsed.items {
if let syn::TraitItem::Method(m) = &mut item {
m.sig.asyncness = Some(Token![async](m.span()));
}
}
let output = quote! {
#output
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
#[async_trait(?Send)]
#parsed
};
output.into()
}
fn add_async_method(mut parsed: ImplItemMethod) -> TokenStream {
let output = quote! {
#[cfg(all(not(target_arch = "wasm32"), not(feature = "async-interface")))]
#parsed
};
parsed.sig.asyncness = Some(Token![async](parsed.span()));
let output = quote! {
#output
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
#parsed
};
output.into()
}
fn add_async_impl_trait(mut parsed: ItemImpl) -> TokenStream {
let output = quote! {
#[cfg(all(not(target_arch = "wasm32"), not(feature = "async-interface")))]
#parsed
};
for mut item in &mut parsed.items {
if let syn::ImplItem::Method(m) = &mut item {
m.sig.asyncness = Some(Token![async](m.span()));
}
}
let output = quote! {
#output
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
#[async_trait(?Send)]
#parsed
};
output.into()
}
/// Makes a method, or every method of a trait or trait impl block, "async" only if the
/// target_arch is "wasm32" or the "async-interface" feature is enabled
///
/// Requires the `async-trait` crate as a dependency whenever this attribute is used on a trait
/// definition or trait implementation.
#[proc_macro_attribute]
pub fn maybe_async(_attr: TokenStream, item: TokenStream) -> TokenStream {
if let Ok(parsed) = parse(item.clone()) {
add_async_trait(parsed)
} else if let Ok(parsed) = parse(item.clone()) {
add_async_method(parsed)
} else if let Ok(parsed) = parse(item) {
add_async_impl_trait(parsed)
} else {
(quote! {
compile_error!("#[maybe_async] can only be used on methods, trait or trait impl blocks")
})
.into()
}
}
/// Awaits the expression if target_arch is "wasm32" or the "async-interface" feature is
/// enabled, evaluates it as-is otherwise
#[proc_macro]
pub fn maybe_await(expr: TokenStream) -> TokenStream {
let expr: proc_macro2::TokenStream = expr.into();
let quoted = quote! {
{
#[cfg(all(not(target_arch = "wasm32"), not(feature = "async-interface")))]
{
#expr
}
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
{
#expr.await
}
}
};
quoted.into()
}
/// Awaits if target_arch is "wasm32" or the "async-interface" feature is enabled, uses
/// `tokio::runtime::Runtime::block_on()` otherwise
///
/// Requires the `tokio` crate as a dependency with the `rt` feature enabled to build on non-wasm32 platforms.
#[proc_macro]
pub fn await_or_block(expr: TokenStream) -> TokenStream {
let expr: proc_macro2::TokenStream = expr.into();
let quoted = quote! {
{
#[cfg(all(not(target_arch = "wasm32"), not(feature = "async-interface")))]
{
tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap().block_on(#expr)
}
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
{
#expr.await
}
}
};
quoted.into()
}
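The cfg-gating these macros emit can be sketched with an ordinary `macro_rules!` macro. This is a minimal, self-contained illustration (`FixedHeight` and `maybe_await_sketch` are made-up names; for brevity it only distinguishes `wasm32`, while the real macros above also gate on the `async-interface` feature):

```rust
// Sketch of the sync/async duplication that #[maybe_async] and maybe_await!
// automate: the same call site compiles to a plain call on native targets
// and to `.await` on wasm32.
struct FixedHeight(u32);

impl FixedHeight {
    #[cfg(not(target_arch = "wasm32"))]
    fn get_height(&self) -> u32 {
        self.0
    }
    #[cfg(target_arch = "wasm32")]
    async fn get_height(&self) -> u32 {
        self.0
    }
}

// Hand-rolled equivalent of maybe_await!: exactly one branch survives cfg
// evaluation and becomes the tail expression of the outer block.
macro_rules! maybe_await_sketch {
    ($e:expr) => {{
        #[cfg(not(target_arch = "wasm32"))]
        {
            $e
        }
        #[cfg(target_arch = "wasm32")]
        {
            $e.await
        }
    }};
}

fn main() {
    let height = maybe_await_sketch!(FixedHeight(714_000).get_height());
    assert_eq!(height, 714_000);
    println!("height = {}", height);
}
```

Because the discarded branch is removed before type checking, callers never pay for an async runtime on targets where the blocking branch is selected.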

src/blockchain/any.rs Normal file
@@ -0,0 +1,253 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Runtime-checked blockchain types
//!
//! This module provides the implementation of [`AnyBlockchain`] which allows switching the
//! inner [`Blockchain`] type at runtime.
//!
//! ## Example
//!
//! In this example both `wallet_electrum` and `wallet_esplora` have the same type of
//! `Wallet<AnyBlockchain, MemoryDatabase>`. This means that they could both, for instance, be
//! assigned to a struct member.
//!
//! ```no_run
//! # use bitcoin::Network;
//! # use bdk::blockchain::*;
//! # use bdk::database::MemoryDatabase;
//! # use bdk::Wallet;
//! # #[cfg(feature = "electrum")]
//! # {
//! let electrum_blockchain = ElectrumBlockchain::from(electrum_client::Client::new("...")?);
//! let wallet_electrum: Wallet<AnyBlockchain, _> = Wallet::new(
//! "...",
//! None,
//! Network::Testnet,
//! MemoryDatabase::default(),
//! electrum_blockchain.into(),
//! )?;
//! # }
//!
//! # #[cfg(all(feature = "esplora", feature = "ureq"))]
//! # {
//! let esplora_blockchain = EsploraBlockchain::new("...", 20);
//! let wallet_esplora: Wallet<AnyBlockchain, _> = Wallet::new(
//! "...",
//! None,
//! Network::Testnet,
//! MemoryDatabase::default(),
//! esplora_blockchain.into(),
//! )?;
//! # }
//!
//! # Ok::<(), bdk::Error>(())
//! ```
//!
//! When paired with the use of [`ConfigurableBlockchain`], it allows creating wallets with any
//! blockchain type supported using a single line of code:
//!
//! ```no_run
//! # use bitcoin::Network;
//! # use bdk::blockchain::*;
//! # use bdk::database::MemoryDatabase;
//! # use bdk::Wallet;
//! # #[cfg(all(feature = "esplora", feature = "ureq"))]
//! # {
//! let config = serde_json::from_str("...")?;
//! let blockchain = AnyBlockchain::from_config(&config)?;
//! let wallet = Wallet::new(
//! "...",
//! None,
//! Network::Testnet,
//! MemoryDatabase::default(),
//! blockchain,
//! )?;
//! # }
//! # Ok::<(), bdk::Error>(())
//! ```
use super::*;
macro_rules! impl_from {
( $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
$( $cfg )*
impl From<$from> for $to {
fn from(inner: $from) -> Self {
<$to>::$variant(inner)
}
}
};
}
macro_rules! impl_inner_method {
( $self:expr, $name:ident $(, $args:expr)* ) => {
match $self {
#[cfg(feature = "electrum")]
AnyBlockchain::Electrum(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "esplora")]
AnyBlockchain::Esplora(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "compact_filters")]
AnyBlockchain::CompactFilters(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "rpc")]
AnyBlockchain::Rpc(inner) => inner.$name( $($args, )* ),
}
}
}
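Concretely, the two helper macros generate plain enum-dispatch boilerplate. A minimal sketch of the expansion, with made-up `AnyBackend`/`u32` stand-ins for the variants and minus the per-variant feature gates:

```rust
// What impl_from! and impl_inner_method! expand to, in miniature.
enum AnyBackend {
    Electrum(u32), // each variant wraps its inner backend client
    #[allow(dead_code)]
    Esplora(u32),
}

// impl_from!: lets `inner.into()` wrap a client into the enum.
impl From<u32> for AnyBackend {
    fn from(inner: u32) -> Self {
        AnyBackend::Electrum(inner)
    }
}

impl AnyBackend {
    // impl_inner_method!(self, get_height): forward the call to whichever
    // variant is currently active.
    fn get_height(&self) -> u32 {
        match self {
            AnyBackend::Electrum(inner) => *inner,
            AnyBackend::Esplora(inner) => *inner,
        }
    }
}

fn main() {
    let backend: AnyBackend = 650_000u32.into();
    assert_eq!(backend.get_height(), 650_000);
}
```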
/// Type that can contain any of the [`Blockchain`] types defined by the library
///
/// It allows switching backend at runtime
///
/// See [this module](crate::blockchain::any)'s documentation for a usage example.
pub enum AnyBlockchain {
#[cfg(feature = "electrum")]
#[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
/// Electrum client
Electrum(electrum::ElectrumBlockchain),
#[cfg(feature = "esplora")]
#[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
/// Esplora client
Esplora(esplora::EsploraBlockchain),
#[cfg(feature = "compact_filters")]
#[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
/// Compact filters client
CompactFilters(compact_filters::CompactFiltersBlockchain),
#[cfg(feature = "rpc")]
#[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
/// RPC client
Rpc(rpc::RpcBlockchain),
}
#[maybe_async]
impl Blockchain for AnyBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
maybe_await!(impl_inner_method!(self, get_capabilities))
}
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
maybe_await!(impl_inner_method!(self, setup, database, progress_update))
}
fn sync<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
maybe_await!(impl_inner_method!(self, sync, database, progress_update))
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
maybe_await!(impl_inner_method!(self, get_tx, txid))
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
maybe_await!(impl_inner_method!(self, broadcast, tx))
}
fn get_height(&self) -> Result<u32, Error> {
maybe_await!(impl_inner_method!(self, get_height))
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
maybe_await!(impl_inner_method!(self, estimate_fee, target))
}
}
impl_from!(electrum::ElectrumBlockchain, AnyBlockchain, Electrum, #[cfg(feature = "electrum")]);
impl_from!(esplora::EsploraBlockchain, AnyBlockchain, Esplora, #[cfg(feature = "esplora")]);
impl_from!(compact_filters::CompactFiltersBlockchain, AnyBlockchain, CompactFilters, #[cfg(feature = "compact_filters")]);
impl_from!(rpc::RpcBlockchain, AnyBlockchain, Rpc, #[cfg(feature = "rpc")]);
/// Type that can contain any of the blockchain configurations defined by the library
///
/// This allows storing a single configuration that can be loaded into an [`AnyBlockchain`]
/// instance. Wallets that plan to offer users the ability to switch blockchain backend at runtime
/// will find this particularly useful.
///
/// This type can be serialized from a JSON object like:
///
/// ```
/// # #[cfg(feature = "electrum")]
/// # {
/// use bdk::blockchain::{electrum::ElectrumBlockchainConfig, AnyBlockchainConfig};
/// let config: AnyBlockchainConfig = serde_json::from_str(
/// r#"{
/// "type" : "electrum",
/// "url" : "ssl://electrum.blockstream.info:50002",
/// "retry": 2,
/// "stop_gap": 20
/// }"#,
/// )
/// .unwrap();
/// assert_eq!(
/// config,
/// AnyBlockchainConfig::Electrum(ElectrumBlockchainConfig {
/// url: "ssl://electrum.blockstream.info:50002".into(),
/// retry: 2,
/// socks5: None,
/// timeout: None,
/// stop_gap: 20,
/// })
/// );
/// # }
/// ```
#[derive(Debug, serde::Serialize, serde::Deserialize, Clone, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum AnyBlockchainConfig {
#[cfg(feature = "electrum")]
#[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
/// Electrum client
Electrum(electrum::ElectrumBlockchainConfig),
#[cfg(feature = "esplora")]
#[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
/// Esplora client
Esplora(esplora::EsploraBlockchainConfig),
#[cfg(feature = "compact_filters")]
#[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
/// Compact filters client
CompactFilters(compact_filters::CompactFiltersBlockchainConfig),
#[cfg(feature = "rpc")]
#[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
/// RPC client configuration
Rpc(rpc::RpcConfig),
}
impl ConfigurableBlockchain for AnyBlockchain {
type Config = AnyBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
Ok(match config {
#[cfg(feature = "electrum")]
AnyBlockchainConfig::Electrum(inner) => {
AnyBlockchain::Electrum(electrum::ElectrumBlockchain::from_config(inner)?)
}
#[cfg(feature = "esplora")]
AnyBlockchainConfig::Esplora(inner) => {
AnyBlockchain::Esplora(esplora::EsploraBlockchain::from_config(inner)?)
}
#[cfg(feature = "compact_filters")]
AnyBlockchainConfig::CompactFilters(inner) => AnyBlockchain::CompactFilters(
compact_filters::CompactFiltersBlockchain::from_config(inner)?,
),
#[cfg(feature = "rpc")]
AnyBlockchainConfig::Rpc(inner) => {
AnyBlockchain::Rpc(rpc::RpcBlockchain::from_config(inner)?)
}
})
}
}
impl_from!(electrum::ElectrumBlockchainConfig, AnyBlockchainConfig, Electrum, #[cfg(feature = "electrum")]);
impl_from!(esplora::EsploraBlockchainConfig, AnyBlockchainConfig, Esplora, #[cfg(feature = "esplora")]);
impl_from!(compact_filters::CompactFiltersBlockchainConfig, AnyBlockchainConfig, CompactFilters, #[cfg(feature = "compact_filters")]);
impl_from!(rpc::RpcConfig, AnyBlockchainConfig, Rpc, #[cfg(feature = "rpc")]);

@@ -0,0 +1,568 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Compact Filters
//!
//! This module contains a multithreaded implementation of a [`Blockchain`] backend that
//! uses BIP157 (aka "Neutrino") to populate the wallet's [database](crate::database::Database)
//! by downloading compact filters from the P2P network.
//!
//! Since there are currently very few peers "in the wild" that advertise the required service
//! flag, this implementation requires that one or more known peers are provided by the user.
//! No DNS or other kind of peer discovery is done internally.
//!
//! Moreover, this module doesn't currently support detecting and resolving conflicts between
//! messages received by different peers. Thus, it's recommended to use this module by only
//! connecting to a single peer at a time, optionally by opening multiple connections if it's
//! desirable to use multiple threads at once to sync in parallel.
//!
//! This is an **EXPERIMENTAL** feature, API and other major changes are expected.
//!
//! ## Example
//!
//! ```no_run
//! # use std::sync::Arc;
//! # use bitcoin::*;
//! # use bdk::*;
//! # use bdk::blockchain::compact_filters::*;
//! let num_threads = 4;
//!
//! let mempool = Arc::new(Mempool::default());
//! let peers = (0..num_threads)
//! .map(|_| {
//! Peer::connect(
//! "btcd-mainnet.lightning.computer:8333",
//! Arc::clone(&mempool),
//! Network::Bitcoin,
//! )
//! })
//! .collect::<Result<_, _>>()?;
//! let blockchain = CompactFiltersBlockchain::new(peers, "./wallet-filters", Some(500_000))?;
//! # Ok::<(), CompactFiltersError>(())
//! ```
use std::collections::HashSet;
use std::fmt;
use std::path::Path;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use bitcoin::network::message_blockdata::Inventory;
use bitcoin::{Network, OutPoint, Transaction, Txid};
use rocksdb::{Options, SliceTransform, DB};
mod peer;
mod store;
mod sync;
use super::{Blockchain, Capability, ConfigurableBlockchain, Progress};
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{KeychainKind, LocalUtxo, TransactionDetails};
use crate::{BlockTime, FeeRate};
use peer::*;
use store::*;
use sync::*;
pub use peer::{Mempool, Peer};
const SYNC_HEADERS_COST: f32 = 1.0;
const SYNC_FILTERS_COST: f32 = 11.6 * 1_000.0;
const PROCESS_BLOCKS_COST: f32 = 20_000.0;
/// Structure implementing the required blockchain traits
///
/// ## Example
/// See the [`blockchain::compact_filters`](crate::blockchain::compact_filters) module for a usage example.
#[derive(Debug)]
pub struct CompactFiltersBlockchain {
peers: Vec<Arc<Peer>>,
headers: Arc<ChainStore<Full>>,
skip_blocks: Option<usize>,
}
impl CompactFiltersBlockchain {
/// Construct a new instance given a list of peers, a path to store headers and block
/// filters downloaded during the sync and optionally a number of blocks to ignore starting
/// from the genesis while scanning for the wallet's outputs.
///
/// For each [`Peer`] specified a new thread will be spawned to download and verify the filters
/// in parallel. It's currently recommended to only connect to a single peer to avoid
/// inconsistencies in the data returned, optionally with multiple connections in parallel to
/// speed-up the sync process.
pub fn new<P: AsRef<Path>>(
peers: Vec<Peer>,
storage_dir: P,
skip_blocks: Option<usize>,
) -> Result<Self, CompactFiltersError> {
if peers.is_empty() {
return Err(CompactFiltersError::NoPeers);
}
let mut opts = Options::default();
opts.create_if_missing(true);
opts.set_prefix_extractor(SliceTransform::create_fixed_prefix(16));
let network = peers[0].get_network();
let cfs = DB::list_cf(&opts, &storage_dir).unwrap_or_else(|_| vec!["default".to_string()]);
let db = DB::open_cf(&opts, &storage_dir, &cfs)?;
let headers = Arc::new(ChainStore::new(db, network)?);
// try to recover partial snapshots
for cf_name in &cfs {
if !cf_name.starts_with("_headers:") {
continue;
}
info!("Trying to recover: {:?}", cf_name);
headers.recover_snapshot(cf_name)?;
}
Ok(CompactFiltersBlockchain {
peers: peers.into_iter().map(Arc::new).collect(),
headers,
skip_blocks,
})
}
/// Process a transaction by looking for inputs that spend from a UTXO in the database or
/// outputs that send funds to a known script_pubkey.
fn process_tx<D: BatchDatabase>(
&self,
database: &mut D,
tx: &Transaction,
height: Option<u32>,
timestamp: Option<u64>,
internal_max_deriv: &mut Option<u32>,
external_max_deriv: &mut Option<u32>,
) -> Result<(), Error> {
let mut updates = database.begin_batch();
let mut incoming: u64 = 0;
let mut outgoing: u64 = 0;
let mut inputs_sum: u64 = 0;
let mut outputs_sum: u64 = 0;
// look for our own inputs
for (i, input) in tx.input.iter().enumerate() {
if let Some(previous_output) = database.get_previous_output(&input.previous_output)? {
inputs_sum += previous_output.value;
if database.is_mine(&previous_output.script_pubkey)? {
outgoing += previous_output.value;
debug!("{} input #{} is mine, removing from utxo", tx.txid(), i);
updates.del_utxo(&input.previous_output)?;
}
}
}
for (i, output) in tx.output.iter().enumerate() {
// to compute the fees later
outputs_sum += output.value;
// this output is ours, we have a path to derive it
if let Some((keychain, child)) =
database.get_path_from_script_pubkey(&output.script_pubkey)?
{
debug!("{} output #{} is mine, adding utxo", tx.txid(), i);
updates.set_utxo(&LocalUtxo {
outpoint: OutPoint::new(tx.txid(), i as u32),
txout: output.clone(),
keychain,
})?;
incoming += output.value;
if keychain == KeychainKind::Internal
&& (internal_max_deriv.is_none() || child > internal_max_deriv.unwrap_or(0))
{
*internal_max_deriv = Some(child);
} else if keychain == KeychainKind::External
&& (external_max_deriv.is_none() || child > external_max_deriv.unwrap_or(0))
{
*external_max_deriv = Some(child);
}
}
}
if incoming > 0 || outgoing > 0 {
let tx = TransactionDetails {
txid: tx.txid(),
transaction: Some(tx.clone()),
received: incoming,
sent: outgoing,
confirmation_time: BlockTime::new(height, timestamp),
verified: height.is_some(),
fee: Some(inputs_sum.saturating_sub(outputs_sum)),
};
info!("Saving tx {}", tx.txid);
updates.set_tx(&tx)?;
}
database.commit_batch(updates)?;
Ok(())
}
}
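The fee recorded by `process_tx` is simply inputs minus outputs, computed with saturating arithmetic so that inputs whose previous outputs are missing from the database (and therefore counted as 0) cannot cause an underflow. A standalone sketch of that arithmetic:

```rust
// fee = sum(inputs) - sum(outputs), saturating at zero when the input side
// is only partially known to the local database.
fn fee(inputs_sum: u64, outputs_sum: u64) -> u64 {
    inputs_sum.saturating_sub(outputs_sum)
}

fn main() {
    assert_eq!(fee(100_000, 99_500), 500); // 500 sats paid in fees
    assert_eq!(fee(0, 99_500), 0); // unknown inputs saturate to 0 instead of wrapping
}
```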
impl Blockchain for CompactFiltersBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
vec![Capability::FullHistory].into_iter().collect()
}
#[allow(clippy::mutex_atomic)] // Mutex is easier to understand than a CAS loop.
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
let first_peer = &self.peers[0];
let skip_blocks = self.skip_blocks.unwrap_or(0);
let cf_sync = Arc::new(CfSync::new(Arc::clone(&self.headers), skip_blocks, 0x00)?);
let initial_height = self.headers.get_height()?;
let total_bundles = (first_peer.get_version().start_height as usize)
.checked_sub(skip_blocks)
.map(|x| x / 1000)
.unwrap_or(0)
+ 1;
let expected_bundles_to_sync = total_bundles.saturating_sub(cf_sync.pruned_bundles()?);
let headers_cost = (first_peer.get_version().start_height as usize)
.saturating_sub(initial_height) as f32
* SYNC_HEADERS_COST;
let filters_cost = expected_bundles_to_sync as f32 * SYNC_FILTERS_COST;
let total_cost = headers_cost + filters_cost + PROCESS_BLOCKS_COST;
if let Some(snapshot) = sync::sync_headers(
Arc::clone(first_peer),
Arc::clone(&self.headers),
|new_height| {
let local_headers_cost =
new_height.saturating_sub(initial_height) as f32 * SYNC_HEADERS_COST;
progress_update.update(
local_headers_cost / total_cost * 100.0,
Some(format!("Synced headers to {}", new_height)),
)
},
)? {
if snapshot.work()? > self.headers.work()? {
info!("Applying snapshot with work: {}", snapshot.work()?);
self.headers.apply_snapshot(snapshot)?;
}
}
let synced_height = self.headers.get_height()?;
let buried_height = synced_height.saturating_sub(sync::BURIED_CONFIRMATIONS);
info!("Synced headers to height: {}", synced_height);
cf_sync.prepare_sync(Arc::clone(first_peer))?;
let all_scripts = Arc::new(
database
.iter_script_pubkeys(None)?
.into_iter()
.map(|s| s.to_bytes())
.collect::<Vec<_>>(),
);
#[allow(clippy::mutex_atomic)]
let last_synced_block = Arc::new(Mutex::new(synced_height));
let synced_bundles = Arc::new(AtomicUsize::new(0));
let progress_update = Arc::new(Mutex::new(progress_update));
let mut threads = Vec::with_capacity(self.peers.len());
for peer in &self.peers {
let cf_sync = Arc::clone(&cf_sync);
let peer = Arc::clone(peer);
let headers = Arc::clone(&self.headers);
let all_scripts = Arc::clone(&all_scripts);
let last_synced_block = Arc::clone(&last_synced_block);
let progress_update = Arc::clone(&progress_update);
let synced_bundles = Arc::clone(&synced_bundles);
let thread = std::thread::spawn(move || {
cf_sync.capture_thread_for_sync(
peer,
|block_hash, filter| {
if !filter
.match_any(block_hash, &mut all_scripts.iter().map(AsRef::as_ref))?
{
return Ok(false);
}
let block_height = headers.get_height_for(block_hash)?.unwrap_or(0);
let saved_correct_block = matches!(headers.get_full_block(block_height)?, Some(block) if &block.block_hash() == block_hash);
if saved_correct_block {
Ok(false)
} else {
let mut last_synced_block = last_synced_block.lock().unwrap();
// If we download a block older than `last_synced_block`, we update it so that
// we know to delete and re-process all txs starting from that height
if block_height < *last_synced_block {
*last_synced_block = block_height;
}
Ok(true)
}
},
|index| {
let synced_bundles = synced_bundles.fetch_add(1, Ordering::SeqCst);
let local_filters_cost = synced_bundles as f32 * SYNC_FILTERS_COST;
progress_update.lock().unwrap().update(
(headers_cost + local_filters_cost) / total_cost * 100.0,
Some(format!(
"Synced filters {} - {}",
index * 1000 + 1,
(index + 1) * 1000
)),
)
},
)
});
threads.push(thread);
}
for t in threads {
t.join().unwrap()?;
}
progress_update.lock().unwrap().update(
(headers_cost + filters_cost) / total_cost * 100.0,
Some("Processing downloaded blocks and mempool".into()),
)?;
// delete all txs newer than last_synced_block
let last_synced_block = *last_synced_block.lock().unwrap();
log::debug!(
"Dropping transactions newer than `last_synced_block` = {}",
last_synced_block
);
let mut updates = database.begin_batch();
for details in database.iter_txs(false)? {
match details.confirmation_time {
Some(c) if (c.height as usize) < last_synced_block => continue,
_ => updates.del_tx(&details.txid, false)?,
};
}
database.commit_batch(updates)?;
match first_peer.ask_for_mempool() {
Err(CompactFiltersError::PeerBloomDisabled) => {
log::warn!("Peer has BLOOM disabled, we can't ask for the mempool")
}
e => e?,
};
let mut internal_max_deriv = None;
let mut external_max_deriv = None;
for (height, block) in self.headers.iter_full_blocks()? {
for tx in &block.txdata {
self.process_tx(
database,
tx,
Some(height as u32),
None,
&mut internal_max_deriv,
&mut external_max_deriv,
)?;
}
}
for tx in first_peer.get_mempool().iter_txs().iter() {
self.process_tx(
database,
tx,
None,
None,
&mut internal_max_deriv,
&mut external_max_deriv,
)?;
}
let current_ext = database
.get_last_index(KeychainKind::External)?
.unwrap_or(0);
let first_ext_new = external_max_deriv.map(|x| x + 1).unwrap_or(0);
if first_ext_new > current_ext {
info!("Setting external index to {}", first_ext_new);
database.set_last_index(KeychainKind::External, first_ext_new)?;
}
let current_int = database
.get_last_index(KeychainKind::Internal)?
.unwrap_or(0);
let first_int_new = internal_max_deriv.map(|x| x + 1).unwrap_or(0);
if first_int_new > current_int {
info!("Setting internal index to {}", first_int_new);
database.set_last_index(KeychainKind::Internal, first_int_new)?;
}
info!("Dropping blocks until {}", buried_height);
self.headers.delete_blocks_until(buried_height)?;
progress_update
.lock()
.unwrap()
.update(100.0, Some("Done".into()))?;
Ok(())
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(self.peers[0]
.get_mempool()
.get_tx(&Inventory::Transaction(*txid)))
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
self.peers[0].broadcast_tx(tx.clone())?;
Ok(())
}
fn get_height(&self) -> Result<u32, Error> {
Ok(self.headers.get_height()? as u32)
}
fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
// TODO
Ok(FeeRate::default())
}
}
/// Data to connect to a Bitcoin P2P peer
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct BitcoinPeerConfig {
/// Peer address such as 127.0.0.1:18333
pub address: String,
/// Optional socks5 proxy
pub socks5: Option<String>,
/// Optional socks5 proxy credentials
pub socks5_credentials: Option<(String, String)>,
}
/// Configuration for a [`CompactFiltersBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct CompactFiltersBlockchainConfig {
/// List of peers to try to connect to for asking headers and filters
pub peers: Vec<BitcoinPeerConfig>,
/// Network used
pub network: Network,
/// Storage dir to save partially downloaded headers and full blocks. Should be a separate directory per descriptor. Consider using [crate::wallet::wallet_name_from_descriptor] for this.
pub storage_dir: String,
/// Optionally skip initial `skip_blocks` blocks (default: 0)
pub skip_blocks: Option<usize>,
}
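Mirroring the serde derives on the two structs above, a JSON value that should deserialize into this config might look like the following (all values are illustrative; the string form of `network` is assumed from rust-bitcoin's serde implementation):

```json
{
    "peers": [
        {
            "address": "127.0.0.1:18333",
            "socks5": null,
            "socks5_credentials": null
        }
    ],
    "network": "testnet",
    "storage_dir": "./wallet-filters",
    "skip_blocks": null
}
```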
impl ConfigurableBlockchain for CompactFiltersBlockchain {
type Config = CompactFiltersBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let mempool = Arc::new(Mempool::default());
let peers = config
.peers
.iter()
.map(|peer_conf| match &peer_conf.socks5 {
None => Peer::connect(&peer_conf.address, Arc::clone(&mempool), config.network),
Some(proxy) => Peer::connect_proxy(
peer_conf.address.as_str(),
proxy,
peer_conf
.socks5_credentials
.as_ref()
.map(|(a, b)| (a.as_str(), b.as_str())),
Arc::clone(&mempool),
config.network,
),
})
.collect::<Result<_, _>>()?;
Ok(CompactFiltersBlockchain::new(
peers,
&config.storage_dir,
config.skip_blocks,
)?)
}
}
/// An error that can occur during sync with a [`CompactFiltersBlockchain`]
#[derive(Debug)]
pub enum CompactFiltersError {
/// A peer sent an invalid or unexpected response
InvalidResponse,
/// The headers returned are invalid
InvalidHeaders,
/// The compact filter headers returned are invalid
InvalidFilterHeader,
/// The compact filter returned is invalid
InvalidFilter,
/// The peer is missing a block in the valid chain
MissingBlock,
/// The data stored in the block filters storage are corrupted
DataCorruption,
/// A peer is not connected
NotConnected,
/// A peer took too long to reply to one of our messages
Timeout,
/// The peer doesn't advertise the [`BLOOM`](bitcoin::network::constants::ServiceFlags::BLOOM) service flag
PeerBloomDisabled,
/// No peers have been specified
NoPeers,
/// Internal database error
Db(rocksdb::Error),
/// Internal I/O error
Io(std::io::Error),
/// Invalid BIP158 filter
Bip158(bitcoin::util::bip158::Error),
/// Internal system time error
Time(std::time::SystemTimeError),
/// Wrapper for [`crate::error::Error`]
Global(Box<crate::error::Error>),
}
impl fmt::Display for CompactFiltersError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for CompactFiltersError {}
impl_error!(rocksdb::Error, Db, CompactFiltersError);
impl_error!(std::io::Error, Io, CompactFiltersError);
impl_error!(bitcoin::util::bip158::Error, Bip158, CompactFiltersError);
impl_error!(std::time::SystemTimeError, Time, CompactFiltersError);
impl From<crate::error::Error> for CompactFiltersError {
fn from(err: crate::error::Error) -> Self {
CompactFiltersError::Global(Box::new(err))
}
}

@@ -0,0 +1,572 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::collections::HashMap;
use std::net::{TcpStream, ToSocketAddrs};
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use socks::{Socks5Stream, ToTargetAddr};
use rand::{thread_rng, Rng};
use bitcoin::consensus::Encodable;
use bitcoin::hash_types::BlockHash;
use bitcoin::network::constants::ServiceFlags;
use bitcoin::network::message::{NetworkMessage, RawNetworkMessage};
use bitcoin::network::message_blockdata::*;
use bitcoin::network::message_filter::*;
use bitcoin::network::message_network::VersionMessage;
use bitcoin::network::stream_reader::StreamReader;
use bitcoin::network::Address;
use bitcoin::{Block, Network, Transaction, Txid, Wtxid};
use super::CompactFiltersError;
type ResponsesMap = HashMap<&'static str, Arc<(Mutex<Vec<NetworkMessage>>, Condvar)>>;
pub(crate) const TIMEOUT_SECS: u64 = 30;
/// Container for unconfirmed, but valid Bitcoin transactions
///
/// It is normally shared between [`Peer`]s with the use of [`Arc`], so that transactions are not
/// duplicated in memory.
#[derive(Debug, Default)]
pub struct Mempool(RwLock<InnerMempool>);
#[derive(Debug, Default)]
struct InnerMempool {
txs: HashMap<Txid, Transaction>,
wtxids: HashMap<Wtxid, Txid>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
enum TxIdentifier {
Wtxid(Wtxid),
Txid(Txid),
}
impl Mempool {
/// Create a new empty mempool
pub fn new() -> Self {
Self::default()
}
/// Add a transaction to the mempool
///
/// Note that this doesn't propagate the transaction to other
/// peers. To do that, [`broadcast`](crate::blockchain::Blockchain::broadcast) should be used.
pub fn add_tx(&self, tx: Transaction) {
let mut guard = self.0.write().unwrap();
guard.wtxids.insert(tx.wtxid(), tx.txid());
guard.txs.insert(tx.txid(), tx);
}
/// Look-up a transaction in the mempool given an [`Inventory`] request
pub fn get_tx(&self, inventory: &Inventory) -> Option<Transaction> {
let identifier = match inventory {
Inventory::Error | Inventory::Block(_) | Inventory::WitnessBlock(_) => return None,
Inventory::Transaction(txid) => TxIdentifier::Txid(*txid),
Inventory::WitnessTransaction(txid) => TxIdentifier::Txid(*txid),
Inventory::WTx(wtxid) => TxIdentifier::Wtxid(*wtxid),
Inventory::Unknown { inv_type, hash } => {
log::warn!(
"Unknown inventory request type `{}`, hash `{:?}`",
inv_type,
hash
);
return None;
}
};
let txid = match identifier {
TxIdentifier::Txid(txid) => Some(txid),
TxIdentifier::Wtxid(wtxid) => self.0.read().unwrap().wtxids.get(&wtxid).cloned(),
};
txid.and_then(|txid| self.0.read().unwrap().txs.get(&txid).cloned())
}
/// Return whether or not the mempool contains a transaction with a given txid
pub fn has_tx(&self, txid: &Txid) -> bool {
self.0.read().unwrap().txs.contains_key(txid)
}
/// Return the list of transactions contained in the mempool
pub fn iter_txs(&self) -> Vec<Transaction> {
self.0.read().unwrap().txs.values().cloned().collect()
}
}
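The interior-mutability pattern used by `Mempool` (an `RwLock` behind a shared `&self`, handed to each peer through an `Arc`) can be sketched with a simplified stand-in that uses `String` keys instead of `Txid`/`Wtxid` (`MiniMempool` is a made-up name for illustration):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Simplified stand-in for Mempool: the RwLock gives interior mutability,
// so peers sharing it through an Arc can insert and look up transactions
// via a shared &self.
#[derive(Default)]
struct MiniMempool(RwLock<HashMap<String, String>>);

impl MiniMempool {
    fn add_tx(&self, txid: &str, raw: &str) {
        self.0.write().unwrap().insert(txid.to_owned(), raw.to_owned());
    }
    fn get_tx(&self, txid: &str) -> Option<String> {
        self.0.read().unwrap().get(txid).cloned()
    }
}

fn main() {
    let mempool = Arc::new(MiniMempool::default());
    let writer = Arc::clone(&mempool);
    // Another thread (e.g. a peer's reader thread) can add transactions
    // while the main thread keeps a handle for lookups.
    std::thread::spawn(move || writer.add_tx("txid-1", "rawtx"))
        .join()
        .unwrap();
    assert_eq!(mempool.get_tx("txid-1").as_deref(), Some("rawtx"));
}
```

Read-heavy access (lookups from many peers) only takes the shared read lock, while the rare insert takes the exclusive write lock.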
/// A Bitcoin peer
#[derive(Debug)]
pub struct Peer {
writer: Arc<Mutex<TcpStream>>,
responses: Arc<RwLock<ResponsesMap>>,
reader_thread: thread::JoinHandle<()>,
connected: Arc<RwLock<bool>>,
mempool: Arc<Mempool>,
version: VersionMessage,
network: Network,
}
impl Peer {
/// Connect to a peer over a plaintext TCP connection
///
/// This function internally spawns a new thread that will monitor incoming messages from the
/// peer, and optionally reply to some of them transparently, like [pings](bitcoin::network::message::NetworkMessage::Ping)
pub fn connect<A: ToSocketAddrs>(
address: A,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let stream = TcpStream::connect(address)?;
Peer::from_stream(stream, mempool, network)
}
/// Connect to a peer through a SOCKS5 proxy, optionally by using some credentials, specified
/// as a tuple of `(username, password)`
///
/// This function internally spawns a new thread that will monitor incoming messages from the
/// peer, and optionally reply to some of them transparently, like [pings](NetworkMessage::Ping)
pub fn connect_proxy<T: ToTargetAddr, P: ToSocketAddrs>(
target: T,
proxy: P,
credentials: Option<(&str, &str)>,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let socks_stream = if let Some((username, password)) = credentials {
Socks5Stream::connect_with_password(proxy, target, username, password)?
} else {
Socks5Stream::connect(proxy, target)?
};
Peer::from_stream(socks_stream.into_inner(), mempool, network)
}
/// Create a [`Peer`] from an already connected TcpStream
fn from_stream(
stream: TcpStream,
mempool: Arc<Mempool>,
network: Network,
) -> Result<Self, CompactFiltersError> {
let writer = Arc::new(Mutex::new(stream.try_clone()?));
let responses: Arc<RwLock<ResponsesMap>> = Arc::new(RwLock::new(HashMap::new()));
let connected = Arc::new(RwLock::new(true));
let mut locked_writer = writer.lock().unwrap();
let reader_thread_responses = Arc::clone(&responses);
let reader_thread_writer = Arc::clone(&writer);
let reader_thread_mempool = Arc::clone(&mempool);
let reader_thread_connected = Arc::clone(&connected);
let reader_thread = thread::spawn(move || {
Self::reader_thread(
network,
stream,
reader_thread_responses,
reader_thread_writer,
reader_thread_mempool,
reader_thread_connected,
)
});
let timestamp = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs() as i64;
let nonce = thread_rng().gen();
let receiver = Address::new(&locked_writer.peer_addr()?, ServiceFlags::NONE);
let sender = Address {
services: ServiceFlags::NONE,
address: [0u16; 8],
port: 0,
};
Self::_send(
&mut locked_writer,
network.magic(),
NetworkMessage::Version(VersionMessage::new(
ServiceFlags::WITNESS,
timestamp,
receiver,
sender,
nonce,
"MagicalBitcoinWallet".into(),
0,
)),
)?;
let version = if let NetworkMessage::Version(version) =
Self::_recv(&responses, "version", None).unwrap()
{
version
} else {
return Err(CompactFiltersError::InvalidResponse);
};
if let NetworkMessage::Verack = Self::_recv(&responses, "verack", None).unwrap() {
Self::_send(&mut locked_writer, network.magic(), NetworkMessage::Verack)?;
} else {
return Err(CompactFiltersError::InvalidResponse);
}
std::mem::drop(locked_writer);
Ok(Peer {
writer,
responses,
reader_thread,
connected,
mempool,
version,
network,
})
}
/// Send a Bitcoin network message
fn _send(
writer: &mut TcpStream,
magic: u32,
payload: NetworkMessage,
) -> Result<(), CompactFiltersError> {
log::trace!("==> {:?}", payload);
let raw_message = RawNetworkMessage { magic, payload };
raw_message
.consensus_encode(writer)
.map_err(|_| CompactFiltersError::DataCorruption)?;
Ok(())
}
/// Wait for a specific incoming Bitcoin message, optionally with a timeout
fn _recv(
responses: &Arc<RwLock<ResponsesMap>>,
wait_for: &'static str,
timeout: Option<Duration>,
) -> Option<NetworkMessage> {
let message_resp = {
let mut lock = responses.write().unwrap();
let message_resp = lock.entry(wait_for).or_default();
Arc::clone(message_resp)
};
let (lock, cvar) = &*message_resp;
let mut messages = lock.lock().unwrap();
while messages.is_empty() {
match timeout {
None => messages = cvar.wait(messages).unwrap(),
Some(t) => {
let result = cvar.wait_timeout(messages, t).unwrap();
if result.1.timed_out() {
return None;
}
messages = result.0;
}
}
}
messages.pop()
}
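The `_recv` helper above is a standard `Mutex<Vec<_>>` plus `Condvar` handoff: the reader thread pushes an incoming message and notifies, while a caller loops on the condvar (guarding against spurious wakeups) until a message arrives or the optional timeout fires. A minimal std-only sketch of the same pattern, with `String` standing in for `NetworkMessage`:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Wait for a message in the shared queue, optionally with a timeout.
fn recv(pair: &Arc<(Mutex<Vec<String>>, Condvar)>, timeout: Option<Duration>) -> Option<String> {
    let (lock, cvar) = &**pair;
    let mut messages = lock.lock().unwrap();
    // Loop: condvar wakeups can be spurious, so re-check the queue each time.
    while messages.is_empty() {
        match timeout {
            None => messages = cvar.wait(messages).unwrap(),
            Some(t) => {
                let (guard, result) = cvar.wait_timeout(messages, t).unwrap();
                if result.timed_out() {
                    return None;
                }
                messages = guard;
            }
        }
    }
    messages.pop()
}

fn main() {
    let pair = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    let producer = Arc::clone(&pair);
    // Producer thread: pushes a message and wakes any waiter.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        producer.0.lock().unwrap().push("verack".to_string());
        producer.1.notify_all();
    });
    let msg = recv(&pair, Some(Duration::from_secs(5)));
    assert_eq!(msg, Some("verack".to_string()));
    println!("{:?}", msg);
}
```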
/// Return the [`VersionMessage`] sent by the peer
pub fn get_version(&self) -> &VersionMessage {
&self.version
}
/// Return the Bitcoin [`Network`] in use
pub fn get_network(&self) -> Network {
self.network
}
/// Return the mempool used by this peer
pub fn get_mempool(&self) -> Arc<Mempool> {
Arc::clone(&self.mempool)
}
/// Return whether or not the peer is still connected
pub fn is_connected(&self) -> bool {
*self.connected.read().unwrap()
}
/// Internal function run by the spawned `reader_thread`
fn reader_thread(
network: Network,
connection: TcpStream,
reader_thread_responses: Arc<RwLock<ResponsesMap>>,
reader_thread_writer: Arc<Mutex<TcpStream>>,
reader_thread_mempool: Arc<Mempool>,
reader_thread_connected: Arc<RwLock<bool>>,
) {
macro_rules! check_disconnect {
($call:expr) => {
match $call {
Ok(good) => good,
Err(e) => {
log::debug!("Error {:?}", e);
*reader_thread_connected.write().unwrap() = false;
break;
}
}
};
}
let mut reader = StreamReader::new(connection, None);
loop {
let raw_message: RawNetworkMessage = check_disconnect!(reader.read_next());
let in_message = if raw_message.magic != network.magic() {
continue;
} else {
raw_message.payload
};
log::trace!("<== {:?}", in_message);
match in_message {
NetworkMessage::Ping(nonce) => {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::Pong(nonce),
));
continue;
}
NetworkMessage::Alert(_) => continue,
NetworkMessage::GetData(ref inv) => {
let (found, not_found): (Vec<_>, Vec<_>) = inv
.iter()
.map(|item| (*item, reader_thread_mempool.get_tx(item)))
.partition(|(_, d)| d.is_some());
for (_, found_tx) in found {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::Tx(found_tx.unwrap()),
));
}
if !not_found.is_empty() {
check_disconnect!(Self::_send(
&mut reader_thread_writer.lock().unwrap(),
network.magic(),
NetworkMessage::NotFound(
not_found.into_iter().map(|(i, _)| i).collect(),
),
));
}
}
_ => {}
}
let message_resp = {
let mut lock = reader_thread_responses.write().unwrap();
let message_resp = lock.entry(in_message.cmd()).or_default();
Arc::clone(message_resp)
};
let (lock, cvar) = &*message_resp;
let mut messages = lock.lock().unwrap();
messages.push(in_message);
cvar.notify_all();
}
}
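The `check_disconnect!` macro above turns any `Err` inside the reader loop into a clean shutdown: it flags the shared `connected` boolean and `break`s instead of propagating the error out of the thread. A self-contained sketch of that control flow, with a `Vec` of `Result`s standing in for the network stream:

```rust
use std::sync::{Arc, RwLock};

// Consume results until the first error; on error, flag disconnection and stop.
fn drain(results: Vec<Result<u32, &'static str>>, connected: &Arc<RwLock<bool>>) -> Vec<u32> {
    macro_rules! check_disconnect {
        ($call:expr) => {
            match $call {
                Ok(good) => good,
                Err(e) => {
                    eprintln!("Error {:?}", e);
                    *connected.write().unwrap() = false;
                    break;
                }
            }
        };
    }

    let mut seen = Vec::new();
    let mut iter = results.into_iter();
    loop {
        let next = match iter.next() {
            Some(r) => r,
            None => break,
        };
        // `break` inside the macro exits the loop on the first Err.
        let value = check_disconnect!(next);
        seen.push(value);
    }
    seen
}

fn main() {
    let connected = Arc::new(RwLock::new(true));
    let seen = drain(vec![Ok(1), Ok(2), Err("broken pipe"), Ok(3)], &connected);
    assert_eq!(seen, vec![1, 2]);
    assert!(!*connected.read().unwrap());
    println!("seen={:?}", seen);
}
```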
/// Send a raw Bitcoin message to the peer
pub fn send(&self, payload: NetworkMessage) -> Result<(), CompactFiltersError> {
let mut writer = self.writer.lock().unwrap();
Self::_send(&mut writer, self.network.magic(), payload)
}
/// Waits for a specific incoming Bitcoin message, optionally with a timeout
pub fn recv(
&self,
wait_for: &'static str,
timeout: Option<Duration>,
) -> Result<Option<NetworkMessage>, CompactFiltersError> {
Ok(Self::_recv(&self.responses, wait_for, timeout))
}
}
pub trait CompactFiltersPeer {
fn get_cf_checkpt(
&self,
filter_type: u8,
stop_hash: BlockHash,
) -> Result<CFCheckpt, CompactFiltersError>;
fn get_cf_headers(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<CFHeaders, CompactFiltersError>;
fn get_cf_filters(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<(), CompactFiltersError>;
fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError>;
}
impl CompactFiltersPeer for Peer {
fn get_cf_checkpt(
&self,
filter_type: u8,
stop_hash: BlockHash,
) -> Result<CFCheckpt, CompactFiltersError> {
self.send(NetworkMessage::GetCFCheckpt(GetCFCheckpt {
filter_type,
stop_hash,
}))?;
let response = self
.recv("cfcheckpt", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFCheckpt(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
if response.filter_type != filter_type {
return Err(CompactFiltersError::InvalidResponse);
}
Ok(response)
}
fn get_cf_headers(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<CFHeaders, CompactFiltersError> {
self.send(NetworkMessage::GetCFHeaders(GetCFHeaders {
filter_type,
start_height,
stop_hash,
}))?;
let response = self
.recv("cfheaders", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFHeaders(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
if response.filter_type != filter_type {
return Err(CompactFiltersError::InvalidResponse);
}
Ok(response)
}
fn pop_cf_filter_resp(&self) -> Result<CFilter, CompactFiltersError> {
let response = self
.recv("cfilter", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let response = match response {
NetworkMessage::CFilter(response) => response,
_ => return Err(CompactFiltersError::InvalidResponse),
};
Ok(response)
}
fn get_cf_filters(
&self,
filter_type: u8,
start_height: u32,
stop_hash: BlockHash,
) -> Result<(), CompactFiltersError> {
self.send(NetworkMessage::GetCFilters(GetCFilters {
filter_type,
start_height,
stop_hash,
}))?;
Ok(())
}
}
pub trait InvPeer {
fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError>;
fn ask_for_mempool(&self) -> Result<(), CompactFiltersError>;
fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError>;
}
impl InvPeer for Peer {
fn get_block(&self, block_hash: BlockHash) -> Result<Option<Block>, CompactFiltersError> {
self.send(NetworkMessage::GetData(vec![Inventory::WitnessBlock(
block_hash,
)]))?;
match self.recv("block", Some(Duration::from_secs(TIMEOUT_SECS)))? {
None => Ok(None),
Some(NetworkMessage::Block(response)) => Ok(Some(response)),
_ => Err(CompactFiltersError::InvalidResponse),
}
}
fn ask_for_mempool(&self) -> Result<(), CompactFiltersError> {
if !self.version.services.has(ServiceFlags::BLOOM) {
return Err(CompactFiltersError::PeerBloomDisabled);
}
self.send(NetworkMessage::MemPool)?;
let inv = match self.recv("inv", Some(Duration::from_secs(5)))? {
None => return Ok(()), // empty mempool
Some(NetworkMessage::Inv(inv)) => inv,
_ => return Err(CompactFiltersError::InvalidResponse),
};
let getdata = inv
.iter()
.cloned()
.filter(
|item| matches!(item, Inventory::Transaction(txid) if !self.mempool.has_tx(txid)),
)
.collect::<Vec<_>>();
let num_txs = getdata.len();
self.send(NetworkMessage::GetData(getdata))?;
for _ in 0..num_txs {
let tx = self
.recv("tx", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?;
let tx = match tx {
NetworkMessage::Tx(tx) => tx,
_ => return Err(CompactFiltersError::InvalidResponse),
};
self.mempool.add_tx(tx);
}
Ok(())
}
fn broadcast_tx(&self, tx: Transaction) -> Result<(), CompactFiltersError> {
self.mempool.add_tx(tx.clone());
self.send(NetworkMessage::Tx(tx))?;
Ok(())
}
}
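In `ask_for_mempool` above, the peer's `inv` announcement is filtered down to only the transaction ids not already held locally before sending `GetData`. A minimal sketch of that selection step, using `&str` ids and a `HashSet` as a stand-in for the mempool:

```rust
use std::collections::HashSet;

// From an inv announcement, keep only the txids we don't already have.
fn select_getdata<'a>(inv: &[&'a str], mempool: &HashSet<&str>) -> Vec<&'a str> {
    inv.iter()
        .copied()
        .filter(|txid| !mempool.contains(txid))
        .collect()
}

fn main() {
    let mempool: HashSet<&str> = ["a"].iter().copied().collect();
    let getdata = select_getdata(&["a", "b", "c"], &mempool);
    // "a" is already known, so only "b" and "c" are requested.
    assert_eq!(getdata, vec!["b", "c"]);
    println!("{:?}", getdata);
}
```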

// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::convert::TryInto;
use std::fmt;
use std::io::{Read, Write};
use std::marker::PhantomData;
use std::ops::Deref;
use std::sync::Arc;
use std::sync::RwLock;
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use rocksdb::{Direction, IteratorMode, ReadOptions, WriteBatch, DB};
use bitcoin::consensus::{deserialize, encode::VarInt, serialize, Decodable, Encodable};
use bitcoin::hash_types::{FilterHash, FilterHeader};
use bitcoin::hashes::hex::FromHex;
use bitcoin::hashes::Hash;
use bitcoin::util::bip158::BlockFilter;
use bitcoin::util::uint::Uint256;
use bitcoin::Block;
use bitcoin::BlockHash;
use bitcoin::BlockHeader;
use bitcoin::Network;
use lazy_static::lazy_static;
use super::CompactFiltersError;
lazy_static! {
static ref MAINNET_GENESIS: Block = deserialize(&Vec::<u8>::from_hex("0100000000000000000000000000000000000000000000000000000000000000000000003BA3EDFD7A7B12B27AC72C3E67768F617FC81BC3888A51323A9FB8AA4B1E5E4A29AB5F49FFFF001D1DAC2B7C0101000000010000000000000000000000000000000000000000000000000000000000000000FFFFFFFF4D04FFFF001D0104455468652054696D65732030332F4A616E2F32303039204368616E63656C6C6F72206F6E206272696E6B206F66207365636F6E64206261696C6F757420666F722062616E6B73FFFFFFFF0100F2052A01000000434104678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB649F6BC3F4CEF38C4F35504E51EC112DE5C384DF7BA0B8D578A4C702B6BF11D5FAC00000000").unwrap()).unwrap();
static ref TESTNET_GENESIS: Block = deserialize(&Vec::<u8>::from_hex("0100000000000000000000000000000000000000000000000000000000000000000000003BA3EDFD7A7B12B27AC72C3E67768F617FC81BC3888A51323A9FB8AA4B1E5E4ADAE5494DFFFF001D1AA4AE180101000000010000000000000000000000000000000000000000000000000000000000000000FFFFFFFF4D04FFFF001D0104455468652054696D65732030332F4A616E2F32303039204368616E63656C6C6F72206F6E206272696E6B206F66207365636F6E64206261696C6F757420666F722062616E6B73FFFFFFFF0100F2052A01000000434104678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB649F6BC3F4CEF38C4F35504E51EC112DE5C384DF7BA0B8D578A4C702B6BF11D5FAC00000000").unwrap()).unwrap();
static ref REGTEST_GENESIS: Block = deserialize(&Vec::<u8>::from_hex("0100000000000000000000000000000000000000000000000000000000000000000000003BA3EDFD7A7B12B27AC72C3E67768F617FC81BC3888A51323A9FB8AA4B1E5E4ADAE5494DFFFF7F20020000000101000000010000000000000000000000000000000000000000000000000000000000000000FFFFFFFF4D04FFFF001D0104455468652054696D65732030332F4A616E2F32303039204368616E63656C6C6F72206F6E206272696E6B206F66207365636F6E64206261696C6F757420666F722062616E6B73FFFFFFFF0100F2052A01000000434104678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB649F6BC3F4CEF38C4F35504E51EC112DE5C384DF7BA0B8D578A4C702B6BF11D5FAC00000000").unwrap()).unwrap();
static ref SIGNET_GENESIS: Block = deserialize(&Vec::<u8>::from_hex("0100000000000000000000000000000000000000000000000000000000000000000000003BA3EDFD7A7B12B27AC72C3E67768F617FC81BC3888A51323A9FB8AA4B1E5E4A008F4D5FAE77031E8AD222030101000000010000000000000000000000000000000000000000000000000000000000000000FFFFFFFF4D04FFFF001D0104455468652054696D65732030332F4A616E2F32303039204368616E63656C6C6F72206F6E206272696E6B206F66207365636F6E64206261696C6F757420666F722062616E6B73FFFFFFFF0100F2052A01000000434104678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB649F6BC3F4CEF38C4F35504E51EC112DE5C384DF7BA0B8D578A4C702B6BF11D5FAC00000000").unwrap()).unwrap();
}
pub trait StoreType: Default + fmt::Debug {}
#[derive(Default, Debug)]
pub struct Full;
impl StoreType for Full {}
#[derive(Default, Debug)]
pub struct Snapshot;
impl StoreType for Snapshot {}
pub enum StoreEntry {
BlockHeader(Option<usize>),
Block(Option<usize>),
BlockHeaderIndex(Option<BlockHash>),
CFilterTable((u8, Option<usize>)),
}
impl StoreEntry {
pub fn get_prefix(&self) -> Vec<u8> {
match self {
StoreEntry::BlockHeader(_) => b"z",
StoreEntry::Block(_) => b"x",
StoreEntry::BlockHeaderIndex(_) => b"i",
StoreEntry::CFilterTable(_) => b"t",
}
.to_vec()
}
pub fn get_key(&self) -> Vec<u8> {
let mut prefix = self.get_prefix();
match self {
StoreEntry::BlockHeader(Some(height)) => {
prefix.extend_from_slice(&height.to_be_bytes())
}
StoreEntry::Block(Some(height)) => prefix.extend_from_slice(&height.to_be_bytes()),
StoreEntry::BlockHeaderIndex(Some(hash)) => {
prefix.extend_from_slice(&hash.into_inner())
}
StoreEntry::CFilterTable((filter_type, bundle_index)) => {
prefix.push(*filter_type);
if let Some(bundle_index) = bundle_index {
prefix.extend_from_slice(&bundle_index.to_be_bytes());
}
}
_ => {}
}
prefix
}
}
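The `StoreEntry` key scheme above namespaces each table with a one-byte prefix and appends big-endian integer suffixes, so that lexicographic byte order inside a prefix matches numeric height order, which is what the prefix iterators rely on. A small sketch of that layout (the `b"z"` prefix mirrors `StoreEntry::BlockHeader`; the helper name is hypothetical):

```rust
// Build a block-header key: 1-byte table prefix + optional big-endian height.
fn block_header_key(height: Option<usize>) -> Vec<u8> {
    let mut key = b"z".to_vec();
    if let Some(h) = height {
        // Big-endian keeps byte order aligned with numeric order.
        key.extend_from_slice(&h.to_be_bytes());
    }
    key
}

fn main() {
    let k1 = block_header_key(Some(255));
    let k2 = block_header_key(Some(256));
    // Little-endian would sort 256 ([0,1,0,..]) before 255 ([255,0,..]);
    // big-endian preserves numeric ordering.
    assert!(k1 < k2);
    // Every height key starts with the bare table prefix, so a prefix scan
    // over block_header_key(None) visits headers in height order.
    assert!(k1.starts_with(&block_header_key(None)));
    println!("ordered: {}", k1 < k2);
}
```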
pub trait SerializeDb: Sized {
fn serialize(&self) -> Vec<u8>;
fn deserialize(data: &[u8]) -> Result<Self, CompactFiltersError>;
}
impl<T> SerializeDb for T
where
T: Encodable + Decodable,
{
fn serialize(&self) -> Vec<u8> {
serialize(self)
}
fn deserialize(data: &[u8]) -> Result<Self, CompactFiltersError> {
deserialize(data).map_err(|_| CompactFiltersError::DataCorruption)
}
}
impl Encodable for BundleStatus {
fn consensus_encode<W: Write>(&self, mut e: W) -> Result<usize, std::io::Error> {
let mut written = 0;
match self {
BundleStatus::Init => {
written += 0x00u8.consensus_encode(&mut e)?;
}
BundleStatus::CfHeaders { cf_headers } => {
written += 0x01u8.consensus_encode(&mut e)?;
written += VarInt(cf_headers.len() as u64).consensus_encode(&mut e)?;
for header in cf_headers {
written += header.consensus_encode(&mut e)?;
}
}
BundleStatus::CFilters { cf_filters } => {
written += 0x02u8.consensus_encode(&mut e)?;
written += VarInt(cf_filters.len() as u64).consensus_encode(&mut e)?;
for filter in cf_filters {
written += filter.consensus_encode(&mut e)?;
}
}
BundleStatus::Processed { cf_filters } => {
written += 0x03u8.consensus_encode(&mut e)?;
written += VarInt(cf_filters.len() as u64).consensus_encode(&mut e)?;
for filter in cf_filters {
written += filter.consensus_encode(&mut e)?;
}
}
BundleStatus::Pruned => {
written += 0x04u8.consensus_encode(&mut e)?;
}
BundleStatus::Tip { cf_filters } => {
written += 0x05u8.consensus_encode(&mut e)?;
written += VarInt(cf_filters.len() as u64).consensus_encode(&mut e)?;
for filter in cf_filters {
written += filter.consensus_encode(&mut e)?;
}
}
}
Ok(written)
}
}
impl Decodable for BundleStatus {
fn consensus_decode<D: Read>(mut d: D) -> Result<Self, bitcoin::consensus::encode::Error> {
let byte_type = u8::consensus_decode(&mut d)?;
match byte_type {
0x00 => Ok(BundleStatus::Init),
0x01 => {
let num = VarInt::consensus_decode(&mut d)?;
let num = num.0 as usize;
let mut cf_headers = Vec::with_capacity(num);
for _ in 0..num {
cf_headers.push(FilterHeader::consensus_decode(&mut d)?);
}
Ok(BundleStatus::CfHeaders { cf_headers })
}
0x02 => {
let num = VarInt::consensus_decode(&mut d)?;
let num = num.0 as usize;
let mut cf_filters = Vec::with_capacity(num);
for _ in 0..num {
cf_filters.push(Vec::<u8>::consensus_decode(&mut d)?);
}
Ok(BundleStatus::CFilters { cf_filters })
}
0x03 => {
let num = VarInt::consensus_decode(&mut d)?;
let num = num.0 as usize;
let mut cf_filters = Vec::with_capacity(num);
for _ in 0..num {
cf_filters.push(Vec::<u8>::consensus_decode(&mut d)?);
}
Ok(BundleStatus::Processed { cf_filters })
}
0x04 => Ok(BundleStatus::Pruned),
0x05 => {
let num = VarInt::consensus_decode(&mut d)?;
let num = num.0 as usize;
let mut cf_filters = Vec::with_capacity(num);
for _ in 0..num {
cf_filters.push(Vec::<u8>::consensus_decode(&mut d)?);
}
Ok(BundleStatus::Tip { cf_filters })
}
_ => Err(bitcoin::consensus::encode::Error::ParseFailed(
"Invalid byte type",
)),
}
}
}
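The `BundleStatus` codec above uses a classic tagged layout: one tag byte selects the variant, then a length-prefixed list of payload items. A simplified, self-contained roundtrip of the same idea; note the real code uses bitcoin's `VarInt` and `consensus_encode`, while this sketch uses a fixed little-endian `u32` length for brevity:

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Init,                  // tag 0x00, like BundleStatus::Init
    Filters(Vec<Vec<u8>>), // tag 0x02, like BundleStatus::CFilters
}

fn encode(s: &Status) -> Vec<u8> {
    let mut out = Vec::new();
    match s {
        Status::Init => out.push(0x00),
        Status::Filters(filters) => {
            out.push(0x02);
            out.extend_from_slice(&(filters.len() as u32).to_le_bytes());
            for f in filters {
                out.extend_from_slice(&(f.len() as u32).to_le_bytes());
                out.extend_from_slice(f);
            }
        }
    }
    out
}

// Read a little-endian u32 and advance the slice cursor.
fn read_u32(r: &mut &[u8]) -> Option<u32> {
    let s = *r;
    if s.len() < 4 {
        return None;
    }
    let value = u32::from_le_bytes([s[0], s[1], s[2], s[3]]);
    *r = &s[4..];
    Some(value)
}

fn decode(data: &[u8]) -> Option<Status> {
    let (&tag, mut rest) = data.split_first()?;
    match tag {
        0x00 => Some(Status::Init),
        0x02 => {
            let num = read_u32(&mut rest)? as usize;
            let mut filters = Vec::with_capacity(num);
            for _ in 0..num {
                let len = read_u32(&mut rest)? as usize;
                if rest.len() < len {
                    return None;
                }
                filters.push(rest[..len].to_vec());
                rest = &rest[len..];
            }
            Some(Status::Filters(filters))
        }
        _ => None, // unknown tag, analogous to Error::ParseFailed
    }
}

fn main() {
    let status = Status::Filters(vec![vec![0xAA], vec![0xBB, 0xCC]]);
    let bytes = encode(&status);
    assert_eq!(decode(&bytes), Some(status));
    println!("roundtrip ok");
}
```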
pub struct ChainStore<T: StoreType> {
store: Arc<RwLock<DB>>,
cf_name: String,
min_height: usize,
network: Network,
phantom: PhantomData<T>,
}
impl ChainStore<Full> {
pub fn new(store: DB, network: Network) -> Result<Self, CompactFiltersError> {
let genesis = match network {
Network::Bitcoin => MAINNET_GENESIS.deref(),
Network::Testnet => TESTNET_GENESIS.deref(),
Network::Regtest => REGTEST_GENESIS.deref(),
Network::Signet => SIGNET_GENESIS.deref(),
};
let cf_name = "default".to_string();
let cf_handle = store.cf_handle(&cf_name).unwrap();
let genesis_key = StoreEntry::BlockHeader(Some(0)).get_key();
if store.get_pinned_cf(cf_handle, &genesis_key)?.is_none() {
let mut batch = WriteBatch::default();
batch.put_cf(
cf_handle,
genesis_key,
(genesis.header, genesis.header.work()).serialize(),
);
batch.put_cf(
cf_handle,
StoreEntry::BlockHeaderIndex(Some(genesis.block_hash())).get_key(),
&0usize.to_be_bytes(),
);
store.write(batch)?;
}
Ok(ChainStore {
store: Arc::new(RwLock::new(store)),
cf_name,
min_height: 0,
network,
phantom: PhantomData,
})
}
pub fn get_locators(&self) -> Result<Vec<(BlockHash, usize)>, CompactFiltersError> {
let mut step = 1;
let mut index = self.get_height()?;
let mut answer = Vec::new();
let store_read = self.store.read().unwrap();
let cf_handle = store_read.cf_handle(&self.cf_name).unwrap();
loop {
if answer.len() > 10 {
step *= 2;
}
let (header, _): (BlockHeader, Uint256) = SerializeDb::deserialize(
&store_read
.get_pinned_cf(cf_handle, StoreEntry::BlockHeader(Some(index)).get_key())?
.unwrap(),
)?;
answer.push((header.block_hash(), index));
if let Some(new_index) = index.checked_sub(step) {
index = new_index;
} else {
break;
}
}
Ok(answer)
}
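The locator selection in `get_locators` walks back from the tip one block at a time for the first eleven entries, then doubles the step each iteration, keeping the locator short even for long chains. The height schedule alone can be sketched as:

```rust
// Reproduce just the height-stepping logic of get_locators.
fn locator_heights(tip: usize) -> Vec<usize> {
    let mut step = 1;
    let mut index = tip;
    let mut answer = Vec::new();
    loop {
        // After ~10 dense entries, back off exponentially.
        if answer.len() > 10 {
            step *= 2;
        }
        answer.push(index);
        match index.checked_sub(step) {
            Some(new_index) => index = new_index,
            None => break,
        }
    }
    answer
}

fn main() {
    let heights = locator_heights(20);
    // Dense near the tip, sparse further back.
    assert_eq!(heights, vec![20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 7, 3]);
    println!("{:?}", heights);
}
```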
pub fn start_snapshot(&self, from: usize) -> Result<ChainStore<Snapshot>, CompactFiltersError> {
let new_cf_name: String = thread_rng().sample_iter(&Alphanumeric).take(16).collect();
let new_cf_name = format!("_headers:{}", new_cf_name);
let mut write_store = self.store.write().unwrap();
write_store.create_cf(&new_cf_name, &Default::default())?;
let cf_handle = write_store.cf_handle(&self.cf_name).unwrap();
let new_cf_handle = write_store.cf_handle(&new_cf_name).unwrap();
let (header, work): (BlockHeader, Uint256) = SerializeDb::deserialize(
&write_store
.get_pinned_cf(cf_handle, StoreEntry::BlockHeader(Some(from)).get_key())?
.ok_or(CompactFiltersError::DataCorruption)?,
)?;
let mut batch = WriteBatch::default();
batch.put_cf(
new_cf_handle,
StoreEntry::BlockHeaderIndex(Some(header.block_hash())).get_key(),
&from.to_be_bytes(),
);
batch.put_cf(
new_cf_handle,
StoreEntry::BlockHeader(Some(from)).get_key(),
(header, work).serialize(),
);
write_store.write(batch)?;
let store = Arc::clone(&self.store);
Ok(ChainStore {
store,
cf_name: new_cf_name,
min_height: from,
network: self.network,
phantom: PhantomData,
})
}
pub fn recover_snapshot(&self, cf_name: &str) -> Result<(), CompactFiltersError> {
let mut write_store = self.store.write().unwrap();
let snapshot_cf_handle = write_store.cf_handle(cf_name).unwrap();
let prefix = StoreEntry::BlockHeader(None).get_key();
let mut iterator = write_store.prefix_iterator_cf(snapshot_cf_handle, prefix);
let min_height = match iterator
.next()
.and_then(|(k, _)| k[1..].try_into().ok())
.map(usize::from_be_bytes)
{
None => {
std::mem::drop(iterator);
write_store.drop_cf(cf_name).ok();
return Ok(());
}
Some(x) => x,
};
std::mem::drop(iterator);
std::mem::drop(write_store);
let snapshot = ChainStore {
store: Arc::clone(&self.store),
cf_name: cf_name.into(),
min_height,
network: self.network,
phantom: PhantomData,
};
if snapshot.work()? > self.work()? {
self.apply_snapshot(snapshot)?;
}
Ok(())
}
pub fn apply_snapshot(
&self,
snapshot: ChainStore<Snapshot>,
) -> Result<(), CompactFiltersError> {
let mut batch = WriteBatch::default();
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let snapshot_cf_handle = read_store.cf_handle(&snapshot.cf_name).unwrap();
let from_key = StoreEntry::BlockHeader(Some(snapshot.min_height)).get_key();
let to_key = StoreEntry::BlockHeader(Some(usize::MAX)).get_key();
let mut opts = ReadOptions::default();
opts.set_iterate_upper_bound(to_key.clone());
log::debug!("Removing items");
batch.delete_range_cf(cf_handle, &from_key, &to_key);
for (_, v) in read_store.iterator_cf_opt(
cf_handle,
opts,
IteratorMode::From(&from_key, Direction::Forward),
) {
let (header, _): (BlockHeader, Uint256) = SerializeDb::deserialize(&v)?;
batch.delete_cf(
cf_handle,
StoreEntry::BlockHeaderIndex(Some(header.block_hash())).get_key(),
);
}
// Delete full blocks overridden by the snapshot
let from_key = StoreEntry::Block(Some(snapshot.min_height)).get_key();
let to_key = StoreEntry::Block(Some(usize::MAX)).get_key();
batch.delete_range(&from_key, &to_key);
log::debug!("Copying over new items");
for (k, v) in read_store.iterator_cf(snapshot_cf_handle, IteratorMode::Start) {
batch.put_cf(cf_handle, k, v);
}
read_store.write(batch)?;
std::mem::drop(read_store);
self.store.write().unwrap().drop_cf(&snapshot.cf_name)?;
Ok(())
}
pub fn get_height_for(
&self,
block_hash: &BlockHash,
) -> Result<Option<usize>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let key = StoreEntry::BlockHeaderIndex(Some(*block_hash)).get_key();
let data = read_store.get_pinned_cf(cf_handle, key)?;
data.map(|data| {
Ok::<_, CompactFiltersError>(usize::from_be_bytes(
data.as_ref()
.try_into()
.map_err(|_| CompactFiltersError::DataCorruption)?,
))
})
.transpose()
}
pub fn get_block_hash(&self, height: usize) -> Result<Option<BlockHash>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let key = StoreEntry::BlockHeader(Some(height)).get_key();
let data = read_store.get_pinned_cf(cf_handle, key)?;
data.map(|data| {
let (header, _): (BlockHeader, Uint256) =
deserialize(&data).map_err(|_| CompactFiltersError::DataCorruption)?;
Ok::<_, CompactFiltersError>(header.block_hash())
})
.transpose()
}
pub fn save_full_block(&self, block: &Block, height: usize) -> Result<(), CompactFiltersError> {
let key = StoreEntry::Block(Some(height)).get_key();
self.store.read().unwrap().put(key, block.serialize())?;
Ok(())
}
pub fn get_full_block(&self, height: usize) -> Result<Option<Block>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let key = StoreEntry::Block(Some(height)).get_key();
let opt_block = read_store.get_pinned(key)?;
opt_block
.map(|data| deserialize(&data))
.transpose()
.map_err(|_| CompactFiltersError::DataCorruption)
}
pub fn delete_blocks_until(&self, height: usize) -> Result<(), CompactFiltersError> {
let from_key = StoreEntry::Block(Some(0)).get_key();
let to_key = StoreEntry::Block(Some(height)).get_key();
let mut batch = WriteBatch::default();
batch.delete_range(&from_key, &to_key);
self.store.read().unwrap().write(batch)?;
Ok(())
}
pub fn iter_full_blocks(&self) -> Result<Vec<(usize, Block)>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let prefix = StoreEntry::Block(None).get_key();
let iterator = read_store.prefix_iterator(&prefix);
// FIXME: we have to filter manually because rocksdb sometimes returns stuff that doesn't
// have the right prefix
iterator
.filter(|(k, _)| k.starts_with(&prefix))
.map(|(k, v)| {
let height: usize = usize::from_be_bytes(
k[1..]
.try_into()
.map_err(|_| CompactFiltersError::DataCorruption)?,
);
let block = SerializeDb::deserialize(&v)?;
Ok((height, block))
})
.collect::<Result<_, _>>()
}
}
impl<T: StoreType> ChainStore<T> {
pub fn work(&self) -> Result<Uint256, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let prefix = StoreEntry::BlockHeader(None).get_key();
let iterator = read_store.prefix_iterator_cf(cf_handle, prefix);
Ok(iterator
.last()
.map(|(_, v)| -> Result<_, CompactFiltersError> {
let (_, work): (BlockHeader, Uint256) = SerializeDb::deserialize(&v)?;
Ok(work)
})
.transpose()?
.unwrap_or_default())
}
pub fn get_height(&self) -> Result<usize, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let prefix = StoreEntry::BlockHeader(None).get_key();
let iterator = read_store.prefix_iterator_cf(cf_handle, prefix);
Ok(iterator
.last()
.map(|(k, _)| -> Result<_, CompactFiltersError> {
let height = usize::from_be_bytes(
k[1..]
.try_into()
.map_err(|_| CompactFiltersError::DataCorruption)?,
);
Ok(height)
})
.transpose()?
.unwrap_or_default())
}
pub fn get_tip_hash(&self) -> Result<Option<BlockHash>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let prefix = StoreEntry::BlockHeader(None).get_key();
let iterator = read_store.prefix_iterator_cf(cf_handle, prefix);
iterator
.last()
.map(|(_, v)| -> Result<_, CompactFiltersError> {
let (header, _): (BlockHeader, Uint256) = SerializeDb::deserialize(&v)?;
Ok(header.block_hash())
})
.transpose()
}
pub fn apply(
&mut self,
from: usize,
headers: Vec<BlockHeader>,
) -> Result<BlockHash, CompactFiltersError> {
let mut batch = WriteBatch::default();
let read_store = self.store.read().unwrap();
let cf_handle = read_store.cf_handle(&self.cf_name).unwrap();
let (mut last_hash, mut accumulated_work) = read_store
.get_pinned_cf(cf_handle, StoreEntry::BlockHeader(Some(from)).get_key())?
.map(|result| {
let (header, work): (BlockHeader, Uint256) = SerializeDb::deserialize(&result)?;
Ok::<_, CompactFiltersError>((header.block_hash(), work))
})
.transpose()?
.ok_or(CompactFiltersError::DataCorruption)?;
for (index, header) in headers.into_iter().enumerate() {
if header.prev_blockhash != last_hash {
return Err(CompactFiltersError::InvalidHeaders);
}
last_hash = header.block_hash();
accumulated_work = accumulated_work + header.work();
let height = from + index + 1;
batch.put_cf(
cf_handle,
StoreEntry::BlockHeaderIndex(Some(header.block_hash())).get_key(),
&(height).to_be_bytes(),
);
batch.put_cf(
cf_handle,
StoreEntry::BlockHeader(Some(height)).get_key(),
(header, accumulated_work).serialize(),
);
}
std::mem::drop(read_store);
self.store.write().unwrap().write(batch)?;
Ok(last_hash)
}
}
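The `apply` method above extends the header chain: each incoming header must commit to the hash of the previous block, and the stored work accumulates monotonically along the way. A minimal sketch of that validation loop, with `u64` stand-ins for `BlockHash` and `Uint256`:

```rust
#[derive(Clone, Copy)]
struct Header {
    prev_blockhash: u64,
    hash: u64,
    work: u64,
}

// Validate that `headers` extends the chain from `last_hash`, returning the
// new tip hash and accumulated work, or an error on a broken link.
fn apply_headers(
    mut last_hash: u64,
    mut accumulated_work: u64,
    headers: &[Header],
) -> Result<(u64, u64), &'static str> {
    for header in headers {
        if header.prev_blockhash != last_hash {
            return Err("InvalidHeaders");
        }
        last_hash = header.hash;
        accumulated_work += header.work;
    }
    Ok((last_hash, accumulated_work))
}

fn main() {
    let good = [
        Header { prev_blockhash: 1, hash: 2, work: 10 },
        Header { prev_blockhash: 2, hash: 3, work: 10 },
    ];
    assert_eq!(apply_headers(1, 100, &good), Ok((3, 120)));
    // A header that does not commit to the previous hash is rejected.
    let bad = [Header { prev_blockhash: 9, hash: 10, work: 10 }];
    assert_eq!(apply_headers(1, 100, &bad), Err("InvalidHeaders"));
    println!("chain ok");
}
```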
impl<T: StoreType> fmt::Debug for ChainStore<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct(&format!("ChainStore<{:?}>", T::default()))
.field("cf_name", &self.cf_name)
.field("min_height", &self.min_height)
.field("network", &self.network)
.field("headers_height", &self.get_height())
.field("tip_hash", &self.get_tip_hash())
.finish()
}
}
pub enum BundleStatus {
Init,
CfHeaders { cf_headers: Vec<FilterHeader> },
CFilters { cf_filters: Vec<Vec<u8>> },
Processed { cf_filters: Vec<Vec<u8>> },
Tip { cf_filters: Vec<Vec<u8>> },
Pruned,
}
pub struct CfStore {
store: Arc<RwLock<DB>>,
filter_type: u8,
}
type BundleEntry = (BundleStatus, FilterHeader);
impl CfStore {
pub fn new(
headers_store: &ChainStore<Full>,
filter_type: u8,
) -> Result<Self, CompactFiltersError> {
let cf_store = CfStore {
store: Arc::clone(&headers_store.store),
filter_type,
};
let genesis = match headers_store.network {
Network::Bitcoin => MAINNET_GENESIS.deref(),
Network::Testnet => TESTNET_GENESIS.deref(),
Network::Regtest => REGTEST_GENESIS.deref(),
Network::Signet => SIGNET_GENESIS.deref(),
};
let filter = BlockFilter::new_script_filter(genesis, |utxo| {
Err(bitcoin::util::bip158::Error::UtxoMissing(*utxo))
})?;
let first_key = StoreEntry::CFilterTable((filter_type, Some(0))).get_key();
// Add the genesis' filter
{
let read_store = cf_store.store.read().unwrap();
if read_store.get_pinned(&first_key)?.is_none() {
read_store.put(
&first_key,
(
BundleStatus::Init,
filter.filter_header(&FilterHeader::from_hash(Default::default())),
)
.serialize(),
)?;
}
}
Ok(cf_store)
}
pub fn get_filter_type(&self) -> u8 {
self.filter_type
}
pub fn get_bundles(&self) -> Result<Vec<BundleEntry>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let prefix = StoreEntry::CFilterTable((self.filter_type, None)).get_key();
let iterator = read_store.prefix_iterator(&prefix);
// FIXME: we have to filter manually because rocksdb sometimes returns stuff that doesn't
// have the right prefix
iterator
.filter(|(k, _)| k.starts_with(&prefix))
.map(|(_, data)| BundleEntry::deserialize(&data))
.collect::<Result<_, _>>()
}
pub fn get_checkpoints(&self) -> Result<Vec<FilterHeader>, CompactFiltersError> {
let read_store = self.store.read().unwrap();
let prefix = StoreEntry::CFilterTable((self.filter_type, None)).get_key();
let iterator = read_store.prefix_iterator(&prefix);
// FIXME: we have to filter manually because rocksdb sometimes returns stuff that doesn't
// have the right prefix
iterator
.filter(|(k, _)| k.starts_with(&prefix))
.skip(1)
.map(|(_, data)| Ok::<_, CompactFiltersError>(BundleEntry::deserialize(&data)?.1))
.collect::<Result<_, _>>()
}
pub fn replace_checkpoints(
&self,
checkpoints: Vec<FilterHeader>,
) -> Result<(), CompactFiltersError> {
let current_checkpoints = self.get_checkpoints()?;
let mut equal_bundles = 0;
for (index, (our, their)) in current_checkpoints
.iter()
.zip(checkpoints.iter())
.enumerate()
{
equal_bundles = index;
if our != their {
break;
}
}
let read_store = self.store.read().unwrap();
let mut batch = WriteBatch::default();
for (index, filter_hash) in checkpoints.iter().enumerate().skip(equal_bundles) {
let key = StoreEntry::CFilterTable((self.filter_type, Some(index + 1))).get_key(); // +1 to skip the genesis' filter
if let Some((BundleStatus::Tip { .. }, _)) = read_store
.get_pinned(&key)?
.map(|data| BundleEntry::deserialize(&data))
.transpose()?
{
log::debug!("Keeping bundle #{} as Tip", index);
} else {
batch.put(&key, (BundleStatus::Init, *filter_hash).serialize());
}
}
read_store.write(batch)?;
Ok(())
}
pub fn advance_to_cf_headers(
&self,
bundle: usize,
checkpoint: FilterHeader,
filter_hashes: Vec<FilterHash>,
) -> Result<BundleStatus, CompactFiltersError> {
let cf_headers: Vec<FilterHeader> = filter_hashes
.into_iter()
.scan(checkpoint, |prev_header, filter_hash| {
let filter_header = filter_hash.filter_header(prev_header);
*prev_header = filter_header;
Some(filter_header)
})
.collect();
let read_store = self.store.read().unwrap();
let next_key = StoreEntry::CFilterTable((self.filter_type, Some(bundle + 1))).get_key(); // +1 to skip the genesis' filter
if let Some((_, next_checkpoint)) = read_store
.get_pinned(&next_key)?
.map(|data| BundleEntry::deserialize(&data))
.transpose()?
{
// check connection with the next bundle if present
if cf_headers.iter().last() != Some(&next_checkpoint) {
return Err(CompactFiltersError::InvalidFilterHeader);
}
}
let key = StoreEntry::CFilterTable((self.filter_type, Some(bundle))).get_key();
let value = (BundleStatus::CfHeaders { cf_headers }, checkpoint);
read_store.put(key, value.serialize())?;
Ok(value.0)
}
pub fn advance_to_cf_filters(
&self,
bundle: usize,
checkpoint: FilterHeader,
headers: Vec<FilterHeader>,
filters: Vec<(usize, Vec<u8>)>,
) -> Result<BundleStatus, CompactFiltersError> {
let cf_filters = filters
.into_iter()
.zip(headers.into_iter())
.scan(checkpoint, |prev_header, ((_, filter_content), header)| {
let filter = BlockFilter::new(&filter_content);
if header != filter.filter_header(prev_header) {
return Some(Err(CompactFiltersError::InvalidFilter));
}
*prev_header = header;
Some(Ok::<_, CompactFiltersError>(filter_content))
})
.collect::<Result<_, _>>()?;
let key = StoreEntry::CFilterTable((self.filter_type, Some(bundle))).get_key();
let value = (BundleStatus::CFilters { cf_filters }, checkpoint);
let read_store = self.store.read().unwrap();
read_store.put(key, value.serialize())?;
Ok(value.0)
}
pub fn prune_filters(
&self,
bundle: usize,
checkpoint: FilterHeader,
) -> Result<BundleStatus, CompactFiltersError> {
let key = StoreEntry::CFilterTable((self.filter_type, Some(bundle))).get_key();
let value = (BundleStatus::Pruned, checkpoint);
let read_store = self.store.read().unwrap();
read_store.put(key, value.serialize())?;
Ok(value.0)
}
pub fn mark_as_tip(
&self,
bundle: usize,
cf_filters: Vec<Vec<u8>>,
checkpoint: FilterHeader,
) -> Result<BundleStatus, CompactFiltersError> {
let key = StoreEntry::CFilterTable((self.filter_type, Some(bundle))).get_key();
let value = (BundleStatus::Tip { cf_filters }, checkpoint);
let read_store = self.store.read().unwrap();
read_store.put(key, value.serialize())?;
Ok(value.0)
}
}


@@ -0,0 +1,296 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::collections::{BTreeMap, HashMap, VecDeque};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use bitcoin::hash_types::{BlockHash, FilterHeader};
use bitcoin::network::message::NetworkMessage;
use bitcoin::network::message_blockdata::GetHeadersMessage;
use bitcoin::util::bip158::BlockFilter;
use super::peer::*;
use super::store::*;
use super::CompactFiltersError;
use crate::error::Error;
pub(crate) const BURIED_CONFIRMATIONS: usize = 100;
pub struct CfSync {
headers_store: Arc<ChainStore<Full>>,
cf_store: Arc<CfStore>,
skip_blocks: usize,
bundles: Mutex<VecDeque<(BundleStatus, FilterHeader, usize)>>,
}
impl CfSync {
pub fn new(
headers_store: Arc<ChainStore<Full>>,
skip_blocks: usize,
filter_type: u8,
) -> Result<Self, CompactFiltersError> {
let cf_store = Arc::new(CfStore::new(&headers_store, filter_type)?);
Ok(CfSync {
headers_store,
cf_store,
skip_blocks,
bundles: Mutex::new(VecDeque::new()),
})
}
pub fn pruned_bundles(&self) -> Result<usize, CompactFiltersError> {
Ok(self
.cf_store
.get_bundles()?
.into_iter()
.skip(self.skip_blocks / 1000)
.fold(0, |acc, (status, _)| match status {
BundleStatus::Pruned => acc + 1,
_ => acc,
}))
}
pub fn prepare_sync(&self, peer: Arc<Peer>) -> Result<(), CompactFiltersError> {
let mut bundles_lock = self.bundles.lock().unwrap();
let resp = peer.get_cf_checkpt(
self.cf_store.get_filter_type(),
self.headers_store.get_tip_hash()?.unwrap(),
)?;
self.cf_store.replace_checkpoints(resp.filter_headers)?;
bundles_lock.clear();
for (index, (status, checkpoint)) in self.cf_store.get_bundles()?.into_iter().enumerate() {
bundles_lock.push_back((status, checkpoint, index));
}
Ok(())
}
pub fn capture_thread_for_sync<F, Q>(
&self,
peer: Arc<Peer>,
process: F,
completed_bundle: Q,
) -> Result<(), CompactFiltersError>
where
F: Fn(&BlockHash, &BlockFilter) -> Result<bool, CompactFiltersError>,
Q: Fn(usize) -> Result<(), Error>,
{
let current_height = self.headers_store.get_height()?; // TODO: we should update it in case headers_store is also updated
loop {
let (mut status, checkpoint, index) = match self.bundles.lock().unwrap().pop_front() {
None => break,
Some(x) => x,
};
log::debug!(
"Processing bundle #{} - height {} to {}",
index,
index * 1000 + 1,
(index + 1) * 1000
);
let process_received_filters =
|expected_filters| -> Result<BTreeMap<usize, Vec<u8>>, CompactFiltersError> {
let mut filters_map = BTreeMap::new();
for _ in 0..expected_filters {
let filter = peer.pop_cf_filter_resp()?;
if filter.filter_type != self.cf_store.get_filter_type() {
return Err(CompactFiltersError::InvalidResponse);
}
match self.headers_store.get_height_for(&filter.block_hash)? {
Some(height) => filters_map.insert(height, filter.filter),
None => return Err(CompactFiltersError::InvalidFilter),
};
}
Ok(filters_map)
};
let start_height = index * 1000 + 1;
let mut already_processed = 0;
if start_height < self.skip_blocks {
status = self.cf_store.prune_filters(index, checkpoint)?;
}
let stop_height = std::cmp::min(current_height, start_height + 999);
let stop_hash = self.headers_store.get_block_hash(stop_height)?.unwrap();
if let BundleStatus::Init = status {
log::trace!("status: Init");
let resp = peer.get_cf_headers(0x00, start_height as u32, stop_hash)?;
assert!(resp.previous_filter_header == checkpoint);
status =
self.cf_store
.advance_to_cf_headers(index, checkpoint, resp.filter_hashes)?;
}
if let BundleStatus::Tip { cf_filters } = status {
log::trace!("status: Tip (beginning) ");
already_processed = cf_filters.len();
let headers_resp = peer.get_cf_headers(0x00, start_height as u32, stop_hash)?;
let cf_headers = match self.cf_store.advance_to_cf_headers(
index,
checkpoint,
headers_resp.filter_hashes,
)? {
BundleStatus::CfHeaders { cf_headers } => cf_headers,
_ => return Err(CompactFiltersError::InvalidResponse),
};
peer.get_cf_filters(
self.cf_store.get_filter_type(),
(start_height + cf_filters.len()) as u32,
stop_hash,
)?;
let expected_filters = stop_height - start_height + 1 - cf_filters.len();
let filters_map = process_received_filters(expected_filters)?;
let filters = cf_filters
.into_iter()
.enumerate()
.chain(filters_map.into_iter())
.collect();
status = self
.cf_store
.advance_to_cf_filters(index, checkpoint, cf_headers, filters)?;
}
if let BundleStatus::CfHeaders { cf_headers } = status {
log::trace!("status: CFHeaders");
peer.get_cf_filters(
self.cf_store.get_filter_type(),
start_height as u32,
stop_hash,
)?;
let expected_filters = stop_height - start_height + 1;
let filters_map = process_received_filters(expected_filters)?;
status = self.cf_store.advance_to_cf_filters(
index,
checkpoint,
cf_headers,
filters_map.into_iter().collect(),
)?;
}
if let BundleStatus::CFilters { cf_filters } = status {
log::trace!("status: CFilters");
let last_sync_buried_height =
(start_height + already_processed).saturating_sub(BURIED_CONFIRMATIONS);
for (filter_index, filter) in cf_filters.iter().enumerate() {
let height = filter_index + start_height;
// do not download blocks that were already "buried" since the last sync
if height < last_sync_buried_height {
continue;
}
let block_hash = self.headers_store.get_block_hash(height)?.unwrap();
// TODO: also download random blocks?
if process(&block_hash, &BlockFilter::new(filter))? {
log::debug!("Downloading block {}", block_hash);
let block = peer
.get_block(block_hash)?
.ok_or(CompactFiltersError::MissingBlock)?;
self.headers_store.save_full_block(&block, height)?;
}
}
status = BundleStatus::Processed { cf_filters };
}
if let BundleStatus::Processed { cf_filters } = status {
log::trace!("status: Processed");
if current_height - stop_height > 1000 {
status = self.cf_store.prune_filters(index, checkpoint)?;
} else {
status = self.cf_store.mark_as_tip(index, cf_filters, checkpoint)?;
}
completed_bundle(index)?;
}
if let BundleStatus::Pruned = status {
log::trace!("status: Pruned");
}
if let BundleStatus::Tip { .. } = status {
log::trace!("status: Tip");
}
}
Ok(())
}
}
pub fn sync_headers<F>(
peer: Arc<Peer>,
store: Arc<ChainStore<Full>>,
sync_fn: F,
) -> Result<Option<ChainStore<Snapshot>>, CompactFiltersError>
where
F: Fn(usize) -> Result<(), Error>,
{
let locators = store.get_locators()?;
let locators_vec = locators.iter().map(|(hash, _)| hash).cloned().collect();
let locators_map: HashMap<_, _> = locators.into_iter().collect();
peer.send(NetworkMessage::GetHeaders(GetHeadersMessage::new(
locators_vec,
Default::default(),
)))?;
let (mut snapshot, mut last_hash) = if let NetworkMessage::Headers(headers) = peer
.recv("headers", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?
{
if headers.is_empty() {
return Ok(None);
}
match locators_map.get(&headers[0].prev_blockhash) {
None => return Err(CompactFiltersError::InvalidHeaders),
Some(from) => (store.start_snapshot(*from)?, headers[0].prev_blockhash),
}
} else {
return Err(CompactFiltersError::InvalidResponse);
};
let mut sync_height = store.get_height()?;
while sync_height < peer.get_version().start_height as usize {
peer.send(NetworkMessage::GetHeaders(GetHeadersMessage::new(
vec![last_hash],
Default::default(),
)))?;
if let NetworkMessage::Headers(headers) = peer
.recv("headers", Some(Duration::from_secs(TIMEOUT_SECS)))?
.ok_or(CompactFiltersError::Timeout)?
{
let batch_len = headers.len();
last_hash = snapshot.apply(sync_height, headers)?;
sync_height += batch_len;
sync_fn(sync_height)?;
} else {
return Err(CompactFiltersError::InvalidResponse);
}
}
Ok(Some(snapshot))
}


@@ -1,147 +1,319 @@
use std::collections::HashSet;
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Electrum
//!
//! This module defines a [`Blockchain`] struct that wraps an [`electrum_client::Client`]
//! and implements the logic required to populate the wallet's [database](crate::database::Database) by
//! querying the inner client.
//!
//! ## Example
//!
//! ```no_run
//! # use bdk::blockchain::electrum::ElectrumBlockchain;
//! let client = electrum_client::Client::new("ssl://electrum.blockstream.info:50002")?;
//! let blockchain = ElectrumBlockchain::from(client);
//! # Ok::<(), bdk::Error>(())
//! ```
use std::collections::{HashMap, HashSet};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use bitcoin::{Script, Transaction, Txid};
use bitcoin::{Transaction, Txid};
use electrum_client::tokio::io::{AsyncRead, AsyncWrite};
use electrum_client::Client;
use electrum_client::{Client, ConfigBuilder, ElectrumApi, Socks5Config};
use self::utils::{ELSGetHistoryRes, ELSListUnspentRes, ElectrumLikeSync};
use super::script_sync::Request;
use super::*;
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::database::{BatchDatabase, Database};
use crate::error::Error;
use crate::{BlockTime, FeeRate};
pub struct ElectrumBlockchain<T: AsyncRead + AsyncWrite + Send>(Option<Client<T>>);
/// Wrapper over an Electrum Client that implements the required blockchain traits
///
/// ## Example
/// See the [`blockchain::electrum`](crate::blockchain::electrum) module for a usage example.
pub struct ElectrumBlockchain {
client: Client,
stop_gap: usize,
}
impl<T: AsyncRead + AsyncWrite + Send> std::convert::From<Client<T>> for ElectrumBlockchain<T> {
fn from(client: Client<T>) -> Self {
ElectrumBlockchain(Some(client))
impl std::convert::From<Client> for ElectrumBlockchain {
fn from(client: Client) -> Self {
ElectrumBlockchain {
client,
stop_gap: 20,
}
}
}
impl<T: AsyncRead + AsyncWrite + Send> Blockchain for ElectrumBlockchain<T> {
fn offline() -> Self {
ElectrumBlockchain(None)
impl Blockchain for ElectrumBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
vec![
Capability::FullHistory,
Capability::GetAnyTx,
Capability::AccurateFees,
]
.into_iter()
.collect()
}
fn is_online(&self) -> bool {
self.0.is_some()
}
}
#[async_trait(?Send)]
impl<T: AsyncRead + AsyncWrite + Send> OnlineBlockchain for ElectrumBlockchain<T> {
async fn get_capabilities(&self) -> HashSet<Capability> {
vec![Capability::FullHistory, Capability::GetAnyTx]
.into_iter()
.collect()
}
async fn setup<D: BatchDatabase + DatabaseUtils, P: Progress>(
&mut self,
stop_gap: Option<usize>,
fn setup<D: BatchDatabase, P: Progress>(
&self,
database: &mut D,
progress_update: P,
_progress_update: P,
) -> Result<(), Error> {
self.0
.as_mut()
.ok_or(Error::OfflineClient)?
.electrum_like_setup(stop_gap, database, progress_update)
.await
let mut request = script_sync::start(database, self.stop_gap)?;
let mut block_times = HashMap::<u32, u32>::new();
let mut txid_to_height = HashMap::<Txid, u32>::new();
let mut tx_cache = TxCache::new(database, &self.client);
let chunk_size = self.stop_gap;
// The electrum server has sometimes been inconsistent in its responses during
// sync. For example, we do a batch request of transactions and the response
// contains fewer transactions than in the request. This should never happen,
// but we don't want to panic.
let electrum_goof = || Error::Generic("electrum server misbehaving".to_string());
let batch_update = loop {
request = match request {
Request::Script(script_req) => {
let scripts = script_req.request().take(chunk_size);
let txids_per_script: Vec<Vec<_>> = self
.client
.batch_script_get_history(scripts)
.map_err(Error::Electrum)?
.into_iter()
.map(|txs| {
txs.into_iter()
.map(|tx| {
let tx_height = match tx.height {
none if none <= 0 => None,
height => {
txid_to_height.insert(tx.tx_hash, height as u32);
Some(height as u32)
}
};
(tx.tx_hash, tx_height)
})
.collect()
})
.collect();
script_req.satisfy(txids_per_script)?
}
Request::Conftime(conftime_req) => {
// collect up to chunk_size heights to fetch from electrum
let needs_block_height = {
let mut needs_block_height_iter = conftime_req
.request()
.filter_map(|txid| txid_to_height.get(txid).cloned())
.filter(|height| block_times.get(height).is_none());
let mut needs_block_height = HashSet::new();
while needs_block_height.len() < chunk_size {
match needs_block_height_iter.next() {
Some(height) => needs_block_height.insert(height),
None => break,
};
}
needs_block_height
};
let new_block_headers = self
.client
.batch_block_header(needs_block_height.iter().cloned())?;
for (height, header) in needs_block_height.into_iter().zip(new_block_headers) {
block_times.insert(height, header.time);
}
let conftimes = conftime_req
.request()
.take(chunk_size)
.map(|txid| {
let confirmation_time = txid_to_height
.get(txid)
.map(|height| {
let timestamp =
*block_times.get(height).ok_or_else(electrum_goof)?;
Result::<_, Error>::Ok(BlockTime {
height: *height,
timestamp: timestamp.into(),
})
})
.transpose()?;
Ok(confirmation_time)
})
.collect::<Result<_, Error>>()?;
conftime_req.satisfy(conftimes)?
}
Request::Tx(tx_req) => {
let needs_full = tx_req.request().take(chunk_size);
tx_cache.save_txs(needs_full.clone())?;
let full_transactions = needs_full
.map(|txid| tx_cache.get(*txid).ok_or_else(electrum_goof))
.collect::<Result<Vec<_>, _>>()?;
let input_txs = full_transactions.iter().flat_map(|tx| {
tx.input
.iter()
.filter(|input| !input.previous_output.is_null())
.map(|input| &input.previous_output.txid)
});
tx_cache.save_txs(input_txs)?;
let full_details = full_transactions
.into_iter()
.map(|tx| {
let prev_outputs = tx
.input
.iter()
.map(|input| {
if input.previous_output.is_null() {
return Ok(None);
}
let prev_tx = tx_cache
.get(input.previous_output.txid)
.ok_or_else(electrum_goof)?;
let txout = prev_tx
.output
.get(input.previous_output.vout as usize)
.ok_or_else(electrum_goof)?;
Ok(Some(txout.clone()))
})
.collect::<Result<Vec<_>, Error>>()?;
Ok((prev_outputs, tx))
})
.collect::<Result<Vec<_>, Error>>()?;
tx_req.satisfy(full_details)?
}
Request::Finish(batch_update) => break batch_update,
}
};
database.commit_batch(batch_update)?;
Ok(())
}
async fn get_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
.transaction_get(txid)
.await
.map(Option::Some)?)
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(self.client.transaction_get(txid).map(Option::Some)?)
}
async fn broadcast(&mut self, tx: &Transaction) -> Result<(), Error> {
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
.transaction_broadcast(tx)
.await
.map(|_| ())?)
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
Ok(self.client.transaction_broadcast(tx).map(|_| ())?)
}
async fn get_height(&mut self) -> Result<usize, Error> {
fn get_height(&self) -> Result<u32, Error> {
// TODO: unsubscribe when added to the client, or is there a better call to use here?
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
.client
.block_headers_subscribe()
.await
.map(|data| data.height)?)
.map(|data| data.height as u32)?)
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
Ok(FeeRate::from_btc_per_kvb(
self.client.estimate_fee(target)? as f32
))
}
}
#[async_trait(?Send)]
impl<T: AsyncRead + AsyncWrite + Send> ElectrumLikeSync for Client<T> {
async fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSGetHistoryRes>>, Error> {
self.batch_script_get_history(scripts)
.await
.map(|v| {
v.into_iter()
.map(|v| {
v.into_iter()
.map(
|electrum_client::GetHistoryRes {
height, tx_hash, ..
}| ELSGetHistoryRes {
height,
tx_hash,
},
)
.collect()
})
.collect()
})
.map_err(Error::Electrum)
struct TxCache<'a, 'b, D> {
db: &'a D,
client: &'b Client,
cache: HashMap<Txid, Transaction>,
}
impl<'a, 'b, D: Database> TxCache<'a, 'b, D> {
fn new(db: &'a D, client: &'b Client) -> Self {
TxCache {
db,
client,
cache: HashMap::default(),
}
}
fn save_txs<'c>(&mut self, txids: impl Iterator<Item = &'c Txid>) -> Result<(), Error> {
let mut need_fetch = vec![];
for txid in txids {
if self.cache.get(txid).is_some() {
continue;
} else if let Some(transaction) = self.db.get_raw_tx(txid)? {
self.cache.insert(*txid, transaction);
} else {
need_fetch.push(txid);
}
}
if !need_fetch.is_empty() {
let txs = self
.client
.batch_transaction_get(need_fetch.clone())
.map_err(Error::Electrum)?;
for (tx, _txid) in txs.into_iter().zip(need_fetch) {
debug_assert_eq!(*_txid, tx.txid());
self.cache.insert(tx.txid(), tx);
}
}
Ok(())
}
async fn els_batch_script_list_unspent<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSListUnspentRes>>, Error> {
self.batch_script_list_unspent(scripts)
.await
.map(|v| {
v.into_iter()
.map(|v| {
v.into_iter()
.map(
|electrum_client::ListUnspentRes {
height,
tx_hash,
tx_pos,
..
}| ELSListUnspentRes {
height,
tx_hash,
tx_pos,
},
)
.collect()
})
.collect()
})
.map_err(Error::Electrum)
}
async fn els_transaction_get(&mut self, txid: &Txid) -> Result<Transaction, Error> {
self.transaction_get(txid).await.map_err(Error::Electrum)
fn get(&self, txid: Txid) -> Option<Transaction> {
self.cache.get(&txid).map(Clone::clone)
}
}
/// Configuration for an [`ElectrumBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct ElectrumBlockchainConfig {
    /// URL of the Electrum server (such as ElectrumX, Esplora, or BWT); may start with `ssl://` or `tcp://` and include a port
    ///
    /// e.g. `ssl://electrum.blockstream.info:60002`
pub url: String,
/// URL of the socks5 proxy server or a Tor service
pub socks5: Option<String>,
/// Request retry count
pub retry: u8,
/// Request timeout (seconds)
pub timeout: Option<u8>,
/// Stop searching addresses for transactions after finding an unused gap of this length
pub stop_gap: usize,
}
impl ConfigurableBlockchain for ElectrumBlockchain {
type Config = ElectrumBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let socks5 = config.socks5.as_ref().map(Socks5Config::new);
let electrum_config = ConfigBuilder::new()
.retry(config.retry)
.timeout(config.timeout)?
.socks5(socks5)?
.build();
Ok(ElectrumBlockchain {
client: Client::from_config(config.url.as_str(), electrum_config)?,
stop_gap: config.stop_gap,
})
}
}
#[cfg(test)]
#[cfg(feature = "test-electrum")]
crate::bdk_blockchain_tests! {
fn test_instance(test_client: &TestClient) -> ElectrumBlockchain {
ElectrumBlockchain::from(Client::new(&test_client.electrsd.electrum_url).unwrap())
}
}


@@ -1,314 +0,0 @@
use std::collections::HashSet;
use futures::stream::{self, StreamExt, TryStreamExt};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use serde::Deserialize;
use reqwest::Client;
use reqwest::StatusCode;
use bitcoin::consensus::{deserialize, serialize};
use bitcoin::hashes::hex::ToHex;
use bitcoin::hashes::{sha256, Hash};
use bitcoin::{Script, Transaction, Txid};
use self::utils::{ELSGetHistoryRes, ELSListUnspentRes, ElectrumLikeSync};
use super::*;
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::error::Error;
#[derive(Debug)]
pub struct UrlClient {
url: String,
client: Client,
}
#[derive(Debug)]
pub struct EsploraBlockchain(Option<UrlClient>);
impl std::convert::From<UrlClient> for EsploraBlockchain {
fn from(url_client: UrlClient) -> Self {
EsploraBlockchain(Some(url_client))
}
}
impl EsploraBlockchain {
pub fn new(base_url: &str) -> Self {
EsploraBlockchain(Some(UrlClient {
url: base_url.to_string(),
client: Client::new(),
}))
}
}
impl Blockchain for EsploraBlockchain {
fn offline() -> Self {
EsploraBlockchain(None)
}
fn is_online(&self) -> bool {
self.0.is_some()
}
}
#[async_trait(?Send)]
impl OnlineBlockchain for EsploraBlockchain {
async fn get_capabilities(&self) -> HashSet<Capability> {
vec![Capability::FullHistory, Capability::GetAnyTx]
.into_iter()
.collect()
}
async fn setup<D: BatchDatabase + DatabaseUtils, P: Progress>(
&mut self,
stop_gap: Option<usize>,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
self.0
.as_mut()
.ok_or(Error::OfflineClient)?
.electrum_like_setup(stop_gap, database, progress_update)
.await
}
async fn get_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
._get_tx(txid)
.await?)
}
async fn broadcast(&mut self, tx: &Transaction) -> Result<(), Error> {
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
._broadcast(tx)
.await?)
}
async fn get_height(&mut self) -> Result<usize, Error> {
Ok(self
.0
.as_mut()
.ok_or(Error::OfflineClient)?
._get_height()
.await?)
}
}
impl UrlClient {
fn script_to_scripthash(script: &Script) -> String {
sha256::Hash::hash(script.as_bytes()).into_inner().to_hex()
}
async fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
let resp = self
.client
.get(&format!("{}/api/tx/{}/raw", self.url, txid))
.send()
.await?;
if let StatusCode::NOT_FOUND = resp.status() {
return Ok(None);
}
Ok(Some(deserialize(&resp.error_for_status()?.bytes().await?)?))
}
async fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
self.client
.post(&format!("{}/api/tx", self.url))
.body(serialize(transaction).to_hex())
.send()
.await?
.error_for_status()?;
Ok(())
}
async fn _get_height(&self) -> Result<usize, EsploraError> {
Ok(self
.client
.get(&format!("{}/api/blocks/tip/height", self.url))
.send()
.await?
.error_for_status()?
.text()
.await?
.parse()?)
}
async fn _script_get_history(
&self,
script: &Script,
) -> Result<Vec<ELSGetHistoryRes>, EsploraError> {
let mut result = Vec::new();
let scripthash = Self::script_to_scripthash(script);
// Add the unconfirmed transactions first
result.extend(
self.client
.get(&format!(
"{}/api/scripthash/{}/txs/mempool",
self.url, scripthash
))
.send()
.await?
.error_for_status()?
.json::<Vec<EsploraGetHistory>>()
.await?
.into_iter()
.map(|x| ELSGetHistoryRes {
tx_hash: x.txid,
height: x.status.block_height.unwrap_or(0) as i32,
}),
);
debug!(
"Found {} mempool txs for {} - {:?}",
result.len(),
scripthash,
script
);
// Then go through all the pages of confirmed transactions
let mut last_txid = String::new();
loop {
let response = self
.client
.get(&format!(
"{}/api/scripthash/{}/txs/chain/{}",
self.url, scripthash, last_txid
))
.send()
.await?
.error_for_status()?
.json::<Vec<EsploraGetHistory>>()
.await?;
let len = response.len();
if let Some(elem) = response.last() {
last_txid = elem.txid.to_hex();
}
debug!("... adding {} confirmed transactions", len);
result.extend(response.into_iter().map(|x| ELSGetHistoryRes {
tx_hash: x.txid,
height: x.status.block_height.unwrap_or(0) as i32,
}));
if len < 25 {
break;
}
}
Ok(result)
}
async fn _script_list_unspent(
&self,
script: &Script,
) -> Result<Vec<ELSListUnspentRes>, EsploraError> {
Ok(self
.client
.get(&format!(
"{}/api/scripthash/{}/utxo",
self.url,
Self::script_to_scripthash(script)
))
.send()
.await?
.error_for_status()?
.json::<Vec<EsploraListUnspent>>()
.await?
.into_iter()
.map(|x| ELSListUnspentRes {
tx_hash: x.txid,
height: x.status.block_height.unwrap_or(0),
tx_pos: x.vout,
})
.collect())
}
}
#[async_trait(?Send)]
impl ElectrumLikeSync for UrlClient {
async fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSGetHistoryRes>>, Error> {
Ok(stream::iter(scripts)
.then(|script| self._script_get_history(&script))
.try_collect()
.await?)
}
async fn els_batch_script_list_unspent<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSListUnspentRes>>, Error> {
Ok(stream::iter(scripts)
.then(|script| self._script_list_unspent(&script))
.try_collect()
.await?)
}
async fn els_transaction_get(&mut self, txid: &Txid) -> Result<Transaction, Error> {
Ok(self
._get_tx(txid)
.await?
.ok_or_else(|| EsploraError::TransactionNotFound(*txid))?)
}
}
#[derive(Deserialize)]
struct EsploraGetHistoryStatus {
block_height: Option<usize>,
}
#[derive(Deserialize)]
struct EsploraGetHistory {
txid: Txid,
status: EsploraGetHistoryStatus,
}
#[derive(Deserialize)]
struct EsploraListUnspent {
txid: Txid,
vout: usize,
status: EsploraGetHistoryStatus,
}
#[derive(Debug)]
pub enum EsploraError {
Reqwest(reqwest::Error),
Parsing(std::num::ParseIntError),
BitcoinEncoding(bitcoin::consensus::encode::Error),
TransactionNotFound(Txid),
}
impl From<reqwest::Error> for EsploraError {
fn from(other: reqwest::Error) -> Self {
EsploraError::Reqwest(other)
}
}
impl From<std::num::ParseIntError> for EsploraError {
fn from(other: std::num::ParseIntError) -> Self {
EsploraError::Parsing(other)
}
}
impl From<bitcoin::consensus::encode::Error> for EsploraError {
fn from(other: bitcoin::consensus::encode::Error) -> Self {
EsploraError::BitcoinEncoding(other)
}
}


@@ -0,0 +1,117 @@
//! Structs from the Esplora API
//!
//! see: <https://github.com/Blockstream/esplora/blob/master/API.md>
use crate::BlockTime;
use bitcoin::{OutPoint, Script, Transaction, TxIn, TxOut, Txid};
#[derive(serde::Deserialize, Clone, Debug)]
pub struct PrevOut {
pub value: u64,
pub scriptpubkey: Script,
}
#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vin {
pub txid: Txid,
pub vout: u32,
// None if coinbase
pub prevout: Option<PrevOut>,
pub scriptsig: Script,
#[serde(deserialize_with = "deserialize_witness")]
pub witness: Vec<Vec<u8>>,
pub sequence: u32,
pub is_coinbase: bool,
}
#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vout {
pub value: u64,
pub scriptpubkey: Script,
}
#[derive(serde::Deserialize, Clone, Debug)]
pub struct TxStatus {
pub confirmed: bool,
pub block_height: Option<u32>,
pub block_time: Option<u64>,
}
#[derive(serde::Deserialize, Clone, Debug)]
pub struct Tx {
pub txid: Txid,
pub version: i32,
pub locktime: u32,
pub vin: Vec<Vin>,
pub vout: Vec<Vout>,
pub status: TxStatus,
pub fee: u64,
}
impl Tx {
pub fn to_tx(&self) -> Transaction {
Transaction {
version: self.version,
lock_time: self.locktime,
input: self
.vin
.iter()
.cloned()
.map(|vin| TxIn {
previous_output: OutPoint {
txid: vin.txid,
vout: vin.vout,
},
script_sig: vin.scriptsig,
sequence: vin.sequence,
witness: vin.witness,
})
.collect(),
output: self
.vout
.iter()
.cloned()
.map(|vout| TxOut {
value: vout.value,
script_pubkey: vout.scriptpubkey,
})
.collect(),
}
}
pub fn confirmation_time(&self) -> Option<BlockTime> {
match self.status {
TxStatus {
confirmed: true,
block_height: Some(height),
block_time: Some(timestamp),
} => Some(BlockTime { timestamp, height }),
_ => None,
}
}
pub fn previous_outputs(&self) -> Vec<Option<TxOut>> {
self.vin
.iter()
.cloned()
.map(|vin| {
vin.prevout.map(|po| TxOut {
script_pubkey: po.scriptpubkey,
value: po.value,
})
})
.collect()
}
}
fn deserialize_witness<'de, D>(d: D) -> Result<Vec<Vec<u8>>, D::Error>
where
D: serde::de::Deserializer<'de>,
{
use crate::serde::Deserialize;
use bitcoin::hashes::hex::FromHex;
let list = Vec::<String>::deserialize(d)?;
list.into_iter()
.map(|hex_str| Vec::<u8>::from_hex(&hex_str))
.collect::<Result<Vec<Vec<u8>>, _>>()
.map_err(serde::de::Error::custom)
}


@@ -0,0 +1,212 @@
//! Esplora
//!
//! This module defines an [`EsploraBlockchain`] struct that can query an Esplora
//! backend and populate the wallet's [database](crate::database::Database) by querying the inner client.
//!
//! ## Example
//!
//! ```no_run
//! # use bdk::blockchain::esplora::EsploraBlockchain;
//! let blockchain = EsploraBlockchain::new("https://blockstream.info/testnet/api", 20);
//! # Ok::<(), bdk::Error>(())
//! ```
//!
//! Esplora blockchain can use either `ureq` or `reqwest` for the HTTP client
//! depending on your needs (blocking or async respectively).
//!
//! Please note: to configure the Esplora HTTP client correctly, use one of:
//! Blocking: `--features='esplora,ureq'`
//! Async: `--features='async-interface,esplora,reqwest' --no-default-features`
use std::collections::HashMap;
use std::fmt;
use std::io;
use bitcoin::consensus;
use bitcoin::{BlockHash, Txid};
use crate::error::Error;
use crate::FeeRate;
#[cfg(feature = "reqwest")]
mod reqwest;
#[cfg(feature = "reqwest")]
pub use self::reqwest::*;
#[cfg(feature = "ureq")]
mod ureq;
#[cfg(feature = "ureq")]
pub use self::ureq::*;
mod api;
fn into_fee_rate(target: usize, estimates: HashMap<String, f64>) -> Result<FeeRate, Error> {
let fee_val = {
let mut pairs = estimates
.into_iter()
.filter_map(|(k, v)| Some((k.parse::<usize>().ok()?, v)))
.collect::<Vec<_>>();
pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
pairs
.into_iter()
.find(|(k, _)| k <= &target)
.map(|(_, v)| v)
.unwrap_or(1.0)
};
Ok(FeeRate::from_sat_per_vb(fee_val as f32))
}
/// Errors that can happen during a sync with [`EsploraBlockchain`]
#[derive(Debug)]
pub enum EsploraError {
/// Error during ureq HTTP request
#[cfg(feature = "ureq")]
Ureq(::ureq::Error),
/// Transport error during the ureq HTTP call
#[cfg(feature = "ureq")]
UreqTransport(::ureq::Transport),
/// Error during reqwest HTTP request
#[cfg(feature = "reqwest")]
Reqwest(::reqwest::Error),
/// HTTP response error
HttpResponse(u16),
/// IO error during ureq response read
Io(io::Error),
/// No header found in ureq response
NoHeader,
/// Invalid number returned
Parsing(std::num::ParseIntError),
/// Invalid Bitcoin data returned
BitcoinEncoding(bitcoin::consensus::encode::Error),
/// Invalid Hex data returned
Hex(bitcoin::hashes::hex::Error),
/// Transaction not found
TransactionNotFound(Txid),
/// Header height not found
HeaderHeightNotFound(u32),
/// Header hash not found
HeaderHashNotFound(BlockHash),
}
impl fmt::Display for EsploraError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
/// Configuration for an [`EsploraBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct EsploraBlockchainConfig {
/// Base URL of the esplora service
///
/// eg. `https://blockstream.info/api/`
pub base_url: String,
/// Optional URL of the proxy to use to make requests to the Esplora server
///
/// The string should be formatted as: `<protocol>://<user>:<password>@host:<port>`.
///
/// Note that the format of this value and the supported protocols change slightly between the
/// sync version of esplora (using `ureq`) and the async version (using `reqwest`). For more
/// details check with the documentation of the two crates. Both of them are compiled with
/// the `socks` feature enabled.
///
/// The proxy is ignored when targeting `wasm32`.
#[serde(skip_serializing_if = "Option::is_none")]
pub proxy: Option<String>,
/// Number of parallel requests sent to the esplora service (default: 4)
#[serde(skip_serializing_if = "Option::is_none")]
pub concurrency: Option<u8>,
/// Stop searching addresses for transactions after finding an unused gap of this length.
pub stop_gap: usize,
/// Socket timeout.
#[serde(skip_serializing_if = "Option::is_none")]
pub timeout: Option<u64>,
}
impl EsploraBlockchainConfig {
/// Create a config with default values given the base URL and stop gap
pub fn new(base_url: String, stop_gap: usize) -> Self {
Self {
base_url,
proxy: None,
timeout: None,
stop_gap,
concurrency: None,
}
}
}
impl std::error::Error for EsploraError {}
#[cfg(feature = "ureq")]
impl_error!(::ureq::Transport, UreqTransport, EsploraError);
#[cfg(feature = "reqwest")]
impl_error!(::reqwest::Error, Reqwest, EsploraError);
impl_error!(io::Error, Io, EsploraError);
impl_error!(std::num::ParseIntError, Parsing, EsploraError);
impl_error!(consensus::encode::Error, BitcoinEncoding, EsploraError);
impl_error!(bitcoin::hashes::hex::Error, Hex, EsploraError);
#[cfg(test)]
#[cfg(feature = "test-esplora")]
crate::bdk_blockchain_tests! {
fn test_instance(test_client: &TestClient) -> EsploraBlockchain {
EsploraBlockchain::new(&format!("http://{}",test_client.electrsd.esplora_url.as_ref().unwrap()), 20)
}
}
const DEFAULT_CONCURRENT_REQUESTS: u8 = 4;
#[cfg(test)]
mod test {
use super::*;
#[test]
fn feerate_parsing() {
let esplora_fees = serde_json::from_str::<HashMap<String, f64>>(
r#"{
"25": 1.015,
"5": 2.3280000000000003,
"12": 2.0109999999999997,
"15": 1.018,
"17": 1.018,
"11": 2.0109999999999997,
"3": 3.01,
"2": 4.9830000000000005,
"6": 2.2359999999999998,
"21": 1.018,
"13": 1.081,
"7": 2.2359999999999998,
"8": 2.2359999999999998,
"16": 1.018,
"20": 1.018,
"22": 1.017,
"23": 1.017,
"504": 1,
"9": 2.2359999999999998,
"14": 1.018,
"10": 2.0109999999999997,
"24": 1.017,
"1008": 1,
"1": 4.9830000000000005,
"4": 2.3280000000000003,
"19": 1.018,
"144": 1,
"18": 1.018
}
"#,
)
.unwrap();
assert_eq!(
into_fee_rate(6, esplora_fees.clone()).unwrap(),
FeeRate::from_sat_per_vb(2.236)
);
assert_eq!(
into_fee_rate(26, esplora_fees).unwrap(),
FeeRate::from_sat_per_vb(1.015),
"should inherit from value for 25"
);
}
}
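The selection rule in `into_fee_rate` above (and exercised by the `feerate_parsing` test) can be illustrated with a dependency-free sketch; `select_fee_estimate` is a hypothetical stand-in that returns the raw sat/vB value instead of a `FeeRate`:

```rust
use std::collections::HashMap;

// Standalone sketch of the selection logic in `into_fee_rate`: pick the
// estimate for the largest confirmation target that does not exceed the
// requested one, falling back to 1.0 sat/vB when no such target exists.
fn select_fee_estimate(target: usize, estimates: HashMap<String, f64>) -> f64 {
    let mut pairs: Vec<(usize, f64)> = estimates
        .into_iter()
        .filter_map(|(k, v)| Some((k.parse::<usize>().ok()?, v)))
        .collect();
    // Sort targets descending so `find` returns the largest target <= `target`.
    pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
    pairs
        .into_iter()
        .find(|(k, _)| *k <= target)
        .map(|(_, v)| v)
        .unwrap_or(1.0)
}

fn main() {
    let mut estimates = HashMap::new();
    estimates.insert("1".to_string(), 5.0);
    estimates.insert("6".to_string(), 2.2);
    estimates.insert("25".to_string(), 1.0);
    // A 10-block target inherits the 6-block estimate.
    assert_eq!(select_fee_estimate(10, estimates.clone()), 2.2);
    // Anything above 25 blocks inherits the 25-block estimate.
    assert_eq!(select_fee_estimate(100, estimates), 1.0);
}
```

This mirrors the "should inherit from value for 25" assertion in the test module above.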


@@ -0,0 +1,331 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Esplora by way of `reqwest` HTTP client.
use std::collections::{HashMap, HashSet};
use bitcoin::consensus::{deserialize, serialize};
use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::hashes::{sha256, Hash};
use bitcoin::{BlockHeader, Script, Transaction, Txid};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use ::reqwest::{Client, StatusCode};
use futures::stream::{FuturesOrdered, TryStreamExt};
use super::api::Tx;
use crate::blockchain::esplora::EsploraError;
use crate::blockchain::*;
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;
#[derive(Debug)]
struct UrlClient {
url: String,
// We use the async client instead of the blocking one because it automatically uses `fetch`
// when the target platform is wasm32.
client: Client,
concurrency: u8,
}
/// Structure that implements the logic to sync with Esplora
///
/// ## Example
/// See the [`blockchain::esplora`](crate::blockchain::esplora) module for a usage example.
#[derive(Debug)]
pub struct EsploraBlockchain {
url_client: UrlClient,
stop_gap: usize,
}
impl std::convert::From<UrlClient> for EsploraBlockchain {
fn from(url_client: UrlClient) -> Self {
EsploraBlockchain {
url_client,
stop_gap: 20,
}
}
}
impl EsploraBlockchain {
/// Create a new instance of the client from a base URL and `stop_gap`.
pub fn new(base_url: &str, stop_gap: usize) -> Self {
EsploraBlockchain {
url_client: UrlClient {
url: base_url.to_string(),
client: Client::new(),
concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
},
stop_gap,
}
}
/// Set the concurrency to use when doing batch queries against the Esplora instance.
pub fn with_concurrency(mut self, concurrency: u8) -> Self {
self.url_client.concurrency = concurrency;
self
}
}
#[maybe_async]
impl Blockchain for EsploraBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
vec![
Capability::FullHistory,
Capability::GetAnyTx,
Capability::AccurateFees,
]
.into_iter()
.collect()
}
fn setup<D: BatchDatabase, P: Progress>(
&self,
database: &mut D,
_progress_update: P,
) -> Result<(), Error> {
use crate::blockchain::script_sync::Request;
let mut request = script_sync::start(database, self.stop_gap)?;
let mut tx_index: HashMap<Txid, Tx> = HashMap::new();
let batch_update = loop {
request = match request {
Request::Script(script_req) => {
let futures: FuturesOrdered<_> = script_req
.request()
.take(self.url_client.concurrency as usize)
.map(|script| async move {
let mut related_txs: Vec<Tx> =
self.url_client._scripthash_txs(script, None).await?;
let n_confirmed =
related_txs.iter().filter(|tx| tx.status.confirmed).count();
// Esplora returns a page of at most 25 confirmed transactions. If we got
// 25 or more, keep requesting until a page comes back with fewer than 25.
if n_confirmed >= 25 {
loop {
let new_related_txs: Vec<Tx> = self
.url_client
._scripthash_txs(
script,
Some(related_txs.last().unwrap().txid),
)
.await?;
let n = new_related_txs.len();
related_txs.extend(new_related_txs);
// we've reached the end
if n < 25 {
break;
}
}
}
Result::<_, Error>::Ok(related_txs)
})
.collect();
let txs_per_script: Vec<Vec<Tx>> = await_or_block!(futures.try_collect())?;
let mut satisfaction = vec![];
for txs in txs_per_script {
satisfaction.push(
txs.iter()
.map(|tx| (tx.txid, tx.status.block_height))
.collect(),
);
for tx in txs {
tx_index.insert(tx.txid, tx);
}
}
script_req.satisfy(satisfaction)?
}
Request::Conftime(conftime_req) => {
let conftimes = conftime_req
.request()
.map(|txid| {
tx_index
.get(txid)
.expect("must be in index")
.confirmation_time()
})
.collect();
conftime_req.satisfy(conftimes)?
}
Request::Tx(tx_req) => {
let full_txs = tx_req
.request()
.map(|txid| {
let tx = tx_index.get(txid).expect("must be in index");
(tx.previous_outputs(), tx.to_tx())
})
.collect();
tx_req.satisfy(full_txs)?
}
Request::Finish(batch_update) => break batch_update,
}
};
database.commit_batch(batch_update)?;
Ok(())
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(await_or_block!(self.url_client._get_tx(txid))?)
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
Ok(await_or_block!(self.url_client._broadcast(tx))?)
}
fn get_height(&self) -> Result<u32, Error> {
Ok(await_or_block!(self.url_client._get_height())?)
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
let estimates = await_or_block!(self.url_client._get_fee_estimates())?;
super::into_fee_rate(target, estimates)
}
}
impl UrlClient {
async fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
let resp = self
.client
.get(&format!("{}/tx/{}/raw", self.url, txid))
.send()
.await?;
if let StatusCode::NOT_FOUND = resp.status() {
return Ok(None);
}
Ok(Some(deserialize(&resp.error_for_status()?.bytes().await?)?))
}
async fn _get_tx_no_opt(&self, txid: &Txid) -> Result<Transaction, EsploraError> {
match self._get_tx(txid).await {
Ok(Some(tx)) => Ok(tx),
Ok(None) => Err(EsploraError::TransactionNotFound(*txid)),
Err(e) => Err(e),
}
}
async fn _get_header(&self, block_height: u32) -> Result<BlockHeader, EsploraError> {
let resp = self
.client
.get(&format!("{}/block-height/{}", self.url, block_height))
.send()
.await?;
if let StatusCode::NOT_FOUND = resp.status() {
return Err(EsploraError::HeaderHeightNotFound(block_height));
}
let bytes = resp.bytes().await?;
let hash = std::str::from_utf8(&bytes)
.map_err(|_| EsploraError::HeaderHeightNotFound(block_height))?;
let resp = self
.client
.get(&format!("{}/block/{}/header", self.url, hash))
.send()
.await?;
let header = deserialize(&Vec::from_hex(&resp.text().await?)?)?;
Ok(header)
}
async fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
self.client
.post(&format!("{}/tx", self.url))
.body(serialize(transaction).to_hex())
.send()
.await?
.error_for_status()?;
Ok(())
}
async fn _get_height(&self) -> Result<u32, EsploraError> {
let req = self
.client
.get(&format!("{}/blocks/tip/height", self.url))
.send()
.await?;
Ok(req.error_for_status()?.text().await?.parse()?)
}
async fn _scripthash_txs(
&self,
script: &Script,
last_seen: Option<Txid>,
) -> Result<Vec<Tx>, EsploraError> {
let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
let url = match last_seen {
Some(last_seen) => format!(
"{}/scripthash/{}/txs/chain/{}",
self.url, script_hash, last_seen
),
None => format!("{}/scripthash/{}/txs", self.url, script_hash),
};
Ok(self
.client
.get(url)
.send()
.await?
.error_for_status()?
.json::<Vec<Tx>>()
.await?)
}
async fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
Ok(self
.client
.get(&format!("{}/fee-estimates", self.url,))
.send()
.await?
.error_for_status()?
.json::<HashMap<String, f64>>()
.await?)
}
}
impl ConfigurableBlockchain for EsploraBlockchain {
type Config = super::EsploraBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let map_e = |e: reqwest::Error| Error::Esplora(Box::new(e.into()));
let mut blockchain = EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap);
if let Some(concurrency) = config.concurrency {
blockchain.url_client.concurrency = concurrency;
}
let mut builder = Client::builder();
#[cfg(not(target_arch = "wasm32"))]
if let Some(proxy) = &config.proxy {
builder = builder.proxy(reqwest::Proxy::all(proxy).map_err(map_e)?);
}
#[cfg(not(target_arch = "wasm32"))]
if let Some(timeout) = config.timeout {
builder = builder.timeout(core::time::Duration::from_secs(timeout));
}
blockchain.url_client.client = builder.build().map_err(map_e)?;
Ok(blockchain)
}
}
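The paging loop inside `setup` above can be reduced to a dependency-free sketch of its termination rule; `collect_all` and `fetch_page` are hypothetical stand-ins for the client and `_scripthash_txs`:

```rust
// Sketch of the pagination rule used in `setup`: Esplora returns confirmed
// transactions in pages of 25, so the client keeps fetching (keyed by the
// last seen id) until a page comes back with fewer than 25 entries.
fn collect_all<F>(mut fetch_page: F) -> Vec<u64>
where
    F: FnMut(Option<u64>) -> Vec<u64>, // last-seen id -> next page
{
    let mut all = fetch_page(None);
    if all.len() >= 25 {
        loop {
            let page = fetch_page(all.last().copied());
            let n = page.len();
            all.extend(page);
            if n < 25 {
                break; // a short page means we've reached the end
            }
        }
    }
    all
}

fn main() {
    // Simulate a backend holding 60 "transactions" with ids 0..60.
    let data: Vec<u64> = (0..60).collect();
    let fetch = |last: Option<u64>| {
        let start = last.map(|l| l as usize + 1).unwrap_or(0);
        data[start..data.len().min(start + 25)].to_vec()
    };
    let all = collect_all(fetch);
    assert_eq!(all.len(), 60);
    assert_eq!(all, data);
}
```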


@@ -0,0 +1,371 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Esplora by way of `ureq` HTTP client.
use std::collections::{HashMap, HashSet};
use std::io;
use std::io::Read;
use std::time::Duration;
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use ureq::{Agent, Proxy, Response};
use bitcoin::consensus::{deserialize, serialize};
use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::hashes::{sha256, Hash};
use bitcoin::{BlockHeader, Script, Transaction, Txid};
use super::api::Tx;
use crate::blockchain::esplora::EsploraError;
use crate::blockchain::*;
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;
#[derive(Debug, Clone)]
struct UrlClient {
url: String,
agent: Agent,
}
/// Structure that implements the logic to sync with Esplora
///
/// ## Example
/// See the [`blockchain::esplora`](crate::blockchain::esplora) module for a usage example.
#[derive(Debug)]
pub struct EsploraBlockchain {
url_client: UrlClient,
stop_gap: usize,
concurrency: u8,
}
impl EsploraBlockchain {
/// Create a new instance of the client from a base URL and the `stop_gap`.
pub fn new(base_url: &str, stop_gap: usize) -> Self {
EsploraBlockchain {
url_client: UrlClient {
url: base_url.to_string(),
agent: Agent::new(),
},
concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
stop_gap,
}
}
/// Set the inner `ureq` agent.
pub fn with_agent(mut self, agent: Agent) -> Self {
self.url_client.agent = agent;
self
}
/// Set the number of parallel requests the client can make.
pub fn with_concurrency(mut self, concurrency: u8) -> Self {
self.concurrency = concurrency;
self
}
}
impl Blockchain for EsploraBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
vec![
Capability::FullHistory,
Capability::GetAnyTx,
Capability::AccurateFees,
]
.into_iter()
.collect()
}
fn setup<D: BatchDatabase, P: Progress>(
&self,
database: &mut D,
_progress_update: P,
) -> Result<(), Error> {
use crate::blockchain::script_sync::Request;
let mut request = script_sync::start(database, self.stop_gap)?;
let mut tx_index: HashMap<Txid, Tx> = HashMap::new();
let batch_update = loop {
request = match request {
Request::Script(script_req) => {
let scripts = script_req
.request()
.take(self.concurrency as usize)
.cloned();
let handles = scripts.map(move |script| {
let client = self.url_client.clone();
// make each request in its own thread.
std::thread::spawn(move || {
let mut related_txs: Vec<Tx> = client._scripthash_txs(&script, None)?;
let n_confirmed =
related_txs.iter().filter(|tx| tx.status.confirmed).count();
// Esplora returns a page of at most 25 confirmed transactions. If we got
// 25 or more, keep requesting until a page comes back with fewer than 25.
if n_confirmed >= 25 {
loop {
let new_related_txs: Vec<Tx> = client._scripthash_txs(
&script,
Some(related_txs.last().unwrap().txid),
)?;
let n = new_related_txs.len();
related_txs.extend(new_related_txs);
// we've reached the end
if n < 25 {
break;
}
}
}
Result::<_, Error>::Ok(related_txs)
})
});
let txs_per_script: Vec<Vec<Tx>> = handles
.map(|handle| handle.join().unwrap())
.collect::<Result<_, _>>()?;
let mut satisfaction = vec![];
for txs in txs_per_script {
satisfaction.push(
txs.iter()
.map(|tx| (tx.txid, tx.status.block_height))
.collect(),
);
for tx in txs {
tx_index.insert(tx.txid, tx);
}
}
script_req.satisfy(satisfaction)?
}
Request::Conftime(conftime_req) => {
let conftimes = conftime_req
.request()
.map(|txid| {
tx_index
.get(txid)
.expect("must be in index")
.confirmation_time()
})
.collect();
conftime_req.satisfy(conftimes)?
}
Request::Tx(tx_req) => {
let full_txs = tx_req
.request()
.map(|txid| {
let tx = tx_index.get(txid).expect("must be in index");
(tx.previous_outputs(), tx.to_tx())
})
.collect();
tx_req.satisfy(full_txs)?
}
Request::Finish(batch_update) => break batch_update,
}
};
database.commit_batch(batch_update)?;
Ok(())
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(self.url_client._get_tx(txid)?)
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
let _txid = self.url_client._broadcast(tx)?;
Ok(())
}
fn get_height(&self) -> Result<u32, Error> {
Ok(self.url_client._get_height()?)
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
let estimates = self.url_client._get_fee_estimates()?;
super::into_fee_rate(target, estimates)
}
}
impl UrlClient {
fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
let resp = self
.agent
.get(&format!("{}/tx/{}/raw", self.url, txid))
.call();
match resp {
Ok(resp) => Ok(Some(deserialize(&into_bytes(resp)?)?)),
Err(ureq::Error::Status(code, _)) => {
if is_status_not_found(code) {
return Ok(None);
}
Err(EsploraError::HttpResponse(code))
}
Err(e) => Err(EsploraError::Ureq(e)),
}
}
fn _get_tx_no_opt(&self, txid: &Txid) -> Result<Transaction, EsploraError> {
match self._get_tx(txid) {
Ok(Some(tx)) => Ok(tx),
Ok(None) => Err(EsploraError::TransactionNotFound(*txid)),
Err(e) => Err(e),
}
}
fn _get_header(&self, block_height: u32) -> Result<BlockHeader, EsploraError> {
let resp = self
.agent
.get(&format!("{}/block-height/{}", self.url, block_height))
.call();
let bytes = match resp {
Ok(resp) => Ok(into_bytes(resp)?),
Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
Err(e) => Err(EsploraError::Ureq(e)),
}?;
let hash = std::str::from_utf8(&bytes)
.map_err(|_| EsploraError::HeaderHeightNotFound(block_height))?;
let resp = self
.agent
.get(&format!("{}/block/{}/header", self.url, hash))
.call();
match resp {
Ok(resp) => Ok(deserialize(&Vec::from_hex(&resp.into_string()?)?)?),
Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
Err(e) => Err(EsploraError::Ureq(e)),
}
}
fn _broadcast(&self, transaction: &Transaction) -> Result<(), EsploraError> {
let resp = self
.agent
.post(&format!("{}/tx", self.url))
.send_string(&serialize(transaction).to_hex());
match resp {
Ok(_) => Ok(()), // Esplora returns the txid in the response body, but we discard it here.
Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
Err(e) => Err(EsploraError::Ureq(e)),
}
}
fn _get_height(&self) -> Result<u32, EsploraError> {
let resp = self
.agent
.get(&format!("{}/blocks/tip/height", self.url))
.call();
match resp {
Ok(resp) => Ok(resp.into_string()?.parse()?),
Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
Err(e) => Err(EsploraError::Ureq(e)),
}
}
fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
let resp = self
.agent
.get(&format!("{}/fee-estimates", self.url,))
.call();
let map = match resp {
Ok(resp) => {
let map: HashMap<String, f64> = resp.into_json()?;
Ok(map)
}
Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
Err(e) => Err(EsploraError::Ureq(e)),
}?;
Ok(map)
}
fn _scripthash_txs(
&self,
script: &Script,
last_seen: Option<Txid>,
) -> Result<Vec<Tx>, EsploraError> {
let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
let url = match last_seen {
Some(last_seen) => format!(
"{}/scripthash/{}/txs/chain/{}",
self.url, script_hash, last_seen
),
None => format!("{}/scripthash/{}/txs", self.url, script_hash),
};
Ok(self.agent.get(&url).call()?.into_json()?)
}
}
fn is_status_not_found(status: u16) -> bool {
status == 404
}
fn into_bytes(resp: Response) -> Result<Vec<u8>, io::Error> {
const BYTES_LIMIT: usize = 10 * 1_024 * 1_024;
let mut buf: Vec<u8> = vec![];
resp.into_reader()
.take((BYTES_LIMIT + 1) as u64)
.read_to_end(&mut buf)?;
if buf.len() > BYTES_LIMIT {
return Err(io::Error::new(
io::ErrorKind::Other,
"response too big for into_bytes",
));
}
Ok(buf)
}
impl ConfigurableBlockchain for EsploraBlockchain {
type Config = super::EsploraBlockchainConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let mut agent_builder = ureq::AgentBuilder::new();
if let Some(timeout) = config.timeout {
agent_builder = agent_builder.timeout(Duration::from_secs(timeout));
}
if let Some(proxy) = &config.proxy {
agent_builder = agent_builder
.proxy(Proxy::new(proxy).map_err(|e| Error::Esplora(Box::new(e.into())))?);
}
let mut blockchain = EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap)
.with_agent(agent_builder.build());
if let Some(concurrency) = config.concurrency {
blockchain = blockchain.with_concurrency(concurrency);
}
Ok(blockchain)
}
}
impl From<ureq::Error> for EsploraError {
fn from(e: ureq::Error) -> Self {
match e {
ureq::Error::Status(code, _) => EsploraError::HttpResponse(code),
e => EsploraError::Ureq(e),
}
}
}
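The blocking sync path above fans requests out to threads because `UrlClient` is `Clone`; the join-and-collect pattern it relies on can be sketched in isolation (`fan_out` is a hypothetical stand-in, and unlike the lazy iterator chain in `setup` it spawns all threads up front):

```rust
use std::thread;

// Sketch of the thread-per-request pattern in the blocking `setup`: each
// request runs in its own thread and the handles are joined back into a
// single `Result`, so the first error aborts the whole batch.
fn fan_out(inputs: Vec<u32>) -> Result<Vec<u32>, String> {
    let handles: Vec<_> = inputs
        .into_iter()
        .map(|i| {
            thread::spawn(move || -> Result<u32, String> {
                // Stand-in for `client._scripthash_txs(...)`.
                Ok(i * 2)
            })
        })
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect::<Result<Vec<_>, _>>()
}

fn main() {
    let out = fan_out(vec![1, 2, 3]).unwrap();
    assert_eq!(out, vec![2, 4, 6]);
}
```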


@@ -1,84 +1,178 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Blockchain backends
//!
//! This module provides the implementation of a few commonly-used backends like
//! [Electrum](crate::blockchain::electrum), [Esplora](crate::blockchain::esplora) and
//! [Compact Filters/Neutrino](crate::blockchain::compact_filters), along with a generalized trait
//! [`Blockchain`] that can be implemented to build customized backends.
use std::collections::HashSet;
use std::ops::Deref;
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Arc;
use bitcoin::{Transaction, Txid};
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;
pub mod utils;
#[cfg(any(
feature = "electrum",
feature = "esplora",
feature = "compact_filters",
feature = "rpc"
))]
pub mod any;
mod script_sync;
#[cfg(any(
feature = "electrum",
feature = "esplora",
feature = "compact_filters",
feature = "rpc"
))]
pub use any::{AnyBlockchain, AnyBlockchainConfig};
#[cfg(feature = "electrum")]
#[cfg_attr(docsrs, doc(cfg(feature = "electrum")))]
pub mod electrum;
#[cfg(feature = "electrum")]
pub use self::electrum::ElectrumBlockchain;
#[cfg(feature = "electrum")]
pub use self::electrum::ElectrumBlockchainConfig;
#[cfg(feature = "rpc")]
#[cfg_attr(docsrs, doc(cfg(feature = "rpc")))]
pub mod rpc;
#[cfg(feature = "rpc")]
pub use self::rpc::RpcBlockchain;
#[cfg(feature = "rpc")]
pub use self::rpc::RpcConfig;
#[cfg(feature = "esplora")]
#[cfg_attr(docsrs, doc(cfg(feature = "esplora")))]
pub mod esplora;
#[cfg(feature = "esplora")]
pub use self::esplora::EsploraBlockchain;
#[cfg(feature = "compact_filters")]
#[cfg_attr(docsrs, doc(cfg(feature = "compact_filters")))]
pub mod compact_filters;
#[cfg(feature = "compact_filters")]
pub use self::compact_filters::CompactFiltersBlockchain;
/// Capabilities that can be supported by a [`Blockchain`] backend
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum Capability {
/// Can recover the full history of a wallet and not only the set of currently spendable UTXOs
FullHistory,
/// Can fetch any historical transaction given its txid
GetAnyTx,
/// Can compute accurate fees for the transactions found during sync
AccurateFees,
}
/// Trait that defines the actions that must be supported by a blockchain backend
#[maybe_async]
pub trait Blockchain {
fn is_online(&self) -> bool;
/// Return the set of [`Capability`] supported by this backend
fn get_capabilities(&self) -> HashSet<Capability>;
fn offline() -> Self;
}
pub struct OfflineBlockchain;
impl Blockchain for OfflineBlockchain {
fn offline() -> Self {
OfflineBlockchain
}
fn is_online(&self) -> bool {
false
}
}
#[async_trait(?Send)]
pub trait OnlineBlockchain: Blockchain {
async fn get_capabilities(&self) -> HashSet<Capability>;
async fn setup<D: BatchDatabase + DatabaseUtils, P: Progress>(
&mut self,
stop_gap: Option<usize>,
/// Setup the backend and populate the internal database for the first time
///
/// This method is the equivalent of [`Blockchain::sync`], but it's guaranteed to only be
/// called once, at the first [`Wallet::sync`](crate::wallet::Wallet::sync).
///
/// The rationale behind the distinction between `sync` and `setup` is that some custom backends
/// might need to perform specific actions only the first time they are synced.
///
/// For types that do not have that distinction, only this method can be implemented, since
/// [`Blockchain::sync`] defaults to calling this internally if not overridden.
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error>;
async fn sync<D: BatchDatabase + DatabaseUtils, P: Progress>(
&mut self,
stop_gap: Option<usize>,
/// Populate the internal database with transactions and UTXOs
///
/// If not overridden, it defaults to calling [`Blockchain::setup`] internally.
///
/// This method should implement the logic required to iterate over the list of the wallet's
/// script_pubkeys using [`Database::iter_script_pubkeys`] and look for relevant transactions
/// in the blockchain to populate the database with [`BatchOperations::set_tx`] and
/// [`BatchOperations::set_utxo`].
///
/// This method should also take care of removing UTXOs that are seen as spent in the
/// blockchain, using [`BatchOperations::del_utxo`].
///
/// The `progress_update` object can be used to give the caller updates about the progress by using
/// [`Progress::update`].
///
/// [`Database::iter_script_pubkeys`]: crate::database::Database::iter_script_pubkeys
/// [`BatchOperations::set_tx`]: crate::database::BatchOperations::set_tx
/// [`BatchOperations::set_utxo`]: crate::database::BatchOperations::set_utxo
/// [`BatchOperations::del_utxo`]: crate::database::BatchOperations::del_utxo
fn sync<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
self.setup(stop_gap, database, progress_update).await
maybe_await!(self.setup(database, progress_update))
}
async fn get_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error>;
async fn broadcast(&mut self, tx: &Transaction) -> Result<(), Error>;
/// Fetch a transaction from the blockchain given its txid
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error>;
/// Broadcast a transaction
fn broadcast(&self, tx: &Transaction) -> Result<(), Error>;
async fn get_height(&mut self) -> Result<usize, Error>;
/// Return the current height
fn get_height(&self) -> Result<u32, Error>;
/// Estimate the fee rate required to confirm a transaction in a given `target` of blocks
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error>;
}
/// Trait for [`Blockchain`] types that can be created given a configuration
pub trait ConfigurableBlockchain: Blockchain + Sized {
/// Type that contains the configuration
type Config: std::fmt::Debug;
/// Create a new instance given a configuration
fn from_config(config: &Self::Config) -> Result<Self, Error>;
}
/// Data sent with a progress update over a [`channel`]
pub type ProgressData = (f32, Option<String>);
pub trait Progress {
/// Trait for types that can receive and process progress updates during [`Blockchain::sync`] and
/// [`Blockchain::setup`]
pub trait Progress: Send {
/// Send a new progress update
///
/// The `progress` value should be in the range 0.0 - 100.0, and the `message` value is an
/// optional text message that can be displayed to the user.
fn update(&self, progress: f32, message: Option<String>) -> Result<(), Error>;
}
/// Shortcut to create a [`channel`] (pair of [`Sender`] and [`Receiver`]) that can transport [`ProgressData`]
pub fn progress() -> (Sender<ProgressData>, Receiver<ProgressData>) {
channel()
}
impl Progress for Sender<ProgressData> {
fn update(&self, progress: f32, message: Option<String>) -> Result<(), Error> {
if progress < 0.0 || progress > 100.0 {
if !(0.0..=100.0).contains(&progress) {
return Err(Error::InvalidProgressValue(progress));
}
@@ -87,8 +181,11 @@ impl Progress for Sender<ProgressData> {
}
}
/// Type that implements [`Progress`] and drops every update received
#[derive(Clone, Copy)]
pub struct NoopProgress;
/// Create a new instance of [`NoopProgress`]
pub fn noop_progress() -> NoopProgress {
NoopProgress
}
@@ -98,3 +195,61 @@ impl Progress for NoopProgress {
Ok(())
}
}
/// Type that implements [`Progress`] and logs at level `INFO` every update received
#[derive(Clone, Copy)]
pub struct LogProgress;
/// Create a new instance of [`LogProgress`]
pub fn log_progress() -> LogProgress {
LogProgress
}
impl Progress for LogProgress {
fn update(&self, progress: f32, message: Option<String>) -> Result<(), Error> {
log::info!(
"Sync {:.3}%: `{}`",
progress,
message.unwrap_or_else(|| "".into())
);
Ok(())
}
}
#[maybe_async]
impl<T: Blockchain> Blockchain for Arc<T> {
fn get_capabilities(&self) -> HashSet<Capability> {
maybe_await!(self.deref().get_capabilities())
}
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
maybe_await!(self.deref().setup(database, progress_update))
}
fn sync<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
maybe_await!(self.deref().sync(database, progress_update))
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
maybe_await!(self.deref().get_tx(txid))
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
maybe_await!(self.deref().broadcast(tx))
}
fn get_height(&self) -> Result<u32, Error> {
maybe_await!(self.deref().get_height())
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
maybe_await!(self.deref().estimate_fee(target))
}
}
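The `Progress` contract above (a range-checked percentage plus an optional message) can be sketched standalone; `PrintProgress` is a hypothetical implementor mirroring the bounds check in `impl Progress for Sender<ProgressData>`:

```rust
// Standalone sketch of the `Progress` contract: implementors receive
// (percentage, optional message) updates and must reject values outside
// the 0.0..=100.0 range.
trait Progress {
    fn update(&self, progress: f32, message: Option<String>) -> Result<(), String>;
}

struct PrintProgress;

impl Progress for PrintProgress {
    fn update(&self, progress: f32, message: Option<String>) -> Result<(), String> {
        if !(0.0..=100.0).contains(&progress) {
            return Err(format!("invalid progress value: {}", progress));
        }
        println!("Sync {:.1}%: {}", progress, message.unwrap_or_default());
        Ok(())
    }
}

fn main() {
    let p = PrintProgress;
    assert!(p.update(42.0, Some("scanning".into())).is_ok());
    assert!(p.update(150.0, None).is_err());
}
```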

src/blockchain/rpc.rs Normal file

@@ -0,0 +1,446 @@
// Bitcoin Dev Kit
// Written in 2021 by Riccardo Casatta <riccardo@casatta.it>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Rpc Blockchain
//!
//! Backend that gets blockchain data from Bitcoin Core RPC
//!
//! This is an **EXPERIMENTAL** feature, API and other major changes are expected.
//!
//! ## Example
//!
//! ```no_run
//! # use bdk::blockchain::{RpcConfig, RpcBlockchain, ConfigurableBlockchain, rpc::Auth};
//! let config = RpcConfig {
//! url: "127.0.0.1:18332".to_string(),
//! auth: Auth::Cookie {
//! file: "/home/user/.bitcoin/.cookie".into(),
//! },
//! network: bdk::bitcoin::Network::Testnet,
//! wallet_name: "wallet_name".to_string(),
//! skip_blocks: None,
//! };
//! let blockchain = RpcBlockchain::from_config(&config);
//! ```
use crate::bitcoin::consensus::deserialize;
use crate::bitcoin::{Address, Network, OutPoint, Transaction, TxOut, Txid};
use crate::blockchain::{Blockchain, Capability, ConfigurableBlockchain, Progress};
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::{BlockTime, Error, FeeRate, KeychainKind, LocalUtxo, TransactionDetails};
use bitcoincore_rpc::json::{
GetAddressInfoResultLabel, ImportMultiOptions, ImportMultiRequest,
ImportMultiRequestScriptPubkey, ImportMultiRescanSince,
};
use bitcoincore_rpc::jsonrpc::serde_json::Value;
use bitcoincore_rpc::Auth as RpcAuth;
use bitcoincore_rpc::{Client, RpcApi};
use log::debug;
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
use std::str::FromStr;
/// The main struct for RPC backend implementing the [crate::blockchain::Blockchain] trait
#[derive(Debug)]
pub struct RpcBlockchain {
/// Rpc client to the node, includes the wallet name
client: Client,
/// Network used
network: Network,
/// Blockchain capabilities, cached here at startup
capabilities: HashSet<Capability>,
/// Skip this many blocks of the blockchain at the first rescan; if `None`, the rescan starts from the genesis block
skip_blocks: Option<u32>,
/// A fixed address whose label is used as a workaround to store the synced height on the node
_storage_address: Address,
}
/// RpcBlockchain configuration options
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct RpcConfig {
/// The bitcoin node url
pub url: String,
/// The bitcoin node authentication mechanism
pub auth: Auth,
/// The network we are using (it will be checked that the bitcoin node's network matches this)
pub network: Network,
/// The wallet name in the bitcoin node, consider using [crate::wallet::wallet_name_from_descriptor] for this
pub wallet_name: String,
/// Skip this many blocks of the blockchain at the first rescan; if `None`, the rescan is done from the genesis block
pub skip_blocks: Option<u32>,
}
/// This struct is equivalent to [bitcoincore_rpc::Auth] but it implements [serde::Serialize]
/// To be removed once the upstream equivalent implements Serialize (the JSON serialization
/// format should be the same), see [rust-bitcoincore-rpc/pull/181](https://github.com/rust-bitcoin/rust-bitcoincore-rpc/pull/181)
#[derive(Clone, Debug, Hash, Eq, PartialEq, Ord, PartialOrd, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(untagged)]
pub enum Auth {
/// No authentication
None,
/// Authentication with username and password, usually [Auth::Cookie] should be preferred
UserPass {
/// Username
username: String,
/// Password
password: String,
},
/// Authentication with a cookie file
Cookie {
/// Cookie file
file: PathBuf,
},
}
impl From<Auth> for RpcAuth {
fn from(auth: Auth) -> Self {
match auth {
Auth::None => RpcAuth::None,
Auth::UserPass { username, password } => RpcAuth::UserPass(username, password),
Auth::Cookie { file } => RpcAuth::CookieFile(file),
}
}
}
impl RpcBlockchain {
fn get_node_synced_height(&self) -> Result<u32, Error> {
let info = self.client.get_address_info(&self._storage_address)?;
if let Some(GetAddressInfoResultLabel::Simple(label)) = info.labels.first() {
Ok(label
.parse::<u32>()
.unwrap_or_else(|_| self.skip_blocks.unwrap_or(0)))
} else {
Ok(self.skip_blocks.unwrap_or(0))
}
}
/// Set the synced height in the core node by using a label of a fixed address so that
/// another client with the same descriptor doesn't rescan the blockchain
fn set_node_synced_height(&self, height: u32) -> Result<(), Error> {
Ok(self
.client
.set_label(&self._storage_address, &height.to_string())?)
}
}
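The synced-height trick above stores the height as the decimal-string label of a fixed address, and falls back to `skip_blocks` (or 0) when the label is missing or unparsable. A minimal standalone sketch of that fallback logic (hypothetical helper, decoupled from the RPC client):

```rust
// Sketch of the fallback logic in get_node_synced_height: the label, when
// present and numeric, wins; otherwise skip_blocks; otherwise genesis (0).
fn synced_height(label: Option<&str>, skip_blocks: Option<u32>) -> u32 {
    label
        .and_then(|l| l.parse::<u32>().ok())
        .unwrap_or_else(|| skip_blocks.unwrap_or(0))
}

fn main() {
    assert_eq!(synced_height(Some("123456"), None), 123_456);
    assert_eq!(synced_height(Some("not-a-number"), Some(700_000)), 700_000);
    assert_eq!(synced_height(None, None), 0);
    println!("ok");
}
```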
impl Blockchain for RpcBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
self.capabilities.clone()
}
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
database: &mut D,
progress_update: P,
) -> Result<(), Error> {
let mut scripts_pubkeys = database.iter_script_pubkeys(Some(KeychainKind::External))?;
scripts_pubkeys.extend(database.iter_script_pubkeys(Some(KeychainKind::Internal))?);
debug!(
"importing {} script_pubkeys (some maybe already imported)",
scripts_pubkeys.len()
);
let requests: Vec<_> = scripts_pubkeys
.iter()
.map(|s| ImportMultiRequest {
timestamp: ImportMultiRescanSince::Timestamp(0),
script_pubkey: Some(ImportMultiRequestScriptPubkey::Script(s)),
watchonly: Some(true),
..Default::default()
})
.collect();
let options = ImportMultiOptions {
rescan: Some(false),
};
// Note we use import_multi because as of bitcoin core 0.21.0 many descriptors are not supported
// https://bitcoindevkit.org/descriptors/#compatibility-matrix
//TODO it may be convenient to use import_descriptor for compatible descriptors and import_multi as a fallback
self.client.import_multi(&requests, Some(&options))?;
loop {
let current_height = self.get_height()?;
// min because block invalidate may cause height to go down
let node_synced = self.get_node_synced_height()?.min(current_height);
let sync_up_to = node_synced.saturating_add(10_000).min(current_height);
debug!("rescan_blockchain from:{} to:{}", node_synced, sync_up_to);
self.client
.rescan_blockchain(Some(node_synced as usize), Some(sync_up_to as usize))?;
progress_update.update((sync_up_to as f32) / (current_height as f32), None)?;
self.set_node_synced_height(sync_up_to)?;
if sync_up_to == current_height {
break;
}
}
self.sync(database, progress_update)
}
fn sync<D: BatchDatabase, P: 'static + Progress>(
&self,
db: &mut D,
_progress_update: P,
) -> Result<(), Error> {
let mut indexes = HashMap::new();
for keykind in &[KeychainKind::External, KeychainKind::Internal] {
indexes.insert(*keykind, db.get_last_index(*keykind)?.unwrap_or(0));
}
let mut known_txs: HashMap<_, _> = db
.iter_txs(true)?
.into_iter()
.map(|tx| (tx.txid, tx))
.collect();
let known_utxos: HashSet<_> = db.iter_utxos()?.into_iter().collect();
//TODO list_since_blocks would be more efficient
let current_utxo = self
.client
.list_unspent(Some(0), None, None, Some(true), None)?;
debug!("current_utxo len {}", current_utxo.len());
//TODO supported up to 1_000 txs, should use since_blocks or do paging
let list_txs = self
.client
.list_transactions(None, Some(1_000), None, Some(true))?;
let mut list_txs_ids = HashSet::new();
for tx_result in list_txs.iter().filter(|t| {
// list_txs returns all conflicting txs; we want to
// filter out replaced txs (unconfirmed and not in the mempool)
t.info.confirmations > 0 || self.client.get_mempool_entry(&t.info.txid).is_ok()
}) {
let txid = tx_result.info.txid;
list_txs_ids.insert(txid);
if let Some(mut known_tx) = known_txs.get_mut(&txid) {
let confirmation_time =
BlockTime::new(tx_result.info.blockheight, tx_result.info.blocktime);
if confirmation_time != known_tx.confirmation_time {
// reorg may change tx height
debug!(
"updating tx({}) confirmation time to: {:?}",
txid, confirmation_time
);
known_tx.confirmation_time = confirmation_time;
db.set_tx(known_tx)?;
}
} else {
//TODO check there is already the raw tx in db?
let tx_result = self.client.get_transaction(&txid, Some(true))?;
let tx: Transaction = deserialize(&tx_result.hex)?;
let mut received = 0u64;
let mut sent = 0u64;
for output in tx.output.iter() {
if let Ok(Some((kind, index))) =
db.get_path_from_script_pubkey(&output.script_pubkey)
{
if index > *indexes.get(&kind).unwrap() {
indexes.insert(kind, index);
}
received += output.value;
}
}
for input in tx.input.iter() {
if let Some(previous_output) = db.get_previous_output(&input.previous_output)? {
sent += previous_output.value;
}
}
let td = TransactionDetails {
transaction: Some(tx),
txid: tx_result.info.txid,
confirmation_time: BlockTime::new(
tx_result.info.blockheight,
tx_result.info.blocktime,
),
received,
sent,
fee: tx_result.fee.map(|f| f.as_sat().abs() as u64),
verified: true,
};
debug!(
"saving tx: {} tx_result.fee:{:?} td.fees:{:?}",
td.txid, tx_result.fee, td.fee
);
db.set_tx(&td)?;
}
}
for known_txid in known_txs.keys() {
if !list_txs_ids.contains(known_txid) {
debug!("removing tx: {}", known_txid);
db.del_tx(known_txid, false)?;
}
}
let current_utxos: HashSet<_> = current_utxo
.into_iter()
.map(|u| {
Ok(LocalUtxo {
outpoint: OutPoint::new(u.txid, u.vout),
keychain: db
.get_path_from_script_pubkey(&u.script_pub_key)?
.ok_or(Error::TransactionNotFound)?
.0,
txout: TxOut {
value: u.amount.as_sat(),
script_pubkey: u.script_pub_key,
},
})
})
.collect::<Result<_, Error>>()?;
let spent: HashSet<_> = known_utxos.difference(&current_utxos).collect();
for s in spent {
debug!("removing utxo: {:?}", s);
db.del_utxo(&s.outpoint)?;
}
let received: HashSet<_> = current_utxos.difference(&known_utxos).collect();
for s in received {
debug!("adding utxo: {:?}", s);
db.set_utxo(s)?;
}
for (keykind, index) in indexes {
debug!("{:?} max {}", keykind, index);
db.set_last_index(keykind, index)?;
}
Ok(())
}
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(Some(self.client.get_raw_transaction(txid, None)?))
}
fn broadcast(&self, tx: &Transaction) -> Result<(), Error> {
Ok(self.client.send_raw_transaction(tx).map(|_| ())?)
}
fn get_height(&self) -> Result<u32, Error> {
Ok(self.client.get_blockchain_info().map(|i| i.blocks as u32)?)
}
fn estimate_fee(&self, target: usize) -> Result<FeeRate, Error> {
let sat_per_kb = self
.client
.estimate_smart_fee(target as u16, None)?
.fee_rate
.ok_or(Error::FeeRateUnavailable)?
.as_sat() as f64;
Ok(FeeRate::from_sat_per_vb((sat_per_kb / 1000f64) as f32))
}
}
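The rescan loop in `setup` advances in windows of at most 10,000 blocks, clamped to the current tip (and taking the `min` with the tip first, since `invalidateblock` can move the tip down). The window arithmetic can be sketched in isolation (hypothetical helper, not part of the module):

```rust
// Compute the successive (from, to) rescan windows the setup loop would
// request, starting from `synced` up to the node's `tip`.
fn rescan_windows(mut synced: u32, tip: u32) -> Vec<(u32, u32)> {
    let mut windows = Vec::new();
    loop {
        let from = synced.min(tip); // block invalidation may lower the tip
        let to = from.saturating_add(10_000).min(tip);
        windows.push((from, to));
        synced = to;
        if to == tip {
            break;
        }
    }
    windows
}

fn main() {
    assert_eq!(
        rescan_windows(0, 25_000),
        vec![(0, 10_000), (10_000, 20_000), (20_000, 25_000)]
    );
    println!("ok");
}
```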
impl ConfigurableBlockchain for RpcBlockchain {
type Config = RpcConfig;
/// Returns an RpcBlockchain backend, creating an RPC client to a specific wallet named after the descriptor's checksum.
/// The first time, it creates the wallet in the node; upon return the wallet is guaranteed to be loaded.
fn from_config(config: &Self::Config) -> Result<Self, Error> {
let wallet_name = config.wallet_name.clone();
let wallet_url = format!("{}/wallet/{}", config.url, &wallet_name);
debug!("connecting to {} auth:{:?}", wallet_url, config.auth);
let client = Client::new(wallet_url.as_str(), config.auth.clone().into())?;
let loaded_wallets = client.list_wallets()?;
if loaded_wallets.contains(&wallet_name) {
debug!("wallet already loaded {:?}", wallet_name);
} else {
let existing_wallets = list_wallet_dir(&client)?;
if existing_wallets.contains(&wallet_name) {
client.load_wallet(&wallet_name)?;
debug!("wallet loaded {:?}", wallet_name);
} else {
client.create_wallet(&wallet_name, Some(true), None, None, None)?;
debug!("wallet created {:?}", wallet_name);
}
}
let blockchain_info = client.get_blockchain_info()?;
let network = match blockchain_info.chain.as_str() {
"main" => Network::Bitcoin,
"test" => Network::Testnet,
"regtest" => Network::Regtest,
"signet" => Network::Signet,
_ => return Err(Error::Generic("Invalid network".to_string())),
};
if network != config.network {
return Err(Error::InvalidNetwork {
requested: config.network,
found: network,
});
}
let mut capabilities: HashSet<_> = vec![Capability::FullHistory].into_iter().collect();
let rpc_version = client.version()?;
if rpc_version >= 210_000 {
let info: HashMap<String, Value> = client.call("getindexinfo", &[])?;
if info.contains_key("txindex") {
capabilities.insert(Capability::GetAnyTx);
capabilities.insert(Capability::AccurateFees);
}
}
// this is just a fixed address used only to store a label containing the synced height in the node
let mut storage_address =
Address::from_str("bc1qst0rewf0wm4kw6qn6kv0e5tc56nkf9yhcxlhqv").unwrap();
storage_address.network = network;
Ok(RpcBlockchain {
client,
network,
capabilities,
_storage_address: storage_address,
skip_blocks: config.skip_blocks,
})
}
}
/// Return the wallets available in the default wallet directory
//TODO use bitcoincore_rpc method when PR #179 lands
fn list_wallet_dir(client: &Client) -> Result<Vec<String>, Error> {
#[derive(Deserialize)]
struct Name {
name: String,
}
#[derive(Deserialize)]
struct CallResult {
wallets: Vec<Name>,
}
let result: CallResult = client.call("listwalletdir", &[])?;
Ok(result.wallets.into_iter().map(|n| n.name).collect())
}
#[cfg(test)]
#[cfg(feature = "test-rpc")]
crate::bdk_blockchain_tests! {
fn test_instance(test_client: &TestClient) -> RpcBlockchain {
let config = RpcConfig {
url: test_client.bitcoind.rpc_url(),
auth: Auth::Cookie { file: test_client.bitcoind.params.cookie_file.clone() },
network: Network::Regtest,
wallet_name: format!("client-wallet-test-{:?}", std::time::SystemTime::now()),
skip_blocks: None,
};
RpcBlockchain::from_config(&config).unwrap()
}
}


@@ -0,0 +1,394 @@
/*!
This models how a sync happens when you have a server that you send your script pubkeys to and it
returns the associated transactions, e.g. Electrum.
*/
#![allow(dead_code)]
use crate::{
database::{BatchDatabase, BatchOperations, DatabaseUtils},
wallet::time::Instant,
BlockTime, Error, KeychainKind, LocalUtxo, TransactionDetails,
};
use bitcoin::{OutPoint, Script, Transaction, TxOut, Txid};
use log::*;
use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet, VecDeque};
/// A request for on-chain information
pub enum Request<'a, D: BatchDatabase> {
/// A request for transactions related to script pubkeys.
Script(ScriptReq<'a, D>),
/// A request for confirmation times for some transactions.
Conftime(ConftimeReq<'a, D>),
/// A request for full transaction details of some transactions.
Tx(TxReq<'a, D>),
/// Requests are finished; here's a batch database update to reflect the data gathered.
Finish(D::Batch),
}
/// Starts a sync
pub fn start<D: BatchDatabase>(db: &D, stop_gap: usize) -> Result<Request<'_, D>, Error> {
use rand::seq::SliceRandom;
let mut keychains = vec![KeychainKind::Internal, KeychainKind::External];
// shuffling improves privacy: the server doesn't know whether my first request is for my internal or external addresses
keychains.shuffle(&mut rand::thread_rng());
let keychain = keychains.pop().unwrap();
let scripts_needed = db
.iter_script_pubkeys(Some(keychain))?
.into_iter()
.collect();
let state = State::new(db);
Ok(Request::Script(ScriptReq {
state,
scripts_needed,
script_index: 0,
stop_gap,
keychain,
next_keychains: keychains,
}))
}
pub struct ScriptReq<'a, D: BatchDatabase> {
state: State<'a, D>,
script_index: usize,
scripts_needed: VecDeque<Script>,
stop_gap: usize,
keychain: KeychainKind,
next_keychains: Vec<KeychainKind>,
}
/// The sync starts by returning script pubkeys we are interested in.
impl<'a, D: BatchDatabase> ScriptReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Script> + Clone {
self.scripts_needed.iter()
}
pub fn satisfy(
mut self,
// we want to know the txids associated with each script and their heights
txids: Vec<Vec<(Txid, Option<u32>)>>,
) -> Result<Request<'a, D>, Error> {
for (txid_list, script) in txids.iter().zip(self.scripts_needed.iter()) {
debug!(
"found {} transactions for script pubkey {}",
txid_list.len(),
script
);
if !txid_list.is_empty() {
// the address is active
self.state
.last_active_index
.insert(self.keychain, self.script_index);
}
for (txid, height) in txid_list {
// have we seen this txid already?
match self.state.db.get_tx(txid, true)? {
Some(mut details) => {
let old_height = details.confirmation_time.as_ref().map(|x| x.height);
match (old_height, height) {
(None, Some(_)) => {
// It looks like the tx has confirmed since we last saw it -- we
// need to know the confirmation time.
self.state.tx_missing_conftime.insert(*txid, details);
}
(Some(old_height), Some(new_height)) if old_height != *new_height => {
// The height of the tx has changed!? -- It's a reorg; get the new confirmation time.
self.state.tx_missing_conftime.insert(*txid, details);
}
(Some(_), None) => {
// A re-org where the tx is not in the chain anymore.
details.confirmation_time = None;
self.state.finished_txs.push(details);
}
_ => self.state.finished_txs.push(details),
}
}
None => {
// we've never seen it; let's get the whole thing
self.state.tx_needed.insert(*txid);
}
};
}
self.script_index += 1;
}
for _ in txids {
self.scripts_needed.pop_front();
}
let last_active_index = self
.state
.last_active_index
.get(&self.keychain)
.map(|x| x + 1)
.unwrap_or(0); // so "no active addresses" maps to 0
Ok(
if self.script_index > last_active_index + self.stop_gap
|| self.scripts_needed.is_empty()
{
debug!(
"finished scanning for transactions for keychain {:?} at index {}",
self.keychain, last_active_index
);
// we're done here -- check if we need to do the next keychain
if let Some(keychain) = self.next_keychains.pop() {
self.keychain = keychain;
self.script_index = 0;
self.scripts_needed = self
.state
.db
.iter_script_pubkeys(Some(keychain))?
.into_iter()
.collect();
Request::Script(self)
} else {
Request::Tx(TxReq { state: self.state })
}
} else {
Request::Script(self)
},
)
}
}
/// Then we get full transactions
pub struct TxReq<'a, D> {
state: State<'a, D>,
}
impl<'a, D: BatchDatabase> TxReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
self.state.tx_needed.iter()
}
pub fn satisfy(
mut self,
tx_details: Vec<(Vec<Option<TxOut>>, Transaction)>,
) -> Result<Request<'a, D>, Error> {
let tx_details: Vec<TransactionDetails> = tx_details
.into_iter()
.zip(self.state.tx_needed.iter())
.map(|((vout, tx), txid)| {
debug!("found tx_details for {}", txid);
assert_eq!(tx.txid(), *txid);
let mut sent: u64 = 0;
let mut received: u64 = 0;
let mut inputs_sum: u64 = 0;
let mut outputs_sum: u64 = 0;
for (txout, input) in vout.into_iter().zip(tx.input.iter()) {
let txout = match txout {
Some(txout) => txout,
None => {
// skip coinbase inputs
debug_assert!(
input.previous_output.is_null(),
"prevout should only be missing for coinbase"
);
continue;
}
};
inputs_sum += txout.value;
if self.state.db.is_mine(&txout.script_pubkey)? {
sent += txout.value;
}
}
for out in &tx.output {
outputs_sum += out.value;
if self.state.db.is_mine(&out.script_pubkey)? {
received += out.value;
}
}
// we need saturating_sub since we want coinbase txs to map to a 0 fee and
// this subtraction would be negative for coinbase txs.
let fee = inputs_sum.saturating_sub(outputs_sum);
Result::<_, Error>::Ok(TransactionDetails {
txid: *txid,
transaction: Some(tx),
received,
sent,
// we're going to fill this in later
confirmation_time: None,
fee: Some(fee),
verified: false,
})
})
.collect::<Result<Vec<_>, _>>()?;
for tx_detail in tx_details {
self.state.tx_needed.remove(&tx_detail.txid);
self.state
.tx_missing_conftime
.insert(tx_detail.txid, tx_detail);
}
if !self.state.tx_needed.is_empty() {
Ok(Request::Tx(self))
} else {
Ok(Request::Conftime(ConftimeReq { state: self.state }))
}
}
}
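The fee computation above uses `saturating_sub` so that coinbase transactions, whose inputs carry no value, come out with a fee of 0 rather than underflowing `u64`. A standalone sketch of just that arithmetic (hypothetical helper, not the crate's API):

```rust
// inputs_sum - outputs_sum, clamped at zero: for a coinbase tx inputs_sum
// is 0 while outputs_sum is the subsidy plus fees, so a naive subtraction
// would underflow a u64.
fn tx_fee(inputs_sum: u64, outputs_sum: u64) -> u64 {
    inputs_sum.saturating_sub(outputs_sum)
}

fn main() {
    assert_eq!(tx_fee(100_000, 99_500), 500); // ordinary tx pays 500 sats
    assert_eq!(tx_fee(0, 625_000_000), 0); // coinbase maps to zero fee
    println!("ok");
}
```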
/// Final step is to get confirmation times
pub struct ConftimeReq<'a, D> {
state: State<'a, D>,
}
impl<'a, D: BatchDatabase> ConftimeReq<'a, D> {
pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
self.state.tx_missing_conftime.keys()
}
pub fn satisfy(
mut self,
confirmation_times: Vec<Option<BlockTime>>,
) -> Result<Request<'a, D>, Error> {
let conftime_needed = self
.request()
.cloned()
.take(confirmation_times.len())
.collect::<Vec<_>>();
for (confirmation_time, txid) in confirmation_times.into_iter().zip(conftime_needed.iter())
{
debug!("confirmation time for {} was {:?}", txid, confirmation_time);
if let Some(mut tx_details) = self.state.tx_missing_conftime.remove(txid) {
tx_details.confirmation_time = confirmation_time;
self.state.finished_txs.push(tx_details);
}
}
if self.state.tx_missing_conftime.is_empty() {
Ok(Request::Finish(self.state.into_db_update()?))
} else {
Ok(Request::Conftime(self))
}
}
}
struct State<'a, D> {
db: &'a D,
last_active_index: HashMap<KeychainKind, usize>,
/// Transactions where we need to get the full details
tx_needed: BTreeSet<Txid>,
/// Transactions that we know everything about
finished_txs: Vec<TransactionDetails>,
/// Transactions into which discovered confirmation times should be inserted
tx_missing_conftime: BTreeMap<Txid, TransactionDetails>,
/// The start of the sync
start_time: Instant,
}
impl<'a, D: BatchDatabase> State<'a, D> {
fn new(db: &'a D) -> Self {
State {
db,
last_active_index: HashMap::default(),
finished_txs: vec![],
tx_needed: BTreeSet::default(),
tx_missing_conftime: BTreeMap::default(),
start_time: Instant::new(),
}
}
fn into_db_update(self) -> Result<D::Batch, Error> {
debug_assert!(self.tx_needed.is_empty() && self.tx_missing_conftime.is_empty());
let existing_txs = self.db.iter_txs(false)?;
let existing_txids: HashSet<Txid> = existing_txs.iter().map(|tx| tx.txid).collect();
let finished_txs = make_txs_consistent(&self.finished_txs);
let observed_txids: HashSet<Txid> = finished_txs.iter().map(|tx| tx.txid).collect();
let txids_to_delete = existing_txids.difference(&observed_txids);
let mut batch = self.db.begin_batch();
// Delete old txs that no longer exist
for txid in txids_to_delete {
if let Some(raw_tx) = self.db.get_raw_tx(txid)? {
for i in 0..raw_tx.output.len() {
// Also delete any utxos from the txs that no longer exist.
let _ = batch.del_utxo(&OutPoint {
txid: *txid,
vout: i as u32,
})?;
}
} else {
unreachable!("we should always have the raw tx");
}
batch.del_tx(txid, true)?;
}
// Set every tx we observed
for finished_tx in &finished_txs {
let tx = finished_tx
.transaction
.as_ref()
.expect("transaction will always be present here");
for (i, output) in tx.output.iter().enumerate() {
if let Some((keychain, _)) =
self.db.get_path_from_script_pubkey(&output.script_pubkey)?
{
// add utxos we own from the new transactions we've seen.
batch.set_utxo(&LocalUtxo {
outpoint: OutPoint {
txid: finished_tx.txid,
vout: i as u32,
},
txout: output.clone(),
keychain,
})?;
}
}
batch.set_tx(finished_tx)?;
}
// we don't do this in the loop above since we may want to delete some of the utxos we
// just added, in case there are new transactions that spend from each other.
for finished_tx in &finished_txs {
let tx = finished_tx
.transaction
.as_ref()
.expect("transaction will always be present here");
for input in &tx.input {
// Delete any spent utxos
batch.del_utxo(&input.previous_output)?;
}
}
for (keychain, last_active_index) in self.last_active_index {
batch.set_last_index(keychain, last_active_index as u32)?;
}
info!(
"finished setup, elapsed {:?}ms",
self.start_time.elapsed().as_millis()
);
Ok(batch)
}
}
/// Remove conflicting transactions -- tie breaking them by fee.
fn make_txs_consistent(txs: &[TransactionDetails]) -> Vec<&TransactionDetails> {
let mut utxo_index: HashMap<OutPoint, &TransactionDetails> = HashMap::default();
for tx in txs {
for input in &tx.transaction.as_ref().unwrap().input {
utxo_index
.entry(input.previous_output)
.and_modify(|existing| match (tx.fee, existing.fee) {
(Some(fee), Some(existing_fee)) if fee > existing_fee => *existing = tx,
(Some(_), None) => *existing = tx,
_ => { /* leave it the same */ }
})
.or_insert(tx);
}
}
utxo_index
.into_iter()
.map(|(_, tx)| (tx.txid, tx))
.collect::<HashMap<_, _>>()
.into_iter()
.map(|(_, tx)| tx)
.collect()
}
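`make_txs_consistent` indexes transactions by the outpoints they spend and, whenever two transactions conflict over the same outpoint, keeps the one with the higher known fee. The tie-break rule can be sketched over a simplified model (hypothetical `Tx` type, not the crate's `TransactionDetails`):

```rust
use std::collections::HashMap;

// Simplified model: a tx spends a set of outpoints (here just u32 ids)
// and pays an optional fee.
#[derive(Debug)]
struct Tx {
    txid: &'static str,
    spends: Vec<u32>,
    fee: Option<u64>,
}

// Keep, for every spent outpoint, the spender with the highest known fee;
// an unknown fee never displaces a known one.
fn consistent(txs: &[Tx]) -> HashMap<u32, &Tx> {
    let mut index: HashMap<u32, &Tx> = HashMap::new();
    for tx in txs {
        for op in &tx.spends {
            index
                .entry(*op)
                .and_modify(|existing| match (tx.fee, existing.fee) {
                    (Some(fee), Some(old)) if fee > old => *existing = tx,
                    (Some(_), None) => *existing = tx,
                    _ => { /* leave it the same */ }
                })
                .or_insert(tx);
        }
    }
    index
}

fn main() {
    let txs = vec![
        Tx { txid: "a", spends: vec![1], fee: Some(100) },
        Tx { txid: "b", spends: vec![1], fee: Some(250) }, // RBF bump of "a"
    ];
    let kept = consistent(&txs);
    assert_eq!(kept[&1].txid, "b"); // higher-fee conflict wins
    println!("ok");
}
```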


@@ -1,304 +0,0 @@
use std::cmp;
use std::collections::{HashSet, VecDeque};
use std::convert::TryFrom;
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use bitcoin::{Address, Network, OutPoint, Script, Transaction, Txid};
use super::*;
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{ScriptType, TransactionDetails, UTXO};
use crate::wallet::utils::ChunksIterator;
#[derive(Debug)]
pub struct ELSGetHistoryRes {
pub height: i32,
pub tx_hash: Txid,
}
#[derive(Debug)]
pub struct ELSListUnspentRes {
pub height: usize,
pub tx_hash: Txid,
pub tx_pos: usize,
}
/// Implements the synchronization logic for an Electrum-like client.
#[async_trait(?Send)]
pub trait ElectrumLikeSync {
async fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSGetHistoryRes>>, Error>;
async fn els_batch_script_list_unspent<'s, I: IntoIterator<Item = &'s Script>>(
&mut self,
scripts: I,
) -> Result<Vec<Vec<ELSListUnspentRes>>, Error>;
async fn els_transaction_get(&mut self, txid: &Txid) -> Result<Transaction, Error>;
// Provided methods down here...
async fn electrum_like_setup<D: BatchDatabase + DatabaseUtils, P: Progress>(
&mut self,
stop_gap: Option<usize>,
database: &mut D,
_progress_update: P,
) -> Result<(), Error> {
// TODO: progress
let stop_gap = stop_gap.unwrap_or(20);
let batch_query_size = 20;
// check unconfirmed tx, delete so they are retrieved later
let mut del_batch = database.begin_batch();
for tx in database.iter_txs(false)? {
if tx.height.is_none() {
del_batch.del_tx(&tx.txid, false)?;
}
}
database.commit_batch(del_batch)?;
// maximum derivation index for a change address that we've seen during sync
let mut change_max_deriv = 0;
let mut already_checked: HashSet<Script> = HashSet::new();
let mut to_check_later = VecDeque::with_capacity(batch_query_size);
// insert the first chunk
let mut iter_scriptpubkeys = database
.iter_script_pubkeys(Some(ScriptType::External))?
.into_iter();
let chunk: Vec<Script> = iter_scriptpubkeys.by_ref().take(batch_query_size).collect();
for item in chunk.into_iter().rev() {
to_check_later.push_front(item);
}
let mut iterating_external = true;
let mut index = 0;
let mut last_found = 0;
while !to_check_later.is_empty() {
trace!("to_check_later size {}", to_check_later.len());
let until = cmp::min(to_check_later.len(), batch_query_size);
let chunk: Vec<Script> = to_check_later.drain(..until).collect();
let call_result = self.els_batch_script_get_history(chunk.iter()).await?;
for (script, history) in chunk.into_iter().zip(call_result.into_iter()) {
trace!("received history for {:?}, size {}", script, history.len());
if !history.is_empty() {
last_found = index;
let mut check_later_scripts = self
.check_history(database, script, history, &mut change_max_deriv)
.await?
.into_iter()
.filter(|x| already_checked.insert(x.clone()))
.collect();
to_check_later.append(&mut check_later_scripts);
}
index += 1;
}
match iterating_external {
true if index - last_found >= stop_gap => iterating_external = false,
true => {
trace!("pushing one more batch from `iter_scriptpubkeys`. index = {}, last_found = {}, stop_gap = {}", index, last_found, stop_gap);
let chunk: Vec<Script> =
iter_scriptpubkeys.by_ref().take(batch_query_size).collect();
for item in chunk.into_iter().rev() {
to_check_later.push_front(item);
}
}
_ => {}
}
}
// check utxo
// TODO: try to minimize network requests and re-use scripts if possible
let mut batch = database.begin_batch();
for chunk in ChunksIterator::new(database.iter_utxos()?.into_iter(), batch_query_size) {
let scripts: Vec<_> = chunk.iter().map(|u| &u.txout.script_pubkey).collect();
let call_result = self.els_batch_script_list_unspent(scripts).await?;
// check which utxos are actually still unspent
for (utxo, list_unspent) in chunk.into_iter().zip(call_result.iter()) {
debug!(
"outpoint {:?} is unspent for me, list unspent is {:?}",
utxo.outpoint, list_unspent
);
let mut spent = true;
for unspent in list_unspent {
let res_outpoint = OutPoint::new(unspent.tx_hash, unspent.tx_pos as u32);
if utxo.outpoint == res_outpoint {
spent = false;
break;
}
}
if spent {
info!("{} is no longer unspent, removing", utxo.outpoint);
batch.del_utxo(&utxo.outpoint)?;
}
}
}
let current_ext = database.get_last_index(ScriptType::External)?.unwrap_or(0);
let first_ext_new = last_found as u32 + 1;
if first_ext_new > current_ext {
info!("Setting external index to {}", first_ext_new);
database.set_last_index(ScriptType::External, first_ext_new)?;
}
let current_int = database.get_last_index(ScriptType::Internal)?.unwrap_or(0);
let first_int_new = change_max_deriv + 1;
if first_int_new > current_int {
info!("Setting internal index to {}", first_int_new);
database.set_last_index(ScriptType::Internal, first_int_new)?;
}
database.commit_batch(batch)?;
Ok(())
}
async fn check_tx_and_descendant<D: DatabaseUtils + BatchDatabase>(
&mut self,
database: &mut D,
txid: &Txid,
height: Option<u32>,
cur_script: &Script,
change_max_deriv: &mut u32,
) -> Result<Vec<Script>, Error> {
debug!(
"check_tx_and_descendant of {}, height: {:?}, script: {}",
txid, height, cur_script
);
let mut updates = database.begin_batch();
let tx = match database.get_tx(&txid, true)? {
// TODO: do we need the raw?
Some(mut saved_tx) => {
// update the height if it's different (in case of reorg)
if saved_tx.height != height {
info!(
"updating height from {:?} to {:?} for tx {}",
saved_tx.height, height, txid
);
saved_tx.height = height;
updates.set_tx(&saved_tx)?;
}
debug!("already have {} in db, returning the cached version", txid);
// unwrap since we explicitly ask for the raw_tx, if it's not present something
// went wrong
saved_tx.transaction.unwrap()
}
None => self.els_transaction_get(&txid).await?,
};
let mut incoming: u64 = 0;
let mut outgoing: u64 = 0;
// look for our own inputs
for (i, input) in tx.input.iter().enumerate() {
// the fact that we visit addresses in a BFS fashion starting from the external addresses
// should ensure that this query is always consistent (i.e. when we get to call this all
// the transactions at a lower depth have already been indexed, so if an outpoint is ours
// we are guaranteed to have it in the db).
if let Some(previous_output) = database.get_previous_output(&input.previous_output)? {
if database.is_mine(&previous_output.script_pubkey)? {
outgoing += previous_output.value;
debug!("{} input #{} is mine, removing from utxo", txid, i);
updates.del_utxo(&input.previous_output)?;
}
}
}
let mut to_check_later = vec![];
for (i, output) in tx.output.iter().enumerate() {
// this output is ours, we have a path to derive it
if let Some((script_type, child)) =
database.get_path_from_script_pubkey(&output.script_pubkey)?
{
debug!("{} output #{} is mine, adding utxo", txid, i);
updates.set_utxo(&UTXO {
outpoint: OutPoint::new(tx.txid(), i as u32),
txout: output.clone(),
})?;
incoming += output.value;
if output.script_pubkey != *cur_script {
debug!("{} output #{} script {} was not current script, adding script to be checked later", txid, i, output.script_pubkey);
to_check_later.push(output.script_pubkey.clone())
}
// derive as many change addrs as external addresses that we've seen
if script_type == ScriptType::Internal && child > *change_max_deriv {
*change_max_deriv = child;
}
}
}
let tx = TransactionDetails {
txid: tx.txid(),
transaction: Some(tx),
received: incoming,
sent: outgoing,
height,
timestamp: 0,
};
info!("Saving tx {}", txid);
updates.set_tx(&tx)?;
database.commit_batch(updates)?;
Ok(to_check_later)
}
async fn check_history<D: DatabaseUtils + BatchDatabase>(
&mut self,
database: &mut D,
script_pubkey: Script,
txs: Vec<ELSGetHistoryRes>,
change_max_deriv: &mut u32,
) -> Result<Vec<Script>, Error> {
let mut to_check_later = Vec::new();
debug!(
"history of {} script {} has {} tx",
Address::from_script(&script_pubkey, Network::Testnet).unwrap(),
script_pubkey,
txs.len()
);
for tx in txs {
let height: Option<u32> = match tx.height {
0 | -1 => None,
x => u32::try_from(x).ok(),
};
to_check_later.extend_from_slice(
&self
.check_tx_and_descendant(
database,
&tx.tx_hash,
height,
&script_pubkey,
change_max_deriv,
)
.await?,
);
}
Ok(to_check_later)
}
}


@@ -1,436 +0,0 @@
use std::collections::BTreeMap;
use std::str::FromStr;
use clap::{App, Arg, ArgMatches, SubCommand};
#[allow(unused_imports)]
use log::{debug, error, info, trace, LevelFilter};
use bitcoin::consensus::encode::{deserialize, serialize, serialize_hex};
use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::util::psbt::PartiallySignedTransaction;
use bitcoin::{Address, OutPoint};
use crate::error::Error;
use crate::types::ScriptType;
use crate::Wallet;
fn parse_addressee(s: &str) -> Result<(Address, u64), String> {
let parts: Vec<_> = s.split(":").collect();
if parts.len() != 2 {
return Err("Invalid format".to_string());
}
let addr = Address::from_str(&parts[0]);
if let Err(e) = addr {
return Err(format!("{:?}", e));
}
let val = u64::from_str(&parts[1]);
if let Err(e) = val {
return Err(format!("{:?}", e));
}
Ok((addr.unwrap(), val.unwrap()))
}
fn parse_outpoint(s: &str) -> Result<OutPoint, String> {
OutPoint::from_str(s).map_err(|e| format!("{:?}", e))
}
fn addressee_validator(s: String) -> Result<(), String> {
parse_addressee(&s).map(|_| ())
}
fn outpoint_validator(s: String) -> Result<(), String> {
parse_outpoint(&s).map(|_| ())
}
pub fn make_cli_subcommands<'a, 'b>() -> App<'a, 'b> {
App::new("Magical Bitcoin Wallet")
.version(option_env!("CARGO_PKG_VERSION").unwrap_or("unknown"))
.author(option_env!("CARGO_PKG_AUTHORS").unwrap_or(""))
.about("A modern, lightweight, descriptor-based wallet")
.subcommand(
SubCommand::with_name("get_new_address").about("Generates a new external address"),
)
.subcommand(SubCommand::with_name("sync").about("Syncs with the chosen Electrum server"))
.subcommand(
SubCommand::with_name("list_unspent").about("Lists the available spendable UTXOs"),
)
.subcommand(
SubCommand::with_name("get_balance").about("Returns the current wallet balance"),
)
.subcommand(
SubCommand::with_name("create_tx")
.about("Creates a new unsigned transaction")
.arg(
Arg::with_name("to")
.long("to")
.value_name("ADDRESS:SAT")
.help("Adds an addressee to the transaction")
.takes_value(true)
.number_of_values(1)
.required(true)
.multiple(true)
.validator(addressee_validator),
)
.arg(
Arg::with_name("send_all")
.short("all")
.long("send_all")
.help("Sends all the funds (or all the selected utxos). Requires exactly one addressee with value 0"),
)
.arg(
Arg::with_name("utxos")
.long("utxos")
.value_name("TXID:VOUT")
.help("Selects which utxos *must* be spent")
.takes_value(true)
.number_of_values(1)
.multiple(true)
.validator(outpoint_validator),
)
.arg(
Arg::with_name("unspendable")
.long("unspendable")
.value_name("TXID:VOUT")
.help("Marks a utxo as unspendable")
.takes_value(true)
.number_of_values(1)
.multiple(true)
.validator(outpoint_validator),
)
.arg(
Arg::with_name("fee_rate")
.short("fee")
.long("fee_rate")
.value_name("SATS_VBYTE")
.help("Fee rate to use in sat/vbyte")
.takes_value(true),
)
.arg(
Arg::with_name("policy")
.long("policy")
.value_name("POLICY")
.help("Selects which policy should be used to satisfy the descriptor")
.takes_value(true)
.number_of_values(1),
),
)
.subcommand(
SubCommand::with_name("policies")
.about("Returns the available spending policies for the descriptor")
)
.subcommand(
SubCommand::with_name("public_descriptor")
.about("Returns the public version of the wallet's descriptor(s)")
)
.subcommand(
SubCommand::with_name("sign")
.about("Signs and tries to finalize a PSBT")
.arg(
Arg::with_name("psbt")
.long("psbt")
.value_name("BASE64_PSBT")
.help("Sets the PSBT to sign")
.takes_value(true)
.number_of_values(1)
.required(true),
)
.arg(
Arg::with_name("assume_height")
.long("assume_height")
.value_name("HEIGHT")
.help("Assume the blockchain has reached a specific height. This affects the transaction finalization, if there are timelocks in the descriptor")
.takes_value(true)
.number_of_values(1)
.required(false),
))
.subcommand(
SubCommand::with_name("broadcast")
.about("Broadcasts a transaction to the network. Takes either a raw transaction or a PSBT to extract")
.arg(
Arg::with_name("psbt")
.long("psbt")
.value_name("BASE64_PSBT")
.help("Sets the PSBT to extract and broadcast")
.takes_value(true)
.required_unless("tx")
.number_of_values(1))
.arg(
Arg::with_name("tx")
.long("tx")
.value_name("RAWTX")
.help("Sets the raw transaction to broadcast")
.takes_value(true)
.required_unless("psbt")
.number_of_values(1))
)
.subcommand(
SubCommand::with_name("extract_psbt")
.about("Extracts a raw transaction from a PSBT")
.arg(
Arg::with_name("psbt")
.long("psbt")
.value_name("BASE64_PSBT")
.help("Sets the PSBT to extract")
.takes_value(true)
.required(true)
.number_of_values(1))
)
.subcommand(
SubCommand::with_name("finalize_psbt")
.about("Finalizes a psbt")
.arg(
Arg::with_name("psbt")
.long("psbt")
.value_name("BASE64_PSBT")
.help("Sets the PSBT to finalize")
.takes_value(true)
.required(true)
.number_of_values(1))
.arg(
Arg::with_name("assume_height")
.long("assume_height")
.value_name("HEIGHT")
.help("Assume the blockchain has reached a specific height")
.takes_value(true)
.number_of_values(1)
.required(false))
)
.subcommand(
SubCommand::with_name("combine_psbt")
.about("Combines multiple PSBTs into one")
.arg(
Arg::with_name("psbt")
.long("psbt")
.value_name("BASE64_PSBT")
.help("Adds one PSBT to combine. This option can be repeated multiple times, once for each PSBT")
.takes_value(true)
.number_of_values(1)
.required(true)
.multiple(true))
)
}
pub fn add_global_flags<'a, 'b>(app: App<'a, 'b>) -> App<'a, 'b> {
app.arg(
Arg::with_name("network")
.short("n")
.long("network")
.value_name("NETWORK")
.help("Sets the network")
.takes_value(true)
.default_value("testnet")
.possible_values(&["testnet", "regtest"]),
)
.arg(
Arg::with_name("wallet")
.short("w")
.long("wallet")
.value_name("WALLET_NAME")
.help("Selects the wallet to use")
.takes_value(true)
.default_value("main"),
)
.arg(
Arg::with_name("server")
.short("s")
.long("server")
.value_name("SERVER:PORT")
.help("Sets the Electrum server to use")
.takes_value(true)
.default_value("tn.not.fyi:55001"),
)
.arg(
Arg::with_name("descriptor")
.short("d")
.long("descriptor")
.value_name("DESCRIPTOR")
.help("Sets the descriptor to use for the external addresses")
.required(true)
.takes_value(true),
)
.arg(
Arg::with_name("change_descriptor")
.short("c")
.long("change_descriptor")
.value_name("DESCRIPTOR")
.help("Sets the descriptor to use for internal addresses")
.takes_value(true),
)
.arg(
Arg::with_name("v")
.short("v")
.multiple(true)
.help("Sets the level of verbosity"),
)
.subcommand(SubCommand::with_name("repl").about("Opens an interactive shell"))
}
pub async fn handle_matches<C, D>(
wallet: &Wallet<C, D>,
matches: ArgMatches<'_>,
) -> Result<Option<String>, Error>
where
C: crate::blockchain::OnlineBlockchain,
D: crate::database::BatchDatabase,
{
if let Some(_sub_matches) = matches.subcommand_matches("get_new_address") {
Ok(Some(format!("{}", wallet.get_new_address()?)))
} else if let Some(_sub_matches) = matches.subcommand_matches("sync") {
wallet.sync(None, None).await?;
Ok(None)
} else if let Some(_sub_matches) = matches.subcommand_matches("list_unspent") {
let mut res = String::new();
for utxo in wallet.list_unspent()? {
res += &format!("{} value {} SAT\n", utxo.outpoint, utxo.txout.value);
}
Ok(Some(res))
} else if let Some(_sub_matches) = matches.subcommand_matches("get_balance") {
Ok(Some(format!("{} SAT", wallet.get_balance()?)))
} else if let Some(sub_matches) = matches.subcommand_matches("create_tx") {
let addressees = sub_matches
.values_of("to")
.unwrap()
.map(parse_addressee)
.collect::<Result<Vec<_>, _>>()
.map_err(Error::Generic)?;
let send_all = sub_matches.is_present("send_all");
let fee_rate = sub_matches
.value_of("fee_rate")
.map(|s| f32::from_str(s).unwrap())
.unwrap_or(1.0);
let utxos = sub_matches
.values_of("utxos")
.map(|s| s.map(|i| parse_outpoint(i).unwrap()).collect());
let unspendable = sub_matches
.values_of("unspendable")
.map(|s| s.map(|i| parse_outpoint(i).unwrap()).collect());
let policy: Option<_> = sub_matches
.value_of("policy")
.map(|s| serde_json::from_str::<BTreeMap<String, Vec<usize>>>(s).unwrap());
let result = wallet.create_tx(
addressees,
send_all,
fee_rate * 1e-5,
policy,
utxos,
unspendable,
)?;
Ok(Some(format!(
"{:#?}\nPSBT: {}",
result.1,
base64::encode(&serialize(&result.0))
)))
} else if let Some(_sub_matches) = matches.subcommand_matches("policies") {
Ok(Some(format!(
"External: {}\nInternal: {}",
serde_json::to_string(&wallet.policies(ScriptType::External)?).unwrap(),
serde_json::to_string(&wallet.policies(ScriptType::Internal)?).unwrap(),
)))
} else if let Some(_sub_matches) = matches.subcommand_matches("public_descriptor") {
let external = match wallet.public_descriptor(ScriptType::External)? {
Some(desc) => format!("{}", desc),
None => "<NONE>".into(),
};
let internal = match wallet.public_descriptor(ScriptType::Internal)? {
Some(desc) => format!("{}", desc),
None => "<NONE>".into(),
};
Ok(Some(format!(
"External: {}\nInternal: {}",
external, internal
)))
} else if let Some(sub_matches) = matches.subcommand_matches("sign") {
let psbt = base64::decode(sub_matches.value_of("psbt").unwrap()).unwrap();
let psbt: PartiallySignedTransaction = deserialize(&psbt).unwrap();
let assume_height = sub_matches
.value_of("assume_height")
.map(|s| s.parse().unwrap());
let (psbt, finalized) = wallet.sign(psbt, assume_height)?;
let mut res = String::new();
res += &format!("PSBT: {}\n", base64::encode(&serialize(&psbt)));
res += &format!("Finalized: {}", finalized);
if finalized {
res += &format!("\nExtracted: {}", serialize_hex(&psbt.extract_tx()));
}
Ok(Some(res))
} else if let Some(sub_matches) = matches.subcommand_matches("broadcast") {
let tx = if let Some(psbt) = sub_matches.value_of("psbt") {
let psbt = base64::decode(psbt).unwrap();
let psbt: PartiallySignedTransaction = deserialize(&psbt).unwrap();
psbt.extract_tx()
} else if let Some(raw_tx) = sub_matches.value_of("tx") {
deserialize(&Vec::<u8>::from_hex(raw_tx).unwrap()).unwrap()
} else {
panic!("Missing `psbt` and `tx` option");
};
let txid = wallet.broadcast(tx).await?;
Ok(Some(format!("TXID: {}", txid)))
} else if let Some(sub_matches) = matches.subcommand_matches("extract_psbt") {
let psbt = base64::decode(&sub_matches.value_of("psbt").unwrap()).unwrap();
let psbt: PartiallySignedTransaction = deserialize(&psbt).unwrap();
Ok(Some(format!(
"TX: {}",
serialize(&psbt.extract_tx()).to_hex()
)))
} else if let Some(sub_matches) = matches.subcommand_matches("finalize_psbt") {
let psbt = base64::decode(&sub_matches.value_of("psbt").unwrap()).unwrap();
let mut psbt: PartiallySignedTransaction = deserialize(&psbt).unwrap();
let assume_height = sub_matches
.value_of("assume_height")
.map(|s| s.parse().unwrap());
let finalized = wallet.finalize_psbt(&mut psbt, assume_height)?;
let mut res = String::new();
res += &format!("PSBT: {}\n", base64::encode(&serialize(&psbt)));
res += &format!("Finalized: {}", finalized);
if finalized {
res += &format!("\nExtracted: {}", serialize_hex(&psbt.extract_tx()));
}
Ok(Some(res))
} else if let Some(sub_matches) = matches.subcommand_matches("combine_psbt") {
let mut psbts = sub_matches
.values_of("psbt")
.unwrap()
.map(|s| {
let bytes = base64::decode(s).unwrap();
deserialize::<PartiallySignedTransaction>(&bytes).unwrap()
})
.collect::<Vec<_>>();
let init_psbt = psbts.pop().unwrap();
let final_psbt = psbts
.into_iter()
.try_fold::<_, _, Result<PartiallySignedTransaction, Error>>(
init_psbt,
|mut acc, x| {
acc.merge(x)?;
Ok(acc)
},
)?;
Ok(Some(format!(
"PSBT: {}",
base64::encode(&serialize(&final_psbt))
)))
} else {
Ok(None)
}
}
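The `fee_rate * 1e-5` passed to `create_tx` above converts the user-facing sat/vbyte rate into a smaller unit; assuming the wallet API takes BTC per kilo-vbyte (my inference, not stated in this file), the factor follows from 1 sat = 1e-8 BTC and 1 kvbyte = 1 000 vbytes:

```rust
// 1 sat = 1e-8 BTC; 1 kvbyte = 1_000 vbytes, so the factor is 1e-8 * 1_000 = 1e-5.
// Unit target (BTC/kvB) is an assumption inferred from the 1e-5 constant.
fn sat_per_vb_to_btc_per_kvb(rate: f64) -> f64 {
    rate * 1e-5
}

fn main() {
    // 1 sat/vB == 0.00001 BTC/kvB
    assert!((sat_per_vb_to_btc_per_kvb(1.0) - 1e-5).abs() < 1e-15);
    // 50 sat/vB == 0.0005 BTC/kvB
    assert!((sat_per_vb_to_btc_per_kvb(50.0) - 5e-4).abs() < 1e-12);
}
```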

src/database/any.rs Normal file

@@ -0,0 +1,430 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Runtime-checked database types
//!
//! This module provides the implementation of [`AnyDatabase`] which allows switching the
//! inner [`Database`] type at runtime.
//!
//! ## Example
//!
//! In this example, `wallet_memory` and `wallet_sled` have the same type of `Wallet<(), AnyDatabase>`.
//!
//! ```no_run
//! # use bitcoin::Network;
//! # use bdk::database::{AnyDatabase, MemoryDatabase};
//! # use bdk::{Wallet};
//! let memory = MemoryDatabase::default();
//! let wallet_memory = Wallet::new_offline("...", None, Network::Testnet, memory)?;
//!
//! # #[cfg(feature = "key-value-db")]
//! # {
//! let sled = sled::open("my-database")?.open_tree("default_tree")?;
//! let wallet_sled = Wallet::new_offline("...", None, Network::Testnet, sled)?;
//! # }
//! # Ok::<(), bdk::Error>(())
//! ```
//!
//! When paired with the use of [`ConfigurableDatabase`], it allows creating wallets with any
//! supported database using a single line of code:
//!
//! ```no_run
//! # use bitcoin::Network;
//! # use bdk::database::*;
//! # use bdk::{Wallet};
//! let config = serde_json::from_str("...")?;
//! let database = AnyDatabase::from_config(&config)?;
//! let wallet = Wallet::new_offline("...", None, Network::Testnet, database)?;
//! # Ok::<(), bdk::Error>(())
//! ```
use super::*;
macro_rules! impl_from {
( $from:ty, $to:ty, $variant:ident, $( $cfg:tt )* ) => {
$( $cfg )*
impl From<$from> for $to {
fn from(inner: $from) -> Self {
<$to>::$variant(inner)
}
}
};
}
macro_rules! impl_inner_method {
( $enum_name:ident, $self:expr, $name:ident $(, $args:expr)* ) => {
match $self {
$enum_name::Memory(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "key-value-db")]
$enum_name::Sled(inner) => inner.$name( $($args, )* ),
#[cfg(feature = "sqlite")]
$enum_name::Sqlite(inner) => inner.$name( $($args, )* ),
}
}
}
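The two macros above exist so that each trait method on `AnyDatabase`/`AnyBatch` is a one-line match-and-delegate instead of hand-written dispatch for every variant. A minimal dependency-free sketch of the same pattern, using toy counter types (not the real `Database` trait):

```rust
// Toy version of the impl_inner_method! pattern: the macro expands to a
// match that forwards the method call to whichever variant is active.
struct SimpleCounter(u32);
struct DoublingCounter(u32);

impl SimpleCounter {
    fn bump(&mut self, by: u32) -> u32 { self.0 += by; self.0 }
}
impl DoublingCounter {
    fn bump(&mut self, by: u32) -> u32 { self.0 += by * 2; self.0 }
}

enum AnyCounter {
    Simple(SimpleCounter),
    Doubling(DoublingCounter),
}

macro_rules! impl_inner_method {
    ( $enum_name:ident, $self:expr, $name:ident $(, $args:expr)* ) => {
        match $self {
            $enum_name::Simple(inner) => inner.$name($($args),*),
            $enum_name::Doubling(inner) => inner.$name($($args),*),
        }
    }
}

impl AnyCounter {
    fn bump(&mut self, by: u32) -> u32 {
        impl_inner_method!(AnyCounter, self, bump, by)
    }
}

fn main() {
    let mut a = AnyCounter::Simple(SimpleCounter(0));
    let mut b = AnyCounter::Doubling(DoublingCounter(0));
    assert_eq!(a.bump(3), 3);
    assert_eq!(b.bump(3), 6);
}
```

In the real module the `#[cfg(feature = "...")]` attributes inside the macro arm gate the `Sled` and `Sqlite` match arms, so the dispatch compiles cleanly with any feature combination.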
/// Type that can contain any of the [`Database`] types defined by the library
///
/// It allows switching database type at runtime.
///
/// See [this module](crate::database::any)'s documentation for a usage example.
#[derive(Debug)]
pub enum AnyDatabase {
/// In-memory ephemeral database
Memory(memory::MemoryDatabase),
#[cfg(feature = "key-value-db")]
#[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
/// Simple key-value embedded database based on [`sled`]
Sled(sled::Tree),
#[cfg(feature = "sqlite")]
#[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
/// Sqlite embedded database using [`rusqlite`]
Sqlite(sqlite::SqliteDatabase),
}
impl_from!(memory::MemoryDatabase, AnyDatabase, Memory,);
impl_from!(sled::Tree, AnyDatabase, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(sqlite::SqliteDatabase, AnyDatabase, Sqlite, #[cfg(feature = "sqlite")]);
/// Type that contains any of the [`BatchDatabase::Batch`] types defined by the library
pub enum AnyBatch {
/// In-memory ephemeral database
Memory(<memory::MemoryDatabase as BatchDatabase>::Batch),
#[cfg(feature = "key-value-db")]
#[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
/// Simple key-value embedded database based on [`sled`]
Sled(<sled::Tree as BatchDatabase>::Batch),
#[cfg(feature = "sqlite")]
#[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
/// Sqlite embedded database using [`rusqlite`]
Sqlite(<sqlite::SqliteDatabase as BatchDatabase>::Batch),
}
impl_from!(
<memory::MemoryDatabase as BatchDatabase>::Batch,
AnyBatch,
Memory,
);
impl_from!(<sled::Tree as BatchDatabase>::Batch, AnyBatch, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(<sqlite::SqliteDatabase as BatchDatabase>::Batch, AnyBatch, Sqlite, #[cfg(feature = "sqlite")]);
impl BatchOperations for AnyDatabase {
fn set_script_pubkey(
&mut self,
script: &Script,
keychain: KeychainKind,
child: u32,
) -> Result<(), Error> {
impl_inner_method!(
AnyDatabase,
self,
set_script_pubkey,
script,
keychain,
child
)
}
fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, set_utxo, utxo)
}
fn set_raw_tx(&mut self, transaction: &Transaction) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, set_raw_tx, transaction)
}
fn set_tx(&mut self, transaction: &TransactionDetails) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, set_tx, transaction)
}
fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, set_last_index, keychain, value)
}
fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, set_sync_time, sync_time)
}
fn del_script_pubkey_from_path(
&mut self,
keychain: KeychainKind,
child: u32,
) -> Result<Option<Script>, Error> {
impl_inner_method!(
AnyDatabase,
self,
del_script_pubkey_from_path,
keychain,
child
)
}
fn del_path_from_script_pubkey(
&mut self,
script: &Script,
) -> Result<Option<(KeychainKind, u32)>, Error> {
impl_inner_method!(AnyDatabase, self, del_path_from_script_pubkey, script)
}
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
impl_inner_method!(AnyDatabase, self, del_utxo, outpoint)
}
fn del_raw_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error> {
impl_inner_method!(AnyDatabase, self, del_raw_tx, txid)
}
fn del_tx(
&mut self,
txid: &Txid,
include_raw: bool,
) -> Result<Option<TransactionDetails>, Error> {
impl_inner_method!(AnyDatabase, self, del_tx, txid, include_raw)
}
fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
impl_inner_method!(AnyDatabase, self, del_last_index, keychain)
}
fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
impl_inner_method!(AnyDatabase, self, del_sync_time)
}
}
impl Database for AnyDatabase {
fn check_descriptor_checksum<B: AsRef<[u8]>>(
&mut self,
keychain: KeychainKind,
bytes: B,
) -> Result<(), Error> {
impl_inner_method!(
AnyDatabase,
self,
check_descriptor_checksum,
keychain,
bytes
)
}
fn iter_script_pubkeys(&self, keychain: Option<KeychainKind>) -> Result<Vec<Script>, Error> {
impl_inner_method!(AnyDatabase, self, iter_script_pubkeys, keychain)
}
fn iter_utxos(&self) -> Result<Vec<LocalUtxo>, Error> {
impl_inner_method!(AnyDatabase, self, iter_utxos)
}
fn iter_raw_txs(&self) -> Result<Vec<Transaction>, Error> {
impl_inner_method!(AnyDatabase, self, iter_raw_txs)
}
fn iter_txs(&self, include_raw: bool) -> Result<Vec<TransactionDetails>, Error> {
impl_inner_method!(AnyDatabase, self, iter_txs, include_raw)
}
fn get_script_pubkey_from_path(
&self,
keychain: KeychainKind,
child: u32,
) -> Result<Option<Script>, Error> {
impl_inner_method!(
AnyDatabase,
self,
get_script_pubkey_from_path,
keychain,
child
)
}
fn get_path_from_script_pubkey(
&self,
script: &Script,
) -> Result<Option<(KeychainKind, u32)>, Error> {
impl_inner_method!(AnyDatabase, self, get_path_from_script_pubkey, script)
}
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
impl_inner_method!(AnyDatabase, self, get_utxo, outpoint)
}
fn get_raw_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
impl_inner_method!(AnyDatabase, self, get_raw_tx, txid)
}
fn get_tx(&self, txid: &Txid, include_raw: bool) -> Result<Option<TransactionDetails>, Error> {
impl_inner_method!(AnyDatabase, self, get_tx, txid, include_raw)
}
fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
impl_inner_method!(AnyDatabase, self, get_last_index, keychain)
}
fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
impl_inner_method!(AnyDatabase, self, get_sync_time)
}
fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error> {
impl_inner_method!(AnyDatabase, self, increment_last_index, keychain)
}
fn flush(&mut self) -> Result<(), Error> {
impl_inner_method!(AnyDatabase, self, flush)
}
}
impl BatchOperations for AnyBatch {
fn set_script_pubkey(
&mut self,
script: &Script,
keychain: KeychainKind,
child: u32,
) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_script_pubkey, script, keychain, child)
}
fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_utxo, utxo)
}
fn set_raw_tx(&mut self, transaction: &Transaction) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_raw_tx, transaction)
}
fn set_tx(&mut self, transaction: &TransactionDetails) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_tx, transaction)
}
fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_last_index, keychain, value)
}
fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error> {
impl_inner_method!(AnyBatch, self, set_sync_time, sync_time)
}
fn del_script_pubkey_from_path(
&mut self,
keychain: KeychainKind,
child: u32,
) -> Result<Option<Script>, Error> {
impl_inner_method!(AnyBatch, self, del_script_pubkey_from_path, keychain, child)
}
fn del_path_from_script_pubkey(
&mut self,
script: &Script,
) -> Result<Option<(KeychainKind, u32)>, Error> {
impl_inner_method!(AnyBatch, self, del_path_from_script_pubkey, script)
}
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
impl_inner_method!(AnyBatch, self, del_utxo, outpoint)
}
fn del_raw_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error> {
impl_inner_method!(AnyBatch, self, del_raw_tx, txid)
}
fn del_tx(
&mut self,
txid: &Txid,
include_raw: bool,
) -> Result<Option<TransactionDetails>, Error> {
impl_inner_method!(AnyBatch, self, del_tx, txid, include_raw)
}
fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
impl_inner_method!(AnyBatch, self, del_last_index, keychain)
}
fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
impl_inner_method!(AnyBatch, self, del_sync_time)
}
}
impl BatchDatabase for AnyDatabase {
type Batch = AnyBatch;
fn begin_batch(&self) -> Self::Batch {
match self {
AnyDatabase::Memory(inner) => inner.begin_batch().into(),
#[cfg(feature = "key-value-db")]
AnyDatabase::Sled(inner) => inner.begin_batch().into(),
#[cfg(feature = "sqlite")]
AnyDatabase::Sqlite(inner) => inner.begin_batch().into(),
}
}
fn commit_batch(&mut self, batch: Self::Batch) -> Result<(), Error> {
match self {
AnyDatabase::Memory(db) => match batch {
AnyBatch::Memory(batch) => db.commit_batch(batch),
#[cfg(any(feature = "key-value-db", feature = "sqlite"))]
_ => unimplemented!("Other batch shouldn't be used with Memory db."),
},
#[cfg(feature = "key-value-db")]
AnyDatabase::Sled(db) => match batch {
AnyBatch::Sled(batch) => db.commit_batch(batch),
_ => unimplemented!("Other batch shouldn't be used with Sled db."),
},
#[cfg(feature = "sqlite")]
AnyDatabase::Sqlite(db) => match batch {
AnyBatch::Sqlite(batch) => db.commit_batch(batch),
_ => unimplemented!("Other batch shouldn't be used with Sqlite db."),
},
}
}
}
/// Configuration type for a [`sled::Tree`] database
#[cfg(feature = "key-value-db")]
#[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct SledDbConfiguration {
/// Main directory of the db
pub path: String,
/// Name of the database tree, a separated namespace for the data
pub tree_name: String,
}
#[cfg(feature = "key-value-db")]
impl ConfigurableDatabase for sled::Tree {
type Config = SledDbConfiguration;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
Ok(sled::open(&config.path)?.open_tree(&config.tree_name)?)
}
}
/// Configuration type for a [`sqlite::SqliteDatabase`] database
#[cfg(feature = "sqlite")]
#[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct SqliteDbConfiguration {
/// Main directory of the db
pub path: String,
}
#[cfg(feature = "sqlite")]
impl ConfigurableDatabase for sqlite::SqliteDatabase {
type Config = SqliteDbConfiguration;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
Ok(sqlite::SqliteDatabase::new(config.path.clone()))
}
}
/// Type that can contain any of the database configurations defined by the library
///
/// This allows storing a single configuration that can be loaded into an [`AnyDatabase`]
/// instance. Wallets that plan to offer users the ability to switch database backends at runtime
/// will find this particularly useful.
#[derive(Debug, serde::Serialize, serde::Deserialize)]
pub enum AnyDatabaseConfig {
/// Memory database has no config
Memory(()),
#[cfg(feature = "key-value-db")]
#[cfg_attr(docsrs, doc(cfg(feature = "key-value-db")))]
/// Simple key-value embedded database based on [`sled`]
Sled(SledDbConfiguration),
#[cfg(feature = "sqlite")]
#[cfg_attr(docsrs, doc(cfg(feature = "sqlite")))]
/// Sqlite embedded database using [`rusqlite`]
Sqlite(SqliteDbConfiguration),
}
impl ConfigurableDatabase for AnyDatabase {
type Config = AnyDatabaseConfig;
fn from_config(config: &Self::Config) -> Result<Self, Error> {
Ok(match config {
AnyDatabaseConfig::Memory(inner) => {
AnyDatabase::Memory(memory::MemoryDatabase::from_config(inner)?)
}
#[cfg(feature = "key-value-db")]
AnyDatabaseConfig::Sled(inner) => AnyDatabase::Sled(sled::Tree::from_config(inner)?),
#[cfg(feature = "sqlite")]
AnyDatabaseConfig::Sqlite(inner) => {
AnyDatabase::Sqlite(sqlite::SqliteDatabase::from_config(inner)?)
}
})
}
}
impl_from!((), AnyDatabaseConfig, Memory,);
impl_from!(SledDbConfiguration, AnyDatabaseConfig, Sled, #[cfg(feature = "key-value-db")]);
impl_from!(SqliteDbConfiguration, AnyDatabaseConfig, Sqlite, #[cfg(feature = "sqlite")]);
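The `ConfigurableDatabase` impls above let a single serialized `AnyDatabaseConfig` pick the backend at runtime. A stdlib-only sketch of the same from-config dispatch, with toy backend types standing in for `MemoryDatabase`/`sled::Tree`/`SqliteDatabase` (the real code deserializes the config with serde and returns `Result<_, Error>`):

```rust
// Toy version of the AnyDatabaseConfig -> AnyDatabase pattern: each backend
// is built from its own config variant, and the enum does the dispatch.
struct MemoryDb;
#[allow(dead_code)]
struct FileDb { path: String }

enum AnyConfig {
    Memory,
    File { path: String },
}

enum AnyDb {
    Memory(MemoryDb),
    File(FileDb),
}

impl AnyDb {
    fn from_config(config: &AnyConfig) -> AnyDb {
        match config {
            AnyConfig::Memory => AnyDb::Memory(MemoryDb),
            AnyConfig::File { path } => AnyDb::File(FileDb { path: path.clone() }),
        }
    }
    fn name(&self) -> &'static str {
        match self {
            AnyDb::Memory(_) => "memory",
            AnyDb::File(_) => "file",
        }
    }
}

fn main() {
    let db = AnyDb::from_config(&AnyConfig::File { path: "wallet.db".into() });
    assert_eq!(db.name(), "file");
    assert_eq!(AnyDb::from_config(&AnyConfig::Memory).name(), "memory");
}
```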


@@ -1,3 +1,14 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::convert::TryInto;
use sled::{Batch, Tree};
@@ -7,19 +18,19 @@ use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction};
use crate::database::memory::MapKey;
use crate::database::{BatchDatabase, BatchOperations, Database};
use crate::database::{BatchDatabase, BatchOperations, Database, SyncTime};
use crate::error::Error;
use crate::types::*;
macro_rules! impl_batch_operations {
( { $($after_insert:tt)* }, $process_delete:ident ) => {
fn set_script_pubkey(&mut self, script: &Script, script_type: ScriptType, path: u32) -> Result<(), Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
fn set_script_pubkey(&mut self, script: &Script, keychain: KeychainKind, path: u32) -> Result<(), Error> {
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
self.insert(key, serialize(script))$($after_insert)*;
let key = MapKey::Script(Some(script)).as_map_key();
let value = json!({
"t": script_type,
"t": keychain,
"p": path,
});
self.insert(key, serde_json::to_vec(&value)?)$($after_insert)*;
@@ -27,10 +38,13 @@ macro_rules! impl_batch_operations {
Ok(())
}
fn set_utxo(&mut self, utxo: &UTXO) -> Result<(), Error> {
let key = MapKey::UTXO(Some(&utxo.outpoint)).as_map_key();
let value = serialize(&utxo.txout);
self.insert(key, value)$($after_insert)*;
fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error> {
let key = MapKey::Utxo(Some(&utxo.outpoint)).as_map_key();
let value = json!({
"t": utxo.txout,
"i": utxo.keychain,
});
self.insert(key, serde_json::to_vec(&value)?)$($after_insert)*;
Ok(())
}
@@ -61,22 +75,29 @@ macro_rules! impl_batch_operations {
Ok(())
}
fn set_last_index(&mut self, script_type: ScriptType, value: u32) -> Result<(), Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
self.insert(key, &value.to_be_bytes())$($after_insert)*;
Ok(())
}
fn del_script_pubkey_from_path(&mut self, script_type: ScriptType, path: u32) -> Result<Option<Script>, Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
fn set_sync_time(&mut self, data: SyncTime) -> Result<(), Error> {
let key = MapKey::SyncTime.as_map_key();
self.insert(key, serde_json::to_vec(&data)?)$($after_insert)*;
Ok(())
}
fn del_script_pubkey_from_path(&mut self, keychain: KeychainKind, path: u32) -> Result<Option<Script>, Error> {
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
let res = self.remove(key);
let res = $process_delete!(res);
Ok(res.map_or(Ok(None), |x| Some(deserialize(&x)).transpose())?)
}
fn del_path_from_script_pubkey(&mut self, script: &Script) -> Result<Option<(ScriptType, u32)>, Error> {
fn del_path_from_script_pubkey(&mut self, script: &Script) -> Result<Option<(KeychainKind, u32)>, Error> {
let key = MapKey::Script(Some(script)).as_map_key();
let res = self.remove(key);
let res = $process_delete!(res);
@@ -93,16 +114,19 @@ macro_rules! impl_batch_operations {
}
}
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<UTXO>, Error> {
let key = MapKey::UTXO(Some(outpoint)).as_map_key();
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
let key = MapKey::Utxo(Some(outpoint)).as_map_key();
let res = self.remove(key);
let res = $process_delete!(res);
match res {
None => Ok(None),
Some(b) => {
let txout = deserialize(&b)?;
Ok(Some(UTXO { outpoint: outpoint.clone(), txout }))
let mut val: serde_json::Value = serde_json::from_slice(&b)?;
let txout = serde_json::from_value(val["t"].take())?;
let keychain = serde_json::from_value(val["i"].take())?;
Ok(Some(LocalUtxo { outpoint: outpoint.clone(), txout, keychain }))
}
}
}
@@ -137,8 +161,8 @@ macro_rules! impl_batch_operations {
}
}
fn del_last_index(&mut self, script_type: ScriptType) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
let res = self.remove(key);
let res = $process_delete!(res);
@@ -151,6 +175,14 @@ macro_rules! impl_batch_operations {
}
}
}
fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
let key = MapKey::SyncTime.as_map_key();
let res = self.remove(key);
let res = $process_delete!(res);
Ok(res.map(|b| serde_json::from_slice(&b)).transpose()?)
}
}
}
@@ -176,10 +208,10 @@ impl BatchOperations for Batch {
impl Database for Tree {
fn check_descriptor_checksum<B: AsRef<[u8]>>(
&mut self,
script_type: ScriptType,
keychain: KeychainKind,
bytes: B,
) -> Result<(), Error> {
let key = MapKey::DescriptorChecksum(script_type).as_map_key();
let key = MapKey::DescriptorChecksum(keychain).as_map_key();
let prev = self.get(&key)?.map(|x| x.to_vec());
if let Some(val) = prev {
@@ -194,8 +226,8 @@ impl Database for Tree {
}
}
fn iter_script_pubkeys(&self, script_type: Option<ScriptType>) -> Result<Vec<Script>, Error> {
let key = MapKey::Path((script_type, None)).as_map_key();
fn iter_script_pubkeys(&self, keychain: Option<KeychainKind>) -> Result<Vec<Script>, Error> {
let key = MapKey::Path((keychain, None)).as_map_key();
self.scan_prefix(key)
.map(|x| -> Result<_, Error> {
let (_, v) = x?;
@@ -204,14 +236,22 @@ impl Database for Tree {
.collect()
}
fn iter_utxos(&self) -> Result<Vec<UTXO>, Error> {
let key = MapKey::UTXO(None).as_map_key();
fn iter_utxos(&self) -> Result<Vec<LocalUtxo>, Error> {
let key = MapKey::Utxo(None).as_map_key();
self.scan_prefix(key)
.map(|x| -> Result<_, Error> {
let (k, v) = x?;
let outpoint = deserialize(&k[1..])?;
let txout = deserialize(&v)?;
Ok(UTXO { outpoint, txout })
let mut val: serde_json::Value = serde_json::from_slice(&v)?;
let txout = serde_json::from_value(val["t"].take())?;
let keychain = serde_json::from_value(val["i"].take())?;
Ok(LocalUtxo {
outpoint,
txout,
keychain,
})
})
.collect()
}
@@ -244,17 +284,17 @@ impl Database for Tree {
fn get_script_pubkey_from_path(
&self,
script_type: ScriptType,
keychain: KeychainKind,
path: u32,
) -> Result<Option<Script>, Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
Ok(self.get(key)?.map(|b| deserialize(&b)).transpose()?)
}
fn get_path_from_script_pubkey(
&self,
script: &Script,
) -> Result<Option<(ScriptType, u32)>, Error> {
) -> Result<Option<(KeychainKind, u32)>, Error> {
let key = MapKey::Script(Some(script)).as_map_key();
self.get(key)?
.map(|b| -> Result<_, Error> {
@@ -267,14 +307,18 @@ impl Database for Tree {
.transpose()
}
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>, Error> {
let key = MapKey::UTXO(Some(outpoint)).as_map_key();
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
let key = MapKey::Utxo(Some(outpoint)).as_map_key();
self.get(key)?
.map(|b| -> Result<_, Error> {
let txout = deserialize(&b)?;
Ok(UTXO {
outpoint: outpoint.clone(),
let mut val: serde_json::Value = serde_json::from_slice(&b)?;
let txout = serde_json::from_value(val["t"].take())?;
let keychain = serde_json::from_value(val["i"].take())?;
Ok(LocalUtxo {
outpoint: *outpoint,
txout,
keychain,
})
})
.transpose()
@@ -291,7 +335,7 @@ impl Database for Tree {
.map(|b| -> Result<_, Error> {
let mut txdetails: TransactionDetails = serde_json::from_slice(&b)?;
if include_raw {
txdetails.transaction = self.get_raw_tx(&txid)?;
txdetails.transaction = self.get_raw_tx(txid)?;
}
Ok(txdetails)
@@ -299,8 +343,8 @@ impl Database for Tree {
.transpose()
}
fn get_last_index(&self, script_type: ScriptType) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
self.get(key)?
.map(|b| -> Result<_, Error> {
let array: [u8; 4] = b
@@ -313,9 +357,17 @@ impl Database for Tree {
.transpose()
}
fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
let key = MapKey::SyncTime.as_map_key();
Ok(self
.get(key)?
.map(|b| serde_json::from_slice(&b))
.transpose()?)
}
// inserts 0 if not present
fn increment_last_index(&mut self, script_type: ScriptType) -> Result<u32, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
self.update_and_fetch(key, |prev| {
let new = match prev {
Some(b) => {
@@ -338,6 +390,10 @@ impl Database for Tree {
Ok(val)
})
}
fn flush(&mut self) -> Result<(), Error> {
Ok(Tree::flush(self).map(|_| ())?)
}
}
impl BatchDatabase for Tree {
@@ -354,18 +410,12 @@ impl BatchDatabase for Tree {
#[cfg(test)]
mod test {
use std::str::FromStr;
use lazy_static::lazy_static;
use std::sync::{Arc, Condvar, Mutex, Once};
use std::time::{SystemTime, UNIX_EPOCH};
use sled::{Db, Tree};
use bitcoin::consensus::encode::deserialize;
use bitcoin::hashes::hex::*;
use bitcoin::*;
use crate::database::*;
static mut COUNT: usize = 0;
lazy_static! {
@@ -406,185 +456,46 @@ mod test {
#[test]
fn test_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((script_type, path.clone()))
);
crate::database::test::test_script_pubkey(get_tree());
}
#[test]
fn test_batch_script_pubkey() {
let mut tree = get_tree();
let mut batch = tree.begin_batch();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
batch.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
None
);
assert_eq!(tree.get_path_from_script_pubkey(&script).unwrap(), None);
tree.commit_batch(batch).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((script_type, path.clone()))
);
crate::database::test::test_batch_script_pubkey(get_tree());
}
#[test]
fn test_iter_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
crate::database::test::test_iter_script_pubkey(get_tree());
}
#[test]
fn test_del_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
tree.del_script_pubkey_from_path(script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 0);
crate::database::test::test_del_script_pubkey(get_tree());
}
#[test]
fn test_utxo() {
let mut tree = get_tree();
let outpoint = OutPoint::from_str(
"5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456:0",
)
.unwrap();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let txout = TxOut {
value: 133742,
script_pubkey: script,
};
let utxo = UTXO { txout, outpoint };
tree.set_utxo(&utxo).unwrap();
assert_eq!(tree.get_utxo(&outpoint).unwrap(), Some(utxo));
crate::database::test::test_utxo(get_tree());
}
#[test]
fn test_raw_tx() {
let mut tree = get_tree();
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
tree.set_raw_tx(&tx).unwrap();
let txid = tx.txid();
assert_eq!(tree.get_raw_tx(&txid).unwrap(), Some(tx));
crate::database::test::test_raw_tx(get_tree());
}
#[test]
fn test_tx() {
let mut tree = get_tree();
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
let txid = tx.txid();
let mut tx_details = TransactionDetails {
transaction: Some(tx),
txid,
timestamp: 123456,
received: 1337,
sent: 420420,
height: Some(1000),
};
tree.set_tx(&tx_details).unwrap();
// get with raw tx too
assert_eq!(
tree.get_tx(&tx_details.txid, true).unwrap(),
Some(tx_details.clone())
);
// get only raw_tx
assert_eq!(
tree.get_raw_tx(&tx_details.txid).unwrap(),
tx_details.transaction
);
// now get without raw_tx
tx_details.transaction = None;
assert_eq!(
tree.get_tx(&tx_details.txid, false).unwrap(),
Some(tx_details)
);
crate::database::test::test_tx(get_tree());
}
#[test]
fn test_last_index() {
let mut tree = get_tree();
tree.set_last_index(ScriptType::External, 1337).unwrap();
assert_eq!(
tree.get_last_index(ScriptType::External).unwrap(),
Some(1337)
);
assert_eq!(tree.get_last_index(ScriptType::Internal).unwrap(), None);
let res = tree.increment_last_index(ScriptType::External).unwrap();
assert_eq!(res, 1338);
let res = tree.increment_last_index(ScriptType::Internal).unwrap();
assert_eq!(res, 0);
assert_eq!(
tree.get_last_index(ScriptType::External).unwrap(),
Some(1338)
);
assert_eq!(tree.get_last_index(ScriptType::Internal).unwrap(), Some(0));
crate::database::test::test_last_index(get_tree());
}
// TODO: more tests...
#[test]
fn test_sync_time() {
crate::database::test::test_sync_time(get_tree());
}
}


@@ -1,3 +1,20 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! In-memory ephemeral database
//!
//! This module defines an in-memory database type called [`MemoryDatabase`] that is based on a
//! [`BTreeMap`].
use std::any::Any;
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included};
@@ -5,7 +22,7 @@ use bitcoin::consensus::encode::{deserialize, serialize};
use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction};
use crate::database::{BatchDatabase, BatchOperations, Database};
use crate::database::{BatchDatabase, BatchOperations, ConfigurableDatabase, Database, SyncTime};
use crate::error::Error;
use crate::types::*;
@@ -16,19 +33,21 @@ use crate::types::*;
// transactions t<txid> -> tx details
// deriv indexes c{i,e} -> u32
// descriptor checksum d{i,e} -> vec<u8>
// last sync time l -> { height, timestamp }
pub(crate) enum MapKey<'a> {
Path((Option<ScriptType>, Option<u32>)),
Path((Option<KeychainKind>, Option<u32>)),
Script(Option<&'a Script>),
UTXO(Option<&'a OutPoint>),
Utxo(Option<&'a OutPoint>),
RawTx(Option<&'a Txid>),
Transaction(Option<&'a Txid>),
LastIndex(ScriptType),
DescriptorChecksum(ScriptType),
LastIndex(KeychainKind),
SyncTime,
DescriptorChecksum(KeychainKind),
}
impl MapKey<'_> {
pub fn as_prefix(&self) -> Vec<u8> {
fn as_prefix(&self) -> Vec<u8> {
match self {
MapKey::Path((st, _)) => {
let mut v = b"p".to_vec();
@@ -38,19 +57,20 @@ impl MapKey<'_> {
v
}
MapKey::Script(_) => b"s".to_vec(),
MapKey::UTXO(_) => b"u".to_vec(),
MapKey::Utxo(_) => b"u".to_vec(),
MapKey::RawTx(_) => b"r".to_vec(),
MapKey::Transaction(_) => b"t".to_vec(),
MapKey::LastIndex(st) => [b"c", st.as_ref()].concat(),
MapKey::SyncTime => b"l".to_vec(),
MapKey::DescriptorChecksum(st) => [b"d", st.as_ref()].concat(),
}
}
fn serialize_content(&self) -> Vec<u8> {
match self {
MapKey::Path((_, Some(child))) => u32::from(*child).to_be_bytes().to_vec(),
MapKey::Path((_, Some(child))) => child.to_be_bytes().to_vec(),
MapKey::Script(Some(s)) => serialize(*s),
MapKey::UTXO(Some(s)) => serialize(*s),
MapKey::Utxo(Some(s)) => serialize(*s),
MapKey::RawTx(Some(s)) => serialize(*s),
MapKey::Transaction(Some(s)) => serialize(*s),
_ => vec![],
@@ -65,24 +85,41 @@ impl MapKey<'_> {
}
}
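The one-letter prefixes above combine with the serialized content to form a flat byte-string key. A minimal standalone sketch of that layout, using a stand-in `Keychain` enum in place of the crate's `KeychainKind` (the tag bytes and helper names here are illustrative, not the crate's):

```rust
// Illustrative sketch of the flat key layout: one-letter prefix plus
// serialized content. A Path key is b"p" + keychain tag + 4-byte
// big-endian child index. `Keychain` stands in for `KeychainKind`.
#[derive(Clone, Copy)]
enum Keychain {
    External,
    Internal,
}

impl Keychain {
    fn tag(&self) -> &'static [u8] {
        match self {
            Keychain::External => b"e",
            Keychain::Internal => b"i",
        }
    }
}

fn path_key(keychain: Keychain, child: u32) -> Vec<u8> {
    let mut key = b"p".to_vec();
    key.extend_from_slice(keychain.tag());
    key.extend_from_slice(&child.to_be_bytes());
    key
}

fn main() {
    // "p" + "e" + 42 encoded as a big-endian u32
    assert_eq!(path_key(Keychain::External, 42), vec![b'p', b'e', 0, 0, 0, 42]);
    println!("ok");
}
```

Because every key in one family shares the same prefix, a single ordered map can hold all the tables and still support per-family range scans.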
fn after(key: &Vec<u8>) -> Vec<u8> {
let mut key = key.clone();
let len = key.len();
if len > 0 {
// TODO i guess it could break if the value is 0xFF, but it's fine for now
key[len - 1] += 1;
fn after(key: &[u8]) -> Vec<u8> {
let mut key = key.to_owned();
let mut idx = key.len();
while idx > 0 {
if key[idx - 1] == 0xFF {
idx -= 1;
continue;
} else {
key[idx - 1] += 1;
break;
}
}
key
}
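The rewritten `after` fixes the old TODO: it now skips trailing `0xFF` bytes and bumps the last incrementable byte, producing an exclusive upper bound for prefix range scans like `range((Included(&key), Excluded(&after(&key))))`. A standalone sketch of the same logic:

```rust
// Prefix-increment trick: bump the last byte that isn't 0xFF to get
// an exclusive upper bound for a range scan over byte-string keys.
fn after(key: &[u8]) -> Vec<u8> {
    let mut key = key.to_owned();
    let mut idx = key.len();
    while idx > 0 {
        if key[idx - 1] == 0xFF {
            // 0xFF can't be incremented without overflow; move left
            idx -= 1;
        } else {
            key[idx - 1] += 1;
            break;
        }
    }
    key
}

fn main() {
    // common case: bump the last byte
    assert_eq!(after(&[0x70, 0x01]), vec![0x70, 0x02]);
    // trailing 0xFF: carry into the previous byte
    assert_eq!(after(&[0x70, 0xFF]), vec![0x71, 0xFF]);
    // all bytes 0xFF: returned unchanged (still a degenerate edge case)
    assert_eq!(after(&[0xFF, 0xFF]), vec![0xFF, 0xFF]);
    println!("ok");
}
```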
#[derive(Debug)]
/// In-memory ephemeral database
///
/// This database can be used as temporary storage for wallets that are not kept permanently on
/// a device, or on platforms that don't provide a filesystem, like `wasm32`.
///
/// Once it's dropped, its content will be lost.
///
/// If you are looking for a permanent storage solution, you can try the default key-value
/// database called [`sled`]. See the [`database`] module documentation for more details.
///
/// [`database`]: crate::database
#[derive(Debug, Default)]
pub struct MemoryDatabase {
map: BTreeMap<Vec<u8>, Box<dyn std::any::Any>>,
map: BTreeMap<Vec<u8>, Box<dyn Any + Send + Sync>>,
deleted_keys: Vec<Vec<u8>>,
}
impl MemoryDatabase {
/// Create a new empty database
pub fn new() -> Self {
MemoryDatabase {
map: BTreeMap::new(),
@@ -95,15 +132,15 @@ impl BatchOperations for MemoryDatabase {
fn set_script_pubkey(
&mut self,
script: &Script,
script_type: ScriptType,
keychain: KeychainKind,
path: u32,
) -> Result<(), Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
self.map.insert(key, Box::new(script.clone()));
let key = MapKey::Script(Some(script)).as_map_key();
let value = json!({
"t": script_type,
"t": keychain,
"p": path,
});
self.map.insert(key, Box::new(value));
@@ -111,9 +148,10 @@ impl BatchOperations for MemoryDatabase {
Ok(())
}
fn set_utxo(&mut self, utxo: &UTXO) -> Result<(), Error> {
let key = MapKey::UTXO(Some(&utxo.outpoint)).as_map_key();
self.map.insert(key, Box::new(utxo.txout.clone()));
fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error> {
let key = MapKey::Utxo(Some(&utxo.outpoint)).as_map_key();
self.map
.insert(key, Box::new((utxo.txout.clone(), utxo.keychain)));
Ok(())
}
@@ -139,19 +177,25 @@ impl BatchOperations for MemoryDatabase {
Ok(())
}
fn set_last_index(&mut self, script_type: ScriptType, value: u32) -> Result<(), Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
self.map.insert(key, Box::new(value));
Ok(())
}
fn set_sync_time(&mut self, data: SyncTime) -> Result<(), Error> {
let key = MapKey::SyncTime.as_map_key();
self.map.insert(key, Box::new(data));
Ok(())
}
fn del_script_pubkey_from_path(
&mut self,
script_type: ScriptType,
keychain: KeychainKind,
path: u32,
) -> Result<Option<Script>, Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
let res = self.map.remove(&key);
self.deleted_keys.push(key);
@@ -160,7 +204,7 @@ impl BatchOperations for MemoryDatabase {
fn del_path_from_script_pubkey(
&mut self,
script: &Script,
) -> Result<Option<(ScriptType, u32)>, Error> {
) -> Result<Option<(KeychainKind, u32)>, Error> {
let key = MapKey::Script(Some(script)).as_map_key();
let res = self.map.remove(&key);
self.deleted_keys.push(key);
@@ -176,18 +220,19 @@ impl BatchOperations for MemoryDatabase {
}
}
}
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<UTXO>, Error> {
let key = MapKey::UTXO(Some(outpoint)).as_map_key();
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
let key = MapKey::Utxo(Some(outpoint)).as_map_key();
let res = self.map.remove(&key);
self.deleted_keys.push(key);
match res {
None => Ok(None),
Some(b) => {
let txout = b.downcast_ref().cloned().unwrap();
Ok(Some(UTXO {
outpoint: outpoint.clone(),
let (txout, keychain) = b.downcast_ref().cloned().unwrap();
Ok(Some(LocalUtxo {
outpoint: *outpoint,
txout,
keychain,
}))
}
}
@@ -224,8 +269,8 @@ impl BatchOperations for MemoryDatabase {
}
}
}
fn del_last_index(&mut self, script_type: ScriptType) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
let res = self.map.remove(&key);
self.deleted_keys.push(key);
@@ -234,15 +279,22 @@ impl BatchOperations for MemoryDatabase {
Some(b) => Ok(Some(*b.downcast_ref().unwrap())),
}
}
fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error> {
let key = MapKey::SyncTime.as_map_key();
let res = self.map.remove(&key);
self.deleted_keys.push(key);
Ok(res.map(|b| b.downcast_ref().cloned().unwrap()))
}
}
impl Database for MemoryDatabase {
fn check_descriptor_checksum<B: AsRef<[u8]>>(
&mut self,
script_type: ScriptType,
keychain: KeychainKind,
bytes: B,
) -> Result<(), Error> {
let key = MapKey::DescriptorChecksum(script_type).as_map_key();
let key = MapKey::DescriptorChecksum(keychain).as_map_key();
let prev = self
.map
@@ -260,22 +312,26 @@ impl Database for MemoryDatabase {
}
}
fn iter_script_pubkeys(&self, script_type: Option<ScriptType>) -> Result<Vec<Script>, Error> {
let key = MapKey::Path((script_type, None)).as_map_key();
fn iter_script_pubkeys(&self, keychain: Option<KeychainKind>) -> Result<Vec<Script>, Error> {
let key = MapKey::Path((keychain, None)).as_map_key();
self.map
.range::<Vec<u8>, _>((Included(&key), Excluded(&after(&key))))
.map(|(_, v)| Ok(v.downcast_ref().cloned().unwrap()))
.collect()
}
fn iter_utxos(&self) -> Result<Vec<UTXO>, Error> {
let key = MapKey::UTXO(None).as_map_key();
fn iter_utxos(&self) -> Result<Vec<LocalUtxo>, Error> {
let key = MapKey::Utxo(None).as_map_key();
self.map
.range::<Vec<u8>, _>((Included(&key), Excluded(&after(&key))))
.map(|(k, v)| {
let outpoint = deserialize(&k[1..]).unwrap();
let txout = v.downcast_ref().cloned().unwrap();
Ok(UTXO { outpoint, txout })
let (txout, keychain) = v.downcast_ref().cloned().unwrap();
Ok(LocalUtxo {
outpoint,
txout,
keychain,
})
})
.collect()
}
@@ -306,10 +362,10 @@ impl Database for MemoryDatabase {
fn get_script_pubkey_from_path(
&self,
script_type: ScriptType,
keychain: KeychainKind,
path: u32,
) -> Result<Option<Script>, Error> {
let key = MapKey::Path((Some(script_type), Some(path))).as_map_key();
let key = MapKey::Path((Some(keychain), Some(path))).as_map_key();
Ok(self
.map
.get(&key)
@@ -319,7 +375,7 @@ impl Database for MemoryDatabase {
fn get_path_from_script_pubkey(
&self,
script: &Script,
) -> Result<Option<(ScriptType, u32)>, Error> {
) -> Result<Option<(KeychainKind, u32)>, Error> {
let key = MapKey::Script(Some(script)).as_map_key();
Ok(self.map.get(&key).map(|b| {
let mut val: serde_json::Value = b.downcast_ref().cloned().unwrap();
@@ -330,13 +386,14 @@ impl Database for MemoryDatabase {
}))
}
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>, Error> {
let key = MapKey::UTXO(Some(outpoint)).as_map_key();
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error> {
let key = MapKey::Utxo(Some(outpoint)).as_map_key();
Ok(self.map.get(&key).map(|b| {
let txout = b.downcast_ref().cloned().unwrap();
UTXO {
outpoint: outpoint.clone(),
let (txout, keychain) = b.downcast_ref().cloned().unwrap();
LocalUtxo {
outpoint: *outpoint,
txout,
keychain,
}
}))
}
@@ -354,31 +411,43 @@ impl Database for MemoryDatabase {
Ok(self.map.get(&key).map(|b| {
let mut txdetails: TransactionDetails = b.downcast_ref().cloned().unwrap();
if include_raw {
txdetails.transaction = self.get_raw_tx(&txid).unwrap();
txdetails.transaction = self.get_raw_tx(txid).unwrap();
}
txdetails
}))
}
fn get_last_index(&self, script_type: ScriptType) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
Ok(self.map.get(&key).map(|b| *b.downcast_ref().unwrap()))
}
fn get_sync_time(&self) -> Result<Option<SyncTime>, Error> {
let key = MapKey::SyncTime.as_map_key();
Ok(self
.map
.get(&key)
.map(|b| b.downcast_ref().cloned().unwrap()))
}
// inserts 0 if not present
fn increment_last_index(&mut self, script_type: ScriptType) -> Result<u32, Error> {
let key = MapKey::LastIndex(script_type).as_map_key();
fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error> {
let key = MapKey::LastIndex(keychain).as_map_key();
let value = self
.map
.entry(key.clone())
.entry(key)
.and_modify(|x| *x.downcast_mut::<u32>().unwrap() += 1)
.or_insert(Box::<u32>::new(0))
.or_insert_with(|| Box::<u32>::new(0))
.downcast_mut()
.unwrap();
Ok(*value)
}
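`increment_last_index` above relies on the `BTreeMap::entry` API over type-erased `Box<dyn Any + Send + Sync>` values: increment in place if present, insert `0` otherwise. A self-contained sketch of that counter pattern:

```rust
use std::any::Any;
use std::collections::BTreeMap;

// Counter over type-erased values: `and_modify` increments an existing
// u32 in place, `or_insert_with` inserts 0 on first access, and
// `downcast_mut` recovers the concrete type from `dyn Any`.
fn increment(map: &mut BTreeMap<Vec<u8>, Box<dyn Any + Send + Sync>>, key: Vec<u8>) -> u32 {
    let value = map
        .entry(key)
        .and_modify(|x| *x.downcast_mut::<u32>().unwrap() += 1)
        .or_insert_with(|| Box::<u32>::new(0))
        .downcast_mut::<u32>()
        .unwrap();
    *value
}

fn main() {
    let mut map: BTreeMap<Vec<u8>, Box<dyn Any + Send + Sync>> = BTreeMap::new();
    assert_eq!(increment(&mut map, b"c".to_vec()), 0); // first call inserts 0
    assert_eq!(increment(&mut map, b"c".to_vec()), 1); // subsequent calls increment
    assert_eq!(increment(&mut map, b"c".to_vec()), 2);
    println!("ok");
}
```

Note that `or_insert_with(|| ...)` is preferred over `or_insert(Box::new(0))` because the latter allocates the box even when the entry already exists, which is exactly the clippy-driven change in the hunk above.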
fn flush(&mut self) -> Result<(), Error> {
Ok(())
}
}
impl BatchDatabase for MemoryDatabase {
@@ -389,24 +458,116 @@ impl BatchDatabase for MemoryDatabase {
}
fn commit_batch(&mut self, mut batch: Self::Batch) -> Result<(), Error> {
for key in batch.deleted_keys {
self.map.remove(&key);
for key in batch.deleted_keys.iter() {
self.map.remove(key);
}
self.map.append(&mut batch.map);
Ok(())
}
}
impl ConfigurableDatabase for MemoryDatabase {
type Config = ();
fn from_config(_config: &Self::Config) -> Result<Self, Error> {
Ok(MemoryDatabase::default())
}
}
#[macro_export]
#[doc(hidden)]
/// Artificially insert a tx in the database, as if we had found it with a `sync`. This is a hidden
/// macro and not a `#[cfg(test)]` function so it can be called from doctests, which
/// don't have the `test` cfg set.
macro_rules! populate_test_db {
($db:expr, $tx_meta:expr, $current_height:expr$(,)?) => {{
use std::str::FromStr;
use $crate::database::BatchOperations;
let mut db = $db;
let tx_meta = $tx_meta;
let current_height: Option<u32> = $current_height;
let tx = $crate::bitcoin::Transaction {
version: 1,
lock_time: 0,
input: vec![],
output: tx_meta
.output
.iter()
.map(|out_meta| $crate::bitcoin::TxOut {
value: out_meta.value,
script_pubkey: $crate::bitcoin::Address::from_str(&out_meta.to_address)
.unwrap()
.script_pubkey(),
})
.collect(),
};
let txid = tx.txid();
let confirmation_time = tx_meta.min_confirmations.map(|conf| $crate::BlockTime {
height: current_height.unwrap().checked_sub(conf as u32).unwrap(),
timestamp: 0,
});
let tx_details = $crate::TransactionDetails {
transaction: Some(tx.clone()),
txid,
fee: Some(0),
received: 0,
sent: 0,
confirmation_time,
verified: current_height.is_some(),
};
db.set_tx(&tx_details).unwrap();
for (vout, out) in tx.output.iter().enumerate() {
db.set_utxo(&$crate::LocalUtxo {
txout: out.clone(),
outpoint: $crate::bitcoin::OutPoint {
txid,
vout: vout as u32,
},
keychain: $crate::KeychainKind::External,
})
.unwrap();
}
Ok(self.map.append(&mut batch.map))
}
txid
}};
}
#[macro_export]
#[doc(hidden)]
/// Macro for getting a wallet for use in a doctest
macro_rules! doctest_wallet {
() => {{
use $crate::bitcoin::Network;
use $crate::database::MemoryDatabase;
use $crate::testutils;
let descriptor = "wpkh(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW)";
let descriptors = testutils!(@descriptors (descriptor) (descriptor));
let mut db = MemoryDatabase::new();
let txid = populate_test_db!(
&mut db,
testutils! {
@tx ( (@external descriptors, 0) => 500_000 ) (@confirmations 1)
},
Some(100),
);
$crate::Wallet::new_offline(
&descriptors.0,
descriptors.1.as_ref(),
Network::Regtest,
db
)
.unwrap()
}}
}
#[cfg(test)]
mod test {
use std::str::FromStr;
use bitcoin::consensus::encode::deserialize;
use bitcoin::hashes::hex::*;
use bitcoin::*;
use super::*;
use crate::database::*;
use super::MemoryDatabase;
fn get_tree() -> MemoryDatabase {
MemoryDatabase::new()
@@ -414,209 +575,46 @@ mod test {
#[test]
fn test_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((script_type, path.clone()))
);
crate::database::test::test_script_pubkey(get_tree());
}
#[test]
fn test_batch_script_pubkey() {
let mut tree = get_tree();
let mut batch = tree.begin_batch();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
batch.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
None
);
assert_eq!(tree.get_path_from_script_pubkey(&script).unwrap(), None);
tree.commit_batch(batch).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(script_type, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((script_type, path))
);
crate::database::test::test_batch_script_pubkey(get_tree());
}
#[test]
fn test_iter_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
crate::database::test::test_iter_script_pubkey(get_tree());
}
#[test]
fn test_del_script_pubkey() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
tree.del_script_pubkey_from_path(script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 0);
}
#[test]
fn test_del_script_pubkey_batch() {
let mut tree = get_tree();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let script_type = ScriptType::External;
tree.set_script_pubkey(&script, script_type, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
let mut batch = tree.begin_batch();
batch
.del_script_pubkey_from_path(script_type, path)
.unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
tree.commit_batch(batch).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 0);
crate::database::test::test_del_script_pubkey(get_tree());
}
#[test]
fn test_utxo() {
let mut tree = get_tree();
let outpoint = OutPoint::from_str(
"5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456:0",
)
.unwrap();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let txout = TxOut {
value: 133742,
script_pubkey: script,
};
let utxo = UTXO { txout, outpoint };
tree.set_utxo(&utxo).unwrap();
assert_eq!(tree.get_utxo(&outpoint).unwrap(), Some(utxo));
crate::database::test::test_utxo(get_tree());
}
#[test]
fn test_raw_tx() {
let mut tree = get_tree();
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
tree.set_raw_tx(&tx).unwrap();
let txid = tx.txid();
assert_eq!(tree.get_raw_tx(&txid).unwrap(), Some(tx));
crate::database::test::test_raw_tx(get_tree());
}
#[test]
fn test_tx() {
let mut tree = get_tree();
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
let txid = tx.txid();
let mut tx_details = TransactionDetails {
transaction: Some(tx),
txid,
timestamp: 123456,
received: 1337,
sent: 420420,
height: Some(1000),
};
tree.set_tx(&tx_details).unwrap();
// get with raw tx too
assert_eq!(
tree.get_tx(&tx_details.txid, true).unwrap(),
Some(tx_details.clone())
);
// get only raw_tx
assert_eq!(
tree.get_raw_tx(&tx_details.txid).unwrap(),
tx_details.transaction
);
// now get without raw_tx
tx_details.transaction = None;
assert_eq!(
tree.get_tx(&tx_details.txid, false).unwrap(),
Some(tx_details)
);
crate::database::test::test_tx(get_tree());
}
#[test]
fn test_last_index() {
let mut tree = get_tree();
tree.set_last_index(ScriptType::External, 1337).unwrap();
assert_eq!(
tree.get_last_index(ScriptType::External).unwrap(),
Some(1337)
);
assert_eq!(tree.get_last_index(ScriptType::Internal).unwrap(), None);
let res = tree.increment_last_index(ScriptType::External).unwrap();
assert_eq!(res, 1338);
let res = tree.increment_last_index(ScriptType::Internal).unwrap();
assert_eq!(res, 0);
assert_eq!(
tree.get_last_index(ScriptType::External).unwrap(),
Some(1338)
);
assert_eq!(tree.get_last_index(ScriptType::Internal).unwrap(), Some(0));
crate::database::test::test_last_index(get_tree());
}
// TODO: more tests...
#[test]
fn test_sync_time() {
crate::database::test::test_sync_time(get_tree());
}
}


@@ -1,104 +1,213 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Database types
//!
//! This module provides the implementation of some default database types, along with traits that
//! can be implemented externally to let [`Wallet`]s use customized databases.
//!
//! It's important to note that the databases defined here only contain "blockchain-related" data.
//! They can be seen more as a cache than a critical piece of storage that contains secrets and
//! keys.
//!
//! The currently recommended database is [`sled`], which is a pretty simple key-value embedded
//! database written in Rust. If the `key-value-db` feature is enabled (which it is by default),
//! this library automatically implements all the required traits for [`sled::Tree`].
//!
//! [`Wallet`]: crate::wallet::Wallet
use serde::{Deserialize, Serialize};
use bitcoin::hash_types::Txid;
use bitcoin::{OutPoint, Script, Transaction, TxOut};
use crate::error::Error;
use crate::types::*;
#[cfg(any(feature = "key-value-db", feature = "default"))]
pub mod keyvalue;
pub mod memory;
pub mod any;
pub use any::{AnyDatabase, AnyDatabaseConfig};
#[cfg(feature = "key-value-db")]
pub(crate) mod keyvalue;
#[cfg(feature = "sqlite")]
pub(crate) mod sqlite;
#[cfg(feature = "sqlite")]
pub use sqlite::SqliteDatabase;
pub mod memory;
pub use memory::MemoryDatabase;
/// Blockchain state at the time of syncing
///
/// Contains only the block time and height at the moment
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct SyncTime {
/// Block timestamp and height at the time of sync
pub block_time: BlockTime,
}
/// Trait for operations that can be batched
///
/// This trait defines the list of operations that must be implemented on the [`Database`] type and
/// the [`BatchDatabase::Batch`] type.
pub trait BatchOperations {
/// Store a script_pubkey along with its keychain and child number.
fn set_script_pubkey(
&mut self,
script: &Script,
script_type: ScriptType,
keychain: KeychainKind,
child: u32,
) -> Result<(), Error>;
fn set_utxo(&mut self, utxo: &UTXO) -> Result<(), Error>;
/// Store a [`LocalUtxo`]
fn set_utxo(&mut self, utxo: &LocalUtxo) -> Result<(), Error>;
/// Store a raw transaction
fn set_raw_tx(&mut self, transaction: &Transaction) -> Result<(), Error>;
/// Store the metadata of a transaction
fn set_tx(&mut self, transaction: &TransactionDetails) -> Result<(), Error>;
fn set_last_index(&mut self, script_type: ScriptType, value: u32) -> Result<(), Error>;
/// Store the last derivation index for a given keychain.
fn set_last_index(&mut self, keychain: KeychainKind, value: u32) -> Result<(), Error>;
/// Store the sync time
fn set_sync_time(&mut self, sync_time: SyncTime) -> Result<(), Error>;
/// Delete a script_pubkey given the keychain and its child number.
fn del_script_pubkey_from_path(
&mut self,
script_type: ScriptType,
keychain: KeychainKind,
child: u32,
) -> Result<Option<Script>, Error>;
/// Delete the data related to a specific script_pubkey, meaning the keychain and the child
/// number.
fn del_path_from_script_pubkey(
&mut self,
script: &Script,
) -> Result<Option<(ScriptType, u32)>, Error>;
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<UTXO>, Error>;
) -> Result<Option<(KeychainKind, u32)>, Error>;
/// Delete a [`LocalUtxo`] given its [`OutPoint`]
fn del_utxo(&mut self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error>;
/// Delete a raw transaction given its [`Txid`]
fn del_raw_tx(&mut self, txid: &Txid) -> Result<Option<Transaction>, Error>;
/// Delete the metadata of a transaction and optionally the raw transaction itself
fn del_tx(
&mut self,
txid: &Txid,
include_raw: bool,
) -> Result<Option<TransactionDetails>, Error>;
fn del_last_index(&mut self, script_type: ScriptType) -> Result<Option<u32>, Error>;
/// Delete the last derivation index for a keychain.
fn del_last_index(&mut self, keychain: KeychainKind) -> Result<Option<u32>, Error>;
/// Reset the sync time to `None`
///
/// Returns the removed value
fn del_sync_time(&mut self) -> Result<Option<SyncTime>, Error>;
}
/// Trait for reading data from a database
///
/// This trait defines the operations that can be used to read data out of a database
pub trait Database: BatchOperations {
/// Read and check the descriptor checksum for a given keychain.
///
/// Should return [`Error::ChecksumMismatch`](crate::error::Error::ChecksumMismatch) if the
/// checksum doesn't match. If there's no checksum in the database, simply store it for the
/// next time.
fn check_descriptor_checksum<B: AsRef<[u8]>>(
&mut self,
script_type: ScriptType,
keychain: KeychainKind,
bytes: B,
) -> Result<(), Error>;
fn iter_script_pubkeys(&self, script_type: Option<ScriptType>) -> Result<Vec<Script>, Error>;
fn iter_utxos(&self) -> Result<Vec<UTXO>, Error>;
/// Return the list of script_pubkeys
fn iter_script_pubkeys(&self, keychain: Option<KeychainKind>) -> Result<Vec<Script>, Error>;
/// Return the list of [`LocalUtxo`]s
fn iter_utxos(&self) -> Result<Vec<LocalUtxo>, Error>;
/// Return the list of raw transactions
fn iter_raw_txs(&self) -> Result<Vec<Transaction>, Error>;
/// Return the list of transactions metadata
fn iter_txs(&self, include_raw: bool) -> Result<Vec<TransactionDetails>, Error>;
/// Fetch a script_pubkey given the child number of a keychain.
fn get_script_pubkey_from_path(
&self,
keychain: KeychainKind,
child: u32,
) -> Result<Option<Script>, Error>;
/// Fetch the keychain and child number of a given script_pubkey
fn get_path_from_script_pubkey(
&self,
script: &Script,
) -> Result<Option<(KeychainKind, u32)>, Error>;
/// Fetch a [`LocalUtxo`] given its [`OutPoint`]
fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<LocalUtxo>, Error>;
/// Fetch a raw transaction given its [`Txid`]
fn get_raw_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error>;
/// Fetch the transaction metadata and optionally also the raw transaction
fn get_tx(&self, txid: &Txid, include_raw: bool) -> Result<Option<TransactionDetails>, Error>;
/// Return the last derivation index for a keychain.
fn get_last_index(&self, keychain: KeychainKind) -> Result<Option<u32>, Error>;
/// Return the sync time, if present
fn get_sync_time(&self) -> Result<Option<SyncTime>, Error>;
/// Increment the last derivation index for a keychain and return it
///
/// It should insert and return `0` if not present in the database
fn increment_last_index(&mut self, keychain: KeychainKind) -> Result<u32, Error>;
/// Force changes to be written to disk
fn flush(&mut self) -> Result<(), Error>;
}
/// Trait for a database that supports batch operations
///
/// This trait defines the methods to start and apply a batch of operations.
pub trait BatchDatabase: Database {
/// Container for the operations
type Batch: BatchOperations;
/// Create a new batch container
fn begin_batch(&self) -> Self::Batch;
/// Consume and apply a batch of operations
fn commit_batch(&mut self, batch: Self::Batch) -> Result<(), Error>;
}
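A batch is a buffer of writes that only becomes visible once `commit_batch` applies it atomically. A minimal self-contained sketch of that begin/commit pattern (toy traits and a `HashMap`-backed store, not BDK's actual types):

```rust
use std::collections::HashMap;

// Toy stand-ins for the real traits: a batch collects writes,
// the database applies them all at once on commit.
trait BatchOperations {
    fn set_last_index(&mut self, keychain: &str, index: u32);
}

trait Database: BatchOperations {
    fn get_last_index(&self, keychain: &str) -> Option<u32>;
}

trait BatchDatabase: Database {
    type Batch: BatchOperations;
    fn begin_batch(&self) -> Self::Batch;
    fn commit_batch(&mut self, batch: Self::Batch);
}

#[derive(Default)]
struct MemoryDb {
    last_index: HashMap<String, u32>,
}

impl BatchOperations for MemoryDb {
    fn set_last_index(&mut self, keychain: &str, index: u32) {
        self.last_index.insert(keychain.to_string(), index);
    }
}

impl Database for MemoryDb {
    fn get_last_index(&self, keychain: &str) -> Option<u32> {
        self.last_index.get(keychain).copied()
    }
}

impl BatchDatabase for MemoryDb {
    type Batch = MemoryDb;
    fn begin_batch(&self) -> MemoryDb {
        MemoryDb::default()
    }
    fn commit_batch(&mut self, batch: MemoryDb) {
        // apply all buffered writes in one go
        for (k, v) in batch.last_index {
            self.last_index.insert(k, v);
        }
    }
}

fn main() {
    let mut db = MemoryDb::default();
    let mut batch = db.begin_batch();
    batch.set_last_index("external", 7);
    assert_eq!(db.get_last_index("external"), None); // not visible yet
    db.commit_batch(batch);
    assert_eq!(db.get_last_index("external"), Some(7)); // visible after commit
}
```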
/// Trait for [`Database`] types that can be created given a configuration
pub trait ConfigurableDatabase: Database + Sized {
/// Type that contains the configuration
type Config: std::fmt::Debug;
/// Create a new instance given a configuration
fn from_config(config: &Self::Config) -> Result<Self, Error>;
}
pub(crate) trait DatabaseUtils: Database {
fn is_mine(&self, script: &Script) -> Result<bool, Error> {
self.get_path_from_script_pubkey(script)
.map(|o| o.is_some())
}
fn get_raw_tx_or<D>(&self, txid: &Txid, default: D) -> Result<Option<Transaction>, Error>
where
D: FnOnce() -> Result<Option<Transaction>, Error>,
{
self.get_tx(txid, true)?
.map(|t| t.transaction)
.flatten()
.map_or_else(default, |t| Ok(Some(t)))
}
fn get_previous_output(&self, outpoint: &OutPoint) -> Result<Option<TxOut>, Error> {
self.get_raw_tx(&outpoint.txid)?
.map(|previous_tx| {
if outpoint.vout as usize >= previous_tx.output.len() {
Err(Error::InvalidOutpoint(*outpoint))
} else {
Ok(previous_tx.output[outpoint.vout as usize].clone())
}
})
.transpose()
@@ -106,3 +215,206 @@ pub trait DatabaseUtils: Database {
}
impl<T: Database> DatabaseUtils for T {}
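`DatabaseUtils` relies on default methods plus a blanket impl, so every `Database` implementation gains the helpers for free. A reduced sketch of the pattern (toy `Store` trait with hypothetical names, mirroring the closure-fallback shape of `get_raw_tx_or`):

```rust
use std::collections::HashMap;

trait Store {
    fn get(&self, key: &str) -> Option<i32>;
}

trait StoreUtils: Store {
    // Like get_raw_tx_or: use the stored value, else fall back to a closure.
    fn get_or<D: FnOnce() -> Option<i32>>(&self, key: &str, default: D) -> Option<i32> {
        self.get(key).or_else(default)
    }
}

// One line gives the helper methods to every implementor of `Store`.
impl<T: Store> StoreUtils for T {}

struct MapStore(HashMap<String, i32>);

impl Store for MapStore {
    fn get(&self, key: &str) -> Option<i32> {
        self.0.get(key).copied()
    }
}

fn main() {
    let store = MapStore(HashMap::from([("a".to_string(), 1)]));
    assert_eq!(store.get_or("a", || Some(99)), Some(1)); // hit: fallback unused
    assert_eq!(store.get_or("b", || Some(99)), Some(99)); // miss: fallback runs
}
```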
#[cfg(test)]
pub mod test {
use std::str::FromStr;
use bitcoin::consensus::encode::deserialize;
use bitcoin::hashes::hex::*;
use bitcoin::*;
use super::*;
pub fn test_script_pubkey<D: Database>(mut tree: D) {
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let keychain = KeychainKind::External;
tree.set_script_pubkey(&script, keychain, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(keychain, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((keychain, path))
);
}
pub fn test_batch_script_pubkey<D: BatchDatabase>(mut tree: D) {
let mut batch = tree.begin_batch();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let keychain = KeychainKind::External;
batch.set_script_pubkey(&script, keychain, path).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(keychain, path).unwrap(),
None
);
assert_eq!(tree.get_path_from_script_pubkey(&script).unwrap(), None);
tree.commit_batch(batch).unwrap();
assert_eq!(
tree.get_script_pubkey_from_path(keychain, path).unwrap(),
Some(script.clone())
);
assert_eq!(
tree.get_path_from_script_pubkey(&script).unwrap(),
Some((keychain, path))
);
}
pub fn test_iter_script_pubkey<D: Database>(mut tree: D) {
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let keychain = KeychainKind::External;
tree.set_script_pubkey(&script, keychain, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
}
pub fn test_del_script_pubkey<D: Database>(mut tree: D) {
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let path = 42;
let keychain = KeychainKind::External;
tree.set_script_pubkey(&script, keychain, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 1);
tree.del_script_pubkey_from_path(keychain, path).unwrap();
assert_eq!(tree.iter_script_pubkeys(None).unwrap().len(), 0);
}
pub fn test_utxo<D: Database>(mut tree: D) {
let outpoint = OutPoint::from_str(
"5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456:0",
)
.unwrap();
let script = Script::from(
Vec::<u8>::from_hex("76a91402306a7c23f3e8010de41e9e591348bb83f11daa88ac").unwrap(),
);
let txout = TxOut {
value: 133742,
script_pubkey: script,
};
let utxo = LocalUtxo {
txout,
outpoint,
keychain: KeychainKind::External,
};
tree.set_utxo(&utxo).unwrap();
assert_eq!(tree.get_utxo(&outpoint).unwrap(), Some(utxo));
}
pub fn test_raw_tx<D: Database>(mut tree: D) {
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
tree.set_raw_tx(&tx).unwrap();
let txid = tx.txid();
assert_eq!(tree.get_raw_tx(&txid).unwrap(), Some(tx));
}
pub fn test_tx<D: Database>(mut tree: D) {
let hex_tx = Vec::<u8>::from_hex("0100000001a15d57094aa7a21a28cb20b59aab8fc7d1149a3bdbcddba9c622e4f5f6a99ece010000006c493046022100f93bb0e7d8db7bd46e40132d1f8242026e045f03a0efe71bbb8e3f475e970d790221009337cd7f1f929f00cc6ff01f03729b069a7c21b59b1736ddfee5db5946c5da8c0121033b9b137ee87d5a812d6f506efdd37f0affa7ffc310711c06c7f3e097c9447c52ffffffff0100e1f505000000001976a9140389035a9225b3839e2bbf32d826a1e222031fd888ac00000000").unwrap();
let tx: Transaction = deserialize(&hex_tx).unwrap();
let txid = tx.txid();
let mut tx_details = TransactionDetails {
transaction: Some(tx),
txid,
received: 1337,
sent: 420420,
fee: Some(140),
confirmation_time: Some(BlockTime {
timestamp: 123456,
height: 1000,
}),
verified: true,
};
tree.set_tx(&tx_details).unwrap();
// get with raw tx too
assert_eq!(
tree.get_tx(&tx_details.txid, true).unwrap(),
Some(tx_details.clone())
);
// get only raw_tx
assert_eq!(
tree.get_raw_tx(&tx_details.txid).unwrap(),
tx_details.transaction
);
// now get without raw_tx
tx_details.transaction = None;
assert_eq!(
tree.get_tx(&tx_details.txid, false).unwrap(),
Some(tx_details)
);
}
pub fn test_last_index<D: Database>(mut tree: D) {
tree.set_last_index(KeychainKind::External, 1337).unwrap();
assert_eq!(
tree.get_last_index(KeychainKind::External).unwrap(),
Some(1337)
);
assert_eq!(tree.get_last_index(KeychainKind::Internal).unwrap(), None);
let res = tree.increment_last_index(KeychainKind::External).unwrap();
assert_eq!(res, 1338);
let res = tree.increment_last_index(KeychainKind::Internal).unwrap();
assert_eq!(res, 0);
assert_eq!(
tree.get_last_index(KeychainKind::External).unwrap(),
Some(1338)
);
assert_eq!(
tree.get_last_index(KeychainKind::Internal).unwrap(),
Some(0)
);
}
pub fn test_sync_time<D: Database>(mut tree: D) {
assert!(tree.get_sync_time().unwrap().is_none());
tree.set_sync_time(SyncTime {
block_time: BlockTime {
height: 100,
timestamp: 1000,
},
})
.unwrap();
let extracted = tree.get_sync_time().unwrap();
assert!(extracted.is_some());
assert_eq!(extracted.as_ref().unwrap().block_time.height, 100);
assert_eq!(extracted.as_ref().unwrap().block_time.timestamp, 1000);
tree.del_sync_time().unwrap();
assert!(tree.get_sync_time().unwrap().is_none());
}
// TODO: more tests...
}

src/database/sqlite.rs (new file; diff suppressed because it is too large)

@@ -1,6 +1,22 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Descriptor checksum
//!
//! This module contains a re-implementation of the function used by Bitcoin Core to calculate the
//! checksum of a descriptor
use std::iter::FromIterator;
use crate::descriptor::Error;
use crate::descriptor::DescriptorError;
const INPUT_CHARSET: &str = "0123456789()[],'/*abcdefgh@:$%{}IJKLMNOPQRSTUVWXYZ&+-.;<=>?!^_|~ijklmnopqrstuvwxyzABCDEFGH`#\"\\ ";
const CHECKSUM_CHARSET: &str = "qpzry9x8gf2tvdw0s3jn54khce6mua7l";
@@ -27,14 +43,15 @@ fn poly_mod(mut c: u64, val: u64) -> u64 {
c
}
pub fn get_checksum(desc: &str) -> Result<String, Error> {
/// Compute the checksum of a descriptor
pub fn get_checksum(desc: &str) -> Result<String, DescriptorError> {
let mut c = 1;
let mut cls = 0;
let mut clscount = 0;
for ch in desc.chars() {
let pos = INPUT_CHARSET
.find(ch)
.ok_or(Error::InvalidDescriptorCharacter(ch))? as u64;
.ok_or(DescriptorError::InvalidDescriptorCharacter(ch))? as u64;
c = poly_mod(c, pos & 31);
cls = cls * 3 + (pos >> 5);
clscount += 1;
@@ -62,3 +79,35 @@ pub fn get_checksum(desc: &str) -> Result<String, Error> {
Ok(String::from_iter(chars))
}
#[cfg(test)]
mod test {
use super::*;
use crate::descriptor::get_checksum;
// test get_checksum() function; it should return the same value as Bitcoin Core
#[test]
fn test_get_checksum() {
let desc = "wpkh(tprv8ZgxMBicQKsPdpkqS7Eair4YxjcuuvDPNYmKX3sCniCf16tHEVrjjiSXEkFRnUH77yXc6ZcwHHcLNfjdi5qUvw3VDfgYiH5mNsj5izuiu2N/1/2/*)";
assert_eq!(get_checksum(desc).unwrap(), "tqz0nc62");
let desc = "pkh(tpubD6NzVbkrYhZ4XHndKkuB8FifXm8r5FQHwrN6oZuWCz13qb93rtgKvD4PQsqC4HP4yhV3tA2fqr2RbY5mNXfM7RxXUoeABoDtsFUq2zJq6YK/44'/1'/0'/0/*)";
assert_eq!(get_checksum(desc).unwrap(), "lasegmfs");
}
#[test]
fn test_get_checksum_invalid_character() {
let sparkle_heart = vec![240, 159, 146, 150];
let sparkle_heart = std::str::from_utf8(&sparkle_heart)
.unwrap()
.chars()
.next()
.unwrap();
let invalid_desc = format!("wpkh(tprv8ZgxMBicQKsPdpkqS7Eair4YxjcuuvDPNYmKX3sCniCf16tHEVrjjiSXEkFRnUH77yXc6ZcwHHcL{}fjdi5qUvw3VDfgYiH5mNsj5izuiu2N/1/2/*)", sparkle_heart);
assert!(matches!(
get_checksum(&invalid_desc).err(),
Some(DescriptorError::InvalidDescriptorCharacter(invalid_char)) if invalid_char == sparkle_heart
));
}
}
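For reference, the two hunks above can be assembled into a complete, self-contained version of the algorithm; the charsets and generator constants are the standard ones shared with Bitcoin Core's descriptor checksum (error handling reduced to `Option` here for brevity):

```rust
const INPUT_CHARSET: &str = "0123456789()[],'/*abcdefgh@:$%{}IJKLMNOPQRSTUVWXYZ&+-.;<=>?!^_|~ijklmnopqrstuvwxyzABCDEFGH`#\"\\ ";
const CHECKSUM_CHARSET: &str = "qpzry9x8gf2tvdw0s3jn54khce6mua7l";

fn poly_mod(mut c: u64, val: u64) -> u64 {
    let c0 = c >> 35;
    c = ((c & 0x7ffffffff) << 5) ^ val;
    if c0 & 1 > 0 { c ^= 0xf5dee51989; }
    if c0 & 2 > 0 { c ^= 0xa9fdca3312; }
    if c0 & 4 > 0 { c ^= 0x1bab10e32d; }
    if c0 & 8 > 0 { c ^= 0x3706b1677a; }
    if c0 & 16 > 0 { c ^= 0x644d626ffd; }
    c
}

// Returns None if the descriptor contains a character outside INPUT_CHARSET.
fn checksum(desc: &str) -> Option<String> {
    let mut c = 1u64;
    let mut cls = 0u64;
    let mut clscount = 0;
    for ch in desc.chars() {
        // INPUT_CHARSET is all-ASCII, so the byte index is the symbol value.
        let pos = INPUT_CHARSET.find(ch)? as u64;
        c = poly_mod(c, pos & 31);
        cls = cls * 3 + (pos >> 5);
        clscount += 1;
        if clscount == 3 {
            // Feed the accumulated group of high bits into the checksum.
            c = poly_mod(c, cls);
            cls = 0;
            clscount = 0;
        }
    }
    if clscount > 0 {
        c = poly_mod(c, cls);
    }
    for _ in 0..8 {
        c = poly_mod(c, 0); // shift in the 8 checksum positions
    }
    c ^= 1;
    (0..8)
        .map(|i| CHECKSUM_CHARSET.chars().nth(((c >> (5 * (7 - i))) & 31) as usize))
        .collect()
}

fn main() {
    // Same vectors as the tests above (values match Bitcoin Core).
    let desc = "wpkh(tprv8ZgxMBicQKsPdpkqS7Eair4YxjcuuvDPNYmKX3sCniCf16tHEVrjjiSXEkFRnUH77yXc6ZcwHHcLNfjdi5qUvw3VDfgYiH5mNsj5izuiu2N/1/2/*)";
    assert_eq!(checksum(desc).as_deref(), Some("tqz0nc62"));
}
```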

src/descriptor/derived.rs (new file)

@@ -0,0 +1,150 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Derived descriptor keys
use std::cmp::Ordering;
use std::fmt;
use std::hash::{Hash, Hasher};
use std::ops::Deref;
use bitcoin::hashes::hash160;
use bitcoin::PublicKey;
pub use miniscript::{
descriptor::KeyMap, descriptor::Wildcard, Descriptor, DescriptorPublicKey, Legacy, Miniscript,
ScriptContext, Segwitv0,
};
use miniscript::{MiniscriptKey, ToPublicKey, TranslatePk};
use crate::wallet::utils::SecpCtx;
/// Extended [`DescriptorPublicKey`] that has been derived
///
/// Derived keys are guaranteed to never contain wildcards of any kind
#[derive(Debug, Clone)]
pub struct DerivedDescriptorKey<'s>(DescriptorPublicKey, &'s SecpCtx);
impl<'s> DerivedDescriptorKey<'s> {
/// Construct a new derived key
///
/// Panics if the key contains a wildcard
pub fn new(key: DescriptorPublicKey, secp: &'s SecpCtx) -> DerivedDescriptorKey<'s> {
if let DescriptorPublicKey::XPub(xpub) = &key {
assert!(xpub.wildcard == Wildcard::None)
}
DerivedDescriptorKey(key, secp)
}
}
impl<'s> Deref for DerivedDescriptorKey<'s> {
type Target = DescriptorPublicKey;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<'s> PartialEq for DerivedDescriptorKey<'s> {
fn eq(&self, other: &Self) -> bool {
self.0 == other.0
}
}
impl<'s> Eq for DerivedDescriptorKey<'s> {}
impl<'s> PartialOrd for DerivedDescriptorKey<'s> {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
self.0.partial_cmp(&other.0)
}
}
impl<'s> Ord for DerivedDescriptorKey<'s> {
fn cmp(&self, other: &Self) -> Ordering {
self.0.cmp(&other.0)
}
}
impl<'s> fmt::Display for DerivedDescriptorKey<'s> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.0.fmt(f)
}
}
impl<'s> Hash for DerivedDescriptorKey<'s> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.0.hash(state);
}
}
impl<'s> MiniscriptKey for DerivedDescriptorKey<'s> {
type Hash = Self;
fn to_pubkeyhash(&self) -> Self::Hash {
DerivedDescriptorKey(self.0.to_pubkeyhash(), self.1)
}
fn is_uncompressed(&self) -> bool {
self.0.is_uncompressed()
}
fn serialized_len(&self) -> usize {
self.0.serialized_len()
}
}
impl<'s> ToPublicKey for DerivedDescriptorKey<'s> {
fn to_public_key(&self) -> PublicKey {
match &self.0 {
DescriptorPublicKey::SinglePub(ref spub) => spub.key.to_public_key(),
DescriptorPublicKey::XPub(ref xpub) => {
xpub.xkey
.derive_pub(self.1, &xpub.derivation_path)
.expect("Shouldn't fail, only normal derivations")
.public_key
}
}
}
fn hash_to_hash160(hash: &Self::Hash) -> hash160::Hash {
hash.to_public_key().to_pubkeyhash()
}
}
pub(crate) trait AsDerived {
// Derive a descriptor and transform all of its keys to `DerivedDescriptorKey`
fn as_derived<'s>(&self, index: u32, secp: &'s SecpCtx)
-> Descriptor<DerivedDescriptorKey<'s>>;
// Transform the keys into `DerivedDescriptorKey`.
//
// Panics if the descriptor is not "fixed", i.e. if it's derivable
fn as_derived_fixed<'s>(&self, secp: &'s SecpCtx) -> Descriptor<DerivedDescriptorKey<'s>>;
}
impl AsDerived for Descriptor<DescriptorPublicKey> {
fn as_derived<'s>(
&self,
index: u32,
secp: &'s SecpCtx,
) -> Descriptor<DerivedDescriptorKey<'s>> {
self.derive(index).translate_pk_infallible(
|key| DerivedDescriptorKey::new(key.clone(), secp),
|key| DerivedDescriptorKey::new(key.clone(), secp),
)
}
fn as_derived_fixed<'s>(&self, secp: &'s SecpCtx) -> Descriptor<DerivedDescriptorKey<'s>> {
assert!(!self.is_deriveable());
self.as_derived(0, secp)
}
}

src/descriptor/dsl.rs (new file; diff suppressed because it is too large)

@@ -1,37 +1,69 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Descriptor errors
/// Errors related to the parsing and usage of descriptors
#[derive(Debug)]
pub enum Error {
/// Invalid HD Key path, such as having a wildcard but a length != 1
InvalidHdKeyPath,
/// The provided descriptor doesn't match its checksum
InvalidDescriptorChecksum,
/// The descriptor contains hardened derivation steps on public extended keys
HardenedDerivationXpub,
/// The descriptor contains multiple keys with the same BIP32 fingerprint
DuplicatedKeys,
/// Error thrown while working with [`keys`](crate::keys)
Key(crate::keys::KeyError),
/// Error while extracting and manipulating policies
Policy(crate::descriptor::policy::PolicyError),
/// Invalid character found in the descriptor checksum
InvalidDescriptorCharacter(char),
/// BIP32 error
Bip32(bitcoin::util::bip32::Error),
/// Error during base58 decoding
Base58(bitcoin::util::base58::Error),
/// Key-related error
Pk(bitcoin::util::key::Error),
/// Miniscript error
Miniscript(miniscript::Error),
/// Hex decoding error
Hex(bitcoin::hashes::hex::Error),
}
impl From<crate::keys::KeyError> for Error {
fn from(key_error: crate::keys::KeyError) -> Error {
match key_error {
crate::keys::KeyError::Miniscript(inner) => Error::Miniscript(inner),
crate::keys::KeyError::Bip32(inner) => Error::Bip32(inner),
e => Error::Key(e),
}
}
}
impl std::fmt::Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for Error {}
impl_error!(bitcoin::util::bip32::Error, Bip32);
impl_error!(bitcoin::util::base58::Error, Base58);
impl_error!(bitcoin::util::key::Error, Pk);
impl_error!(miniscript::Error, Miniscript);
impl_error!(bitcoin::hashes::hex::Error, Hex);
impl_error!(crate::descriptor::policy::PolicyError, Policy);


@@ -1,372 +0,0 @@
use std::fmt::{self, Display};
use std::str::FromStr;
use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::secp256k1;
use bitcoin::util::base58;
use bitcoin::util::bip32::{
ChildNumber, DerivationPath, ExtendedPrivKey, ExtendedPubKey, Fingerprint,
};
use bitcoin::PublicKey;
#[allow(unused_imports)]
use log::{debug, error, info, trace};
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub enum DerivationIndex {
Fixed,
Normal,
Hardened,
}
impl DerivationIndex {
fn as_path(&self, index: u32) -> DerivationPath {
match self {
DerivationIndex::Fixed => vec![],
DerivationIndex::Normal => vec![ChildNumber::Normal { index }],
DerivationIndex::Hardened => vec![ChildNumber::Hardened { index }],
}
.into()
}
}
impl Display for DerivationIndex {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let chars = match *self {
Self::Fixed => "",
Self::Normal => "/*",
Self::Hardened => "/*'",
};
write!(f, "{}", chars)
}
}
#[derive(Clone, Debug)]
pub struct DescriptorExtendedKey {
pub master_fingerprint: Option<Fingerprint>,
pub master_derivation: Option<DerivationPath>,
pub pubkey: ExtendedPubKey,
pub secret: Option<ExtendedPrivKey>,
pub path: DerivationPath,
pub final_index: DerivationIndex,
}
impl DescriptorExtendedKey {
pub fn full_path(&self, index: u32) -> DerivationPath {
let mut final_path: Vec<ChildNumber> = Vec::new();
if let Some(path) = &self.master_derivation {
let path_as_vec: Vec<ChildNumber> = path.clone().into();
final_path.extend_from_slice(&path_as_vec);
}
let our_path: Vec<ChildNumber> = self.path_with_index(index).into();
final_path.extend_from_slice(&our_path);
final_path.into()
}
pub fn path_with_index(&self, index: u32) -> DerivationPath {
let mut final_path: Vec<ChildNumber> = Vec::new();
let our_path: Vec<ChildNumber> = self.path.clone().into();
final_path.extend_from_slice(&our_path);
let other_path: Vec<ChildNumber> = self.final_index.as_path(index).into();
final_path.extend_from_slice(&other_path);
final_path.into()
}
pub fn derive<C: secp256k1::Verification + secp256k1::Signing>(
&self,
ctx: &secp256k1::Secp256k1<C>,
index: u32,
) -> Result<PublicKey, super::Error> {
Ok(self.derive_xpub(ctx, index)?.public_key)
}
pub fn derive_xpub<C: secp256k1::Verification + secp256k1::Signing>(
&self,
ctx: &secp256k1::Secp256k1<C>,
index: u32,
) -> Result<ExtendedPubKey, super::Error> {
if let Some(xprv) = self.secret {
let derive_priv = xprv.derive_priv(ctx, &self.path_with_index(index))?;
Ok(ExtendedPubKey::from_private(ctx, &derive_priv))
} else {
Ok(self.pubkey.derive_pub(ctx, &self.path_with_index(index))?)
}
}
pub fn root_xpub<C: secp256k1::Verification + secp256k1::Signing>(
&self,
ctx: &secp256k1::Secp256k1<C>,
) -> ExtendedPubKey {
if let Some(ref xprv) = self.secret {
ExtendedPubKey::from_private(ctx, xprv)
} else {
self.pubkey
}
}
}
impl Display for DescriptorExtendedKey {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(ref fingerprint) = self.master_fingerprint {
write!(f, "[{}", fingerprint.to_hex())?;
if let Some(ref path) = self.master_derivation {
write!(f, "{}", &path.to_string()[1..])?;
}
write!(f, "]")?;
}
if let Some(xprv) = self.secret {
write!(f, "{}", xprv)?
} else {
write!(f, "{}", self.pubkey)?
}
write!(f, "{}{}", &self.path.to_string()[1..], self.final_index)
}
}
impl FromStr for DescriptorExtendedKey {
type Err = super::Error;
fn from_str(inp: &str) -> Result<DescriptorExtendedKey, Self::Err> {
let len = inp.len();
let (master_fingerprint, master_derivation, offset) = match inp.starts_with("[") {
false => (None, None, 0),
true => {
if inp.len() < 9 {
return Err(super::Error::MalformedInput);
}
let master_fingerprint = &inp[1..9];
let close_bracket_index =
&inp[9..].find("]").ok_or(super::Error::MalformedInput)?;
let path = if *close_bracket_index > 0 {
Some(DerivationPath::from_str(&format!(
"m{}",
&inp[9..9 + *close_bracket_index]
))?)
} else {
None
};
(
Some(Fingerprint::from_hex(master_fingerprint)?),
path,
9 + *close_bracket_index + 1,
)
}
};
let (key_range, offset) = match &inp[offset..].find("/") {
Some(index) => (offset..offset + *index, offset + *index),
None => (offset..len, len),
};
let data = base58::from_check(&inp[key_range.clone()])?;
let secp = secp256k1::Secp256k1::new();
let (pubkey, secret) = match &data[0..4] {
[0x04u8, 0x88, 0xB2, 0x1E] | [0x04u8, 0x35, 0x87, 0xCF] => {
(ExtendedPubKey::from_str(&inp[key_range])?, None)
}
[0x04u8, 0x88, 0xAD, 0xE4] | [0x04u8, 0x35, 0x83, 0x94] => {
let private = ExtendedPrivKey::from_str(&inp[key_range])?;
(ExtendedPubKey::from_private(&secp, &private), Some(private))
}
data => return Err(super::Error::InvalidPrefix(data.into())),
};
let (path, final_index, _) = match &inp[offset..].starts_with("/") {
false => (DerivationPath::from(vec![]), DerivationIndex::Fixed, offset),
true => {
let (all, skip) = match &inp[len - 2..len] {
"/*" => (DerivationIndex::Normal, 2),
"*'" | "*h" => (DerivationIndex::Hardened, 3),
_ => (DerivationIndex::Fixed, 0),
};
if all == DerivationIndex::Hardened && secret.is_none() {
return Err(super::Error::HardenedDerivationOnXpub);
}
(
DerivationPath::from_str(&format!("m{}", &inp[offset..len - skip]))?,
all,
len,
)
}
};
if secret.is_none()
&& path.into_iter().any(|child| match child {
ChildNumber::Hardened { .. } => true,
_ => false,
})
{
return Err(super::Error::HardenedDerivationOnXpub);
}
Ok(DescriptorExtendedKey {
master_fingerprint,
master_derivation,
pubkey,
secret,
path,
final_index,
})
}
}
#[cfg(test)]
mod test {
use std::str::FromStr;
use bitcoin::hashes::hex::FromHex;
use bitcoin::util::bip32::{ChildNumber, DerivationPath};
use crate::descriptor::*;
macro_rules! hex_fingerprint {
($hex:expr) => {
Fingerprint::from_hex($hex).unwrap()
};
}
macro_rules! deriv_path {
($str:expr) => {
DerivationPath::from_str($str).unwrap()
};
() => {
DerivationPath::from(vec![])
};
}
#[test]
fn test_derivation_index_fixed() {
let index = DerivationIndex::Fixed;
assert_eq!(index.as_path(1337), DerivationPath::from(vec![]));
assert_eq!(format!("{}", index), "");
}
#[test]
fn test_derivation_index_normal() {
let index = DerivationIndex::Normal;
assert_eq!(
index.as_path(1337),
DerivationPath::from(vec![ChildNumber::Normal { index: 1337 }])
);
assert_eq!(format!("{}", index), "/*");
}
#[test]
fn test_derivation_index_hardened() {
let index = DerivationIndex::Hardened;
assert_eq!(
index.as_path(1337),
DerivationPath::from(vec![ChildNumber::Hardened { index: 1337 }])
);
assert_eq!(format!("{}", index), "/*'");
}
#[test]
fn test_parse_xpub_no_path_fixed() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.pubkey.fingerprint(), hex_fingerprint!("31a507b8"));
assert_eq!(ek.path, deriv_path!());
assert_eq!(ek.final_index, DerivationIndex::Fixed);
}
#[test]
fn test_parse_xpub_with_path_fixed() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/2/3";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.pubkey.fingerprint(), hex_fingerprint!("31a507b8"));
assert_eq!(ek.path, deriv_path!("m/1/2/3"));
assert_eq!(ek.final_index, DerivationIndex::Fixed);
}
#[test]
fn test_parse_xpub_with_path_normal() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/2/3/*";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.pubkey.fingerprint(), hex_fingerprint!("31a507b8"));
assert_eq!(ek.path, deriv_path!("m/1/2/3"));
assert_eq!(ek.final_index, DerivationIndex::Normal);
}
#[test]
#[should_panic(expected = "HardenedDerivationOnXpub")]
fn test_parse_xpub_with_path_hardened() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/*'";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.pubkey.fingerprint(), hex_fingerprint!("31a507b8"));
assert_eq!(ek.path, deriv_path!("m/1/2/3"));
assert_eq!(ek.final_index, DerivationIndex::Fixed);
}
#[test]
fn test_parse_tprv_with_path_hardened() {
let key = "tprv8ZgxMBicQKsPduL5QnGihpprdHyypMGi4DhimjtzYemu7se5YQNcZfAPLqXRuGHb5ZX2eTQj62oNqMnyxJ7B7wz54Uzswqw8fFqMVdcmVF7/1/2/3/*'";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert!(ek.secret.is_some());
assert_eq!(ek.pubkey.fingerprint(), hex_fingerprint!("5ea4190e"));
assert_eq!(ek.path, deriv_path!("m/1/2/3"));
assert_eq!(ek.final_index, DerivationIndex::Hardened);
}
#[test]
fn test_parse_xpub_master_details() {
let key = "[d34db33f/44'/0'/0']xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.master_fingerprint, Some(hex_fingerprint!("d34db33f")));
assert_eq!(ek.master_derivation, Some(deriv_path!("m/44'/0'/0'")));
}
#[test]
fn test_parse_xpub_master_details_empty_derivation() {
let key = "[d34db33f]xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.master_fingerprint, Some(hex_fingerprint!("d34db33f")));
assert_eq!(ek.master_derivation, None);
}
#[test]
#[should_panic(expected = "MalformedInput")]
fn test_parse_xpub_short_input() {
let key = "[d34d";
DescriptorExtendedKey::from_str(key).unwrap();
}
#[test]
#[should_panic(expected = "MalformedInput")]
fn test_parse_xpub_missing_closing_bracket() {
let key = "[d34db33fxpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL";
DescriptorExtendedKey::from_str(key).unwrap();
}
#[test]
#[should_panic(expected = "InvalidChar")]
fn test_parse_xpub_invalid_fingerprint() {
let key = "[d34db33z]xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL";
DescriptorExtendedKey::from_str(key).unwrap();
}
#[test]
fn test_xpub_normal_full_path() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/2/*";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.full_path(42), deriv_path!("m/1/2/42"));
}
#[test]
fn test_xpub_fixed_full_path() {
let key = "xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/2";
let ek = DescriptorExtendedKey::from_str(key).unwrap();
assert_eq!(ek.full_path(42), deriv_path!("m/1/2"));
assert_eq!(ek.full_path(1337), deriv_path!("m/1/2"));
}
}


@@ -1,181 +0,0 @@
use bitcoin::secp256k1::{All, Secp256k1};
use bitcoin::{PrivateKey, PublicKey};
use bitcoin::util::bip32::{
ChildNumber, DerivationPath, ExtendedPrivKey, ExtendedPubKey, Fingerprint,
};
use super::error::Error;
use super::extended_key::DerivationIndex;
use super::DescriptorExtendedKey;
pub(super) trait Key: std::fmt::Debug + std::fmt::Display {
fn fingerprint(&self, secp: &Secp256k1<All>) -> Option<Fingerprint>;
fn as_public_key(&self, secp: &Secp256k1<All>, index: Option<u32>) -> Result<PublicKey, Error>;
fn as_secret_key(&self) -> Option<PrivateKey>;
fn xprv(&self) -> Option<ExtendedPrivKey>;
fn full_path(&self, index: u32) -> Option<DerivationPath>;
fn is_fixed(&self) -> bool;
fn has_secret(&self) -> bool {
self.xprv().is_some() || self.as_secret_key().is_some()
}
fn public(&self, secp: &Secp256k1<All>) -> Result<Box<dyn Key>, Error> {
Ok(Box::new(self.as_public_key(secp, None)?))
}
}
impl Key for PublicKey {
fn fingerprint(&self, _secp: &Secp256k1<All>) -> Option<Fingerprint> {
None
}
fn as_public_key(
&self,
_secp: &Secp256k1<All>,
_index: Option<u32>,
) -> Result<PublicKey, Error> {
Ok(PublicKey::clone(self))
}
fn as_secret_key(&self) -> Option<PrivateKey> {
None
}
fn xprv(&self) -> Option<ExtendedPrivKey> {
None
}
fn full_path(&self, _index: u32) -> Option<DerivationPath> {
None
}
fn is_fixed(&self) -> bool {
true
}
}
impl Key for PrivateKey {
fn fingerprint(&self, _secp: &Secp256k1<All>) -> Option<Fingerprint> {
None
}
fn as_public_key(
&self,
secp: &Secp256k1<All>,
_index: Option<u32>,
) -> Result<PublicKey, Error> {
Ok(self.public_key(secp))
}
fn as_secret_key(&self) -> Option<PrivateKey> {
Some(PrivateKey::clone(self))
}
fn xprv(&self) -> Option<ExtendedPrivKey> {
None
}
fn full_path(&self, _index: u32) -> Option<DerivationPath> {
None
}
fn is_fixed(&self) -> bool {
true
}
}
impl Key for DescriptorExtendedKey {
fn fingerprint(&self, secp: &Secp256k1<All>) -> Option<Fingerprint> {
if let Some(fing) = self.master_fingerprint {
Some(fing.clone())
} else {
Some(self.root_xpub(secp).fingerprint())
}
}
fn as_public_key(&self, secp: &Secp256k1<All>, index: Option<u32>) -> Result<PublicKey, Error> {
Ok(self.derive_xpub(secp, index.unwrap_or(0))?.public_key)
}
fn public(&self, secp: &Secp256k1<All>) -> Result<Box<dyn Key>, Error> {
if self.final_index == DerivationIndex::Hardened {
return Err(Error::HardenedDerivationOnXpub);
}
if self.xprv().is_none() {
return Ok(Box::new(self.clone()));
}
// copy the part of the path that can be derived on the xpub
let path = self
.path
.into_iter()
.rev()
.take_while(|child| match child {
ChildNumber::Normal { .. } => true,
_ => false,
})
.cloned()
.collect::<Vec<_>>();
// take the prefix that has to be derived on the xprv
let master_derivation_add = self
.path
.into_iter()
.take(self.path.as_ref().len() - path.len())
.cloned()
.collect::<Vec<_>>();
let has_derived = !master_derivation_add.is_empty();
let derived_xprv = self
.secret
.as_ref()
.unwrap()
.derive_priv(secp, &master_derivation_add)?;
let pubkey = ExtendedPubKey::from_private(secp, &derived_xprv);
let master_derivation = self
.master_derivation
.as_ref()
.map_or(vec![], |path| path.as_ref().to_vec())
.into_iter()
.chain(master_derivation_add.into_iter())
.collect::<Vec<_>>();
let master_derivation = match &master_derivation[..] {
&[] => None,
child_vec => Some(child_vec.into()),
};
let master_fingerprint = match self.master_fingerprint {
Some(desc) => Some(desc.clone()),
None if has_derived => Some(self.fingerprint(secp).unwrap()),
_ => None,
};
Ok(Box::new(DescriptorExtendedKey {
master_fingerprint,
master_derivation,
pubkey,
secret: None,
path: path.into(),
final_index: self.final_index,
}))
}
fn as_secret_key(&self) -> Option<PrivateKey> {
None
}
fn xprv(&self) -> Option<ExtendedPrivKey> {
self.secret
}
fn full_path(&self, index: u32) -> Option<DerivationPath> {
Some(self.full_path(index))
}
fn is_fixed(&self) -> bool {
self.final_index == DerivationIndex::Fixed
}
}
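The `public()` method above splits the derivation path in two: the trailing run of normal (unhardened) steps can be derived directly on the resulting xpub, while the hardened prefix must first be applied to the xprv. A stdlib-only sketch of that split, with a toy `Child` enum standing in for rust-bitcoin's `ChildNumber` (this sketch keeps the suffix in its original order):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Child {
    Normal(u32),
    Hardened(u32),
}

// Split a path into (hardened prefix for the xprv, normal suffix for the xpub).
fn split_path(path: &[Child]) -> (Vec<Child>, Vec<Child>) {
    // Walk backwards, collecting normal steps until the first hardened one.
    let mut xpub_part: Vec<Child> = path
        .iter()
        .rev()
        .copied()
        .take_while(|c| matches!(c, Child::Normal(_)))
        .collect();
    xpub_part.reverse(); // restore original order
    // Whatever remains at the front must be derived on the xprv.
    let xprv_part = path[..path.len() - xpub_part.len()].to_vec();
    (xprv_part, xpub_part)
}

fn main() {
    let path = [
        Child::Hardened(44),
        Child::Hardened(0),
        Child::Normal(0),
        Child::Normal(1),
    ];
    let (xprv_part, xpub_part) = split_path(&path);
    assert_eq!(xprv_part, vec![Child::Hardened(44), Child::Hardened(0)]);
    assert_eq!(xpub_part, vec![Child::Normal(0), Child::Normal(1)]);
}
```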

File diff suppressed because it is too large

File diff suppressed because it is too large

src/descriptor/template.rs Normal file

@@ -0,0 +1,726 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Descriptor templates
//!
//! This module contains the definition of various common script templates that are ready to be
//! used. See the documentation of each template for an example.
use bitcoin::util::bip32;
use bitcoin::Network;
use miniscript::{Legacy, Segwitv0};
use super::{ExtendedDescriptor, IntoWalletDescriptor, KeyMap};
use crate::descriptor::DescriptorError;
use crate::keys::{DerivableKey, IntoDescriptorKey, ValidNetworks};
use crate::wallet::utils::SecpCtx;
use crate::{descriptor, KeychainKind};
/// Type alias for the return type of [`DescriptorTemplate`], [`descriptor!`](crate::descriptor!) and others
pub type DescriptorTemplateOut = (ExtendedDescriptor, KeyMap, ValidNetworks);
/// Trait for descriptor templates that can be built into a full descriptor
///
/// Since [`IntoWalletDescriptor`] is implemented for any [`DescriptorTemplate`], they can also be
/// passed directly to the [`Wallet`](crate::Wallet) constructor.
///
/// ## Example
///
/// ```
/// use bdk::descriptor::error::Error as DescriptorError;
/// use bdk::keys::{IntoDescriptorKey, KeyError};
/// use bdk::miniscript::Legacy;
/// use bdk::template::{DescriptorTemplate, DescriptorTemplateOut};
///
/// struct MyP2PKH<K: IntoDescriptorKey<Legacy>>(K);
///
/// impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for MyP2PKH<K> {
/// fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
/// Ok(bdk::descriptor!(pkh(self.0))?)
/// }
/// }
/// ```
pub trait DescriptorTemplate {
/// Build the complete descriptor
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError>;
}
/// Turns a [`DescriptorTemplate`] into a valid wallet descriptor by calling its
/// [`build`](DescriptorTemplate::build) method
impl<T: DescriptorTemplate> IntoWalletDescriptor for T {
fn into_wallet_descriptor(
self,
secp: &SecpCtx,
network: Network,
) -> Result<(ExtendedDescriptor, KeyMap), DescriptorError> {
self.build()?.into_wallet_descriptor(secp, network)
}
}
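This blanket impl is why templates can be passed straight to the `Wallet` constructor: every `DescriptorTemplate` gets `IntoWalletDescriptor` for free. A minimal stdlib-only sketch of the same pattern, with toy trait and type names (not BDK's):

```rust
trait Build {
    fn build(self) -> String;
}

trait Describe {
    fn describe(self) -> String;
}

// Blanket impl: one generic impl covers every Build implementor,
// mirroring how every DescriptorTemplate gets IntoWalletDescriptor.
impl<T: Build> Describe for T {
    fn describe(self) -> String {
        format!("descriptor: {}", self.build())
    }
}

struct Tmpl;

impl Build for Tmpl {
    fn build(self) -> String {
        "pkh(key)".to_string()
    }
}

fn main() {
    // Tmpl never implements Describe directly, yet describe() is available.
    assert_eq!(Tmpl.describe(), "descriptor: pkh(key)");
}
```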
/// P2PKH template. Expands to a descriptor `pkh(key)`
///
/// ## Example
///
/// ```
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::P2Pkh;
///
/// let key =
/// bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
/// let wallet = Wallet::new_offline(
/// P2Pkh(key),
/// None,
/// Network::Testnet,
/// MemoryDatabase::default(),
/// )?;
///
/// assert_eq!(
/// wallet.get_address(New)?.to_string(),
/// "mwJ8hxFYW19JLuc65RCTaP4v1rzVU8cVMT"
/// );
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct P2Pkh<K: IntoDescriptorKey<Legacy>>(pub K);
impl<K: IntoDescriptorKey<Legacy>> DescriptorTemplate for P2Pkh<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
descriptor!(pkh(self.0))
}
}
/// P2WPKH-P2SH template. Expands to a descriptor `sh(wpkh(key))`
///
/// ## Example
///
/// ```
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::P2Wpkh_P2Sh;
///
/// let key =
/// bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
/// let wallet = Wallet::new_offline(
/// P2Wpkh_P2Sh(key),
/// None,
/// Network::Testnet,
/// MemoryDatabase::default(),
/// )?;
///
/// assert_eq!(
/// wallet.get_address(New)?.to_string(),
/// "2NB4ox5VDRw1ecUv6SnT3VQHPXveYztRqk5"
/// );
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
#[allow(non_camel_case_types)]
pub struct P2Wpkh_P2Sh<K: IntoDescriptorKey<Segwitv0>>(pub K);
impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh_P2Sh<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
descriptor!(sh(wpkh(self.0)))
}
}
/// P2WPKH template. Expands to a descriptor `wpkh(key)`
///
/// ## Example
///
/// ```
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::P2Wpkh;
///
/// let key =
/// bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")?;
/// let wallet = Wallet::new_offline(
/// P2Wpkh(key),
/// None,
/// Network::Testnet,
/// MemoryDatabase::default(),
/// )?;
///
/// assert_eq!(
/// wallet.get_address(New)?.to_string(),
/// "tb1q4525hmgw265tl3drrl8jjta7ayffu6jf68ltjd"
/// );
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct P2Wpkh<K: IntoDescriptorKey<Segwitv0>>(pub K);
impl<K: IntoDescriptorKey<Segwitv0>> DescriptorTemplate for P2Wpkh<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
descriptor!(wpkh(self.0))
}
}
/// BIP44 template. Expands to `pkh(key/44'/0'/0'/{0,1}/*)`
///
/// Since there are hardened derivation steps, this template requires a private derivable key (generally an `xprv`/`tprv`).
///
/// See [`Bip44Public`] for a template that can work with an `xpub`/`tpub`.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip44;
///
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
/// let wallet = Wallet::new_offline(
/// Bip44(key.clone(), KeychainKind::External),
/// Some(Bip44(key, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "miNG7dJTzJqNbFS19svRdTCisC65dsubtR");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "pkh([c55b303f/44'/0'/0']tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU/0/*)#xgaaevjx");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip44<K: DerivableKey<Legacy>>(pub K, pub KeychainKind);
impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Pkh(legacy::make_bipxx_private(44, self.0, self.1)?).build()
}
}
/// BIP44 public template. Expands to `pkh(key/{0,1}/*)`
///
/// This assumes that the key used has already been derived with `m/44'/0'/0'`.
///
/// This template requires the parent fingerprint to correctly populate the PSBT metadata.
///
/// See [`Bip44`] for a template that does the full derivation, but requires private data
/// for the key.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip44Public;
///
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU")?;
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
/// let wallet = Wallet::new_offline(
/// Bip44Public(key.clone(), fingerprint, KeychainKind::External),
/// Some(Bip44Public(key, fingerprint, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "miNG7dJTzJqNbFS19svRdTCisC65dsubtR");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "pkh([c55b303f/44'/0'/0']tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU/0/*)#xgaaevjx");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip44Public<K: DerivableKey<Legacy>>(pub K, pub bip32::Fingerprint, pub KeychainKind);
impl<K: DerivableKey<Legacy>> DescriptorTemplate for Bip44Public<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Pkh(legacy::make_bipxx_public(44, self.0, self.1, self.2)?).build()
}
}
/// BIP49 template. Expands to `sh(wpkh(key/49'/0'/0'/{0,1}/*))`
///
/// Since there are hardened derivation steps, this template requires a private derivable key (generally an `xprv`/`tprv`).
///
/// See [`Bip49Public`] for a template that can work with an `xpub`/`tpub`.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip49;
///
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
/// let wallet = Wallet::new_offline(
/// Bip49(key.clone(), KeychainKind::External),
/// Some(Bip49(key, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "2N3K4xbVAHoiTQSwxkZjWDfKoNC27pLkYnt");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49\'/0\'/0\']tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L/0/*))#gsmdv4xr");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip49<K: DerivableKey<Segwitv0>>(pub K, pub KeychainKind);
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Wpkh_P2Sh(segwit_v0::make_bipxx_private(49, self.0, self.1)?).build()
}
}
/// BIP49 public template. Expands to `sh(wpkh(key/{0,1}/*))`
///
/// This assumes that the key used has already been derived with `m/49'/0'/0'`.
///
/// This template requires the parent fingerprint to correctly populate the PSBT metadata.
///
/// See [`Bip49`] for a template that does the full derivation, but requires private data
/// for the key.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip49Public;
///
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L")?;
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
/// let wallet = Wallet::new_offline(
/// Bip49Public(key.clone(), fingerprint, KeychainKind::External),
/// Some(Bip49Public(key, fingerprint, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "2N3K4xbVAHoiTQSwxkZjWDfKoNC27pLkYnt");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "sh(wpkh([c55b303f/49\'/0\'/0\']tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L/0/*))#gsmdv4xr");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip49Public<K: DerivableKey<Segwitv0>>(pub K, pub bip32::Fingerprint, pub KeychainKind);
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip49Public<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Wpkh_P2Sh(segwit_v0::make_bipxx_public(49, self.0, self.1, self.2)?).build()
}
}
/// BIP84 template. Expands to `wpkh(key/84'/0'/0'/{0,1}/*)`
///
/// Since there are hardened derivation steps, this template requires a private derivable key (generally an `xprv`/`tprv`).
///
/// See [`Bip84Public`] for a template that can work with an `xpub`/`tpub`.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip84;
///
/// let key = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPeZRHk4rTG6orPS2CRNFX3njhUXx5vj9qGog5ZMH4uGReDWN5kCkY3jmWEtWause41CDvBRXD1shKknAMKxT99o9qUTRVC6m")?;
/// let wallet = Wallet::new_offline(
/// Bip84(key.clone(), KeychainKind::External),
/// Some(Bip84(key, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "tb1qedg9fdlf8cnnqfd5mks6uz5w4kgpk2pr6y4qc7");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "wpkh([c55b303f/84\'/0\'/0\']tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q/0/*)#nkk5dtkg");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip84<K: DerivableKey<Segwitv0>>(pub K, pub KeychainKind);
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Wpkh(segwit_v0::make_bipxx_private(84, self.0, self.1)?).build()
}
}
/// BIP84 public template. Expands to `wpkh(key/{0,1}/*)`
///
/// This assumes that the key used has already been derived with `m/84'/0'/0'`.
///
/// This template requires the parent fingerprint to correctly populate the PSBT metadata.
///
/// See [`Bip84`] for a template that does the full derivation, but requires private data
/// for the key.
///
/// ## Example
///
/// ```
/// # use std::str::FromStr;
/// # use bdk::bitcoin::{PrivateKey, Network};
/// # use bdk::{Wallet, KeychainKind};
/// # use bdk::database::MemoryDatabase;
/// # use bdk::wallet::AddressIndex::New;
/// use bdk::template::Bip84Public;
///
/// let key = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q")?;
/// let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f")?;
/// let wallet = Wallet::new_offline(
/// Bip84Public(key.clone(), fingerprint, KeychainKind::External),
/// Some(Bip84Public(key, fingerprint, KeychainKind::Internal)),
/// Network::Testnet,
/// MemoryDatabase::default()
/// )?;
///
/// assert_eq!(wallet.get_address(New)?.to_string(), "tb1qedg9fdlf8cnnqfd5mks6uz5w4kgpk2pr6y4qc7");
/// assert_eq!(wallet.public_descriptor(KeychainKind::External)?.unwrap().to_string(), "wpkh([c55b303f/84\'/0\'/0\']tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q/0/*)#nkk5dtkg");
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub struct Bip84Public<K: DerivableKey<Segwitv0>>(pub K, pub bip32::Fingerprint, pub KeychainKind);
impl<K: DerivableKey<Segwitv0>> DescriptorTemplate for Bip84Public<K> {
fn build(self) -> Result<DescriptorTemplateOut, DescriptorError> {
P2Wpkh(segwit_v0::make_bipxx_public(84, self.0, self.1, self.2)?).build()
}
}
macro_rules! expand_make_bipxx {
( $mod_name:ident, $ctx:ty ) => {
mod $mod_name {
use super::*;
pub(super) fn make_bipxx_private<K: DerivableKey<$ctx>>(
bip: u32,
key: K,
keychain: KeychainKind,
) -> Result<impl IntoDescriptorKey<$ctx>, DescriptorError> {
let mut derivation_path = Vec::with_capacity(4);
derivation_path.push(bip32::ChildNumber::from_hardened_idx(bip)?);
derivation_path.push(bip32::ChildNumber::from_hardened_idx(0)?);
derivation_path.push(bip32::ChildNumber::from_hardened_idx(0)?);
match keychain {
KeychainKind::External => {
derivation_path.push(bip32::ChildNumber::from_normal_idx(0)?)
}
KeychainKind::Internal => {
derivation_path.push(bip32::ChildNumber::from_normal_idx(1)?)
}
};
let derivation_path: bip32::DerivationPath = derivation_path.into();
Ok((key, derivation_path))
}
pub(super) fn make_bipxx_public<K: DerivableKey<$ctx>>(
bip: u32,
key: K,
parent_fingerprint: bip32::Fingerprint,
keychain: KeychainKind,
) -> Result<impl IntoDescriptorKey<$ctx>, DescriptorError> {
let derivation_path: bip32::DerivationPath = match keychain {
KeychainKind::External => vec![bip32::ChildNumber::from_normal_idx(0)?].into(),
KeychainKind::Internal => vec![bip32::ChildNumber::from_normal_idx(1)?].into(),
};
let source_path = bip32::DerivationPath::from(vec![
bip32::ChildNumber::from_hardened_idx(bip)?,
bip32::ChildNumber::from_hardened_idx(0)?,
bip32::ChildNumber::from_hardened_idx(0)?,
]);
Ok((key, (parent_fingerprint, source_path), derivation_path))
}
}
};
}
expand_make_bipxx!(legacy, Legacy);
expand_make_bipxx!(segwit_v0, Segwitv0);
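The helpers generated by `expand_make_bipxx!` build the hardened account prefix `m/bip'/0'/0'` plus a normal keychain step (`0` for external, `1` for internal); the public variant keeps only the keychain step and records the prefix as the key's origin. A hedged stdlib-only sketch of the private-path shape (string form only, not BDK's types):

```rust
// Toy string rendering of the path make_bipxx_private constructs:
// m/<bip>'/0'/0'/<keychain>, where <keychain> is 0 (external) or 1 (internal).
fn bipxx_private_path(bip: u32, internal: bool) -> String {
    let keychain = if internal { 1 } else { 0 };
    format!("m/{}'/0'/0'/{}", bip, keychain)
}

fn main() {
    assert_eq!(bipxx_private_path(84, false), "m/84'/0'/0'/0");
    assert_eq!(bipxx_private_path(44, true), "m/44'/0'/0'/1");
}
```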
#[cfg(test)]
mod test {
// test existing descriptor templates, make sure they are expanded to the right descriptors
use std::str::FromStr;
use super::*;
use crate::descriptor::derived::AsDerived;
use crate::descriptor::{DescriptorError, DescriptorMeta};
use crate::keys::ValidNetworks;
use bitcoin::network::constants::Network::Regtest;
use bitcoin::secp256k1::Secp256k1;
use miniscript::descriptor::{DescriptorPublicKey, DescriptorTrait, KeyMap};
use miniscript::Descriptor;
// verify template descriptor generates expected address(es)
fn check(
desc: Result<(Descriptor<DescriptorPublicKey>, KeyMap, ValidNetworks), DescriptorError>,
is_witness: bool,
is_fixed: bool,
expected: &[&str],
) {
let secp = Secp256k1::new();
let (desc, _key_map, _networks) = desc.unwrap();
assert_eq!(desc.is_witness(), is_witness);
assert_eq!(!desc.is_deriveable(), is_fixed);
for i in 0..expected.len() {
let index = i as u32;
let child_desc = if !desc.is_deriveable() {
desc.as_derived_fixed(&secp)
} else {
desc.as_derived(index, &secp)
};
let address = child_desc.address(Regtest).unwrap();
assert_eq!(address.to_string(), *expected.get(i).unwrap());
}
}
// P2PKH
#[test]
fn test_p2ph_template() {
let prvkey =
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
.unwrap();
check(
P2Pkh(prvkey).build(),
false,
true,
&["mwJ8hxFYW19JLuc65RCTaP4v1rzVU8cVMT"],
);
let pubkey = bitcoin::PublicKey::from_str(
"03a34b99f22c790c4e36b2b3c2c35a36db06226e41c692fc82b8b56ac1c540c5bd",
)
.unwrap();
check(
P2Pkh(pubkey).build(),
false,
true,
&["muZpTpBYhxmRFuCjLc7C6BBDF32C8XVJUi"],
);
}
// P2WPKH-P2SH `sh(wpkh(key))`
#[test]
fn test_p2wphp2sh_template() {
let prvkey =
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
.unwrap();
check(
P2Wpkh_P2Sh(prvkey).build(),
true,
true,
&["2NB4ox5VDRw1ecUv6SnT3VQHPXveYztRqk5"],
);
let pubkey = bitcoin::PublicKey::from_str(
"03a34b99f22c790c4e36b2b3c2c35a36db06226e41c692fc82b8b56ac1c540c5bd",
)
.unwrap();
check(
P2Wpkh_P2Sh(pubkey).build(),
true,
true,
&["2N5LiC3CqzxDamRTPG1kiNv1FpNJQ7x28sb"],
);
}
// P2WPKH `wpkh(key)`
#[test]
fn test_p2wph_template() {
let prvkey =
bitcoin::PrivateKey::from_wif("cTc4vURSzdx6QE6KVynWGomDbLaA75dNALMNyfjh3p8DRRar84Um")
.unwrap();
check(
P2Wpkh(prvkey).build(),
true,
true,
&["bcrt1q4525hmgw265tl3drrl8jjta7ayffu6jfcwxx9y"],
);
let pubkey = bitcoin::PublicKey::from_str(
"03a34b99f22c790c4e36b2b3c2c35a36db06226e41c692fc82b8b56ac1c540c5bd",
)
.unwrap();
check(
P2Wpkh(pubkey).build(),
true,
true,
&["bcrt1qngw83fg8dz0k749cg7k3emc7v98wy0c7azaa6h"],
);
}
// BIP44 `pkh(key/44'/0'/0'/{0,1}/*)`
#[test]
fn test_bip44_template() {
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
check(
Bip44(prvkey, KeychainKind::External).build(),
false,
false,
&[
"n453VtnjDHPyDt2fDstKSu7A3YCJoHZ5g5",
"mvfrrumXgTtwFPWDNUecBBgzuMXhYM7KRP",
"mzYvhRAuQqbdSKMVVzXNYyqihgNdRadAUQ",
],
);
check(
Bip44(prvkey, KeychainKind::Internal).build(),
false,
false,
&[
"muHF98X9KxEzdKrnFAX85KeHv96eXopaip",
"n4hpyLJE5ub6B5Bymv4eqFxS5KjrewSmYR",
"mgvkdv1ffmsXd2B1sRKQ5dByK3SzpG42rA",
],
);
}
// BIP44 public `pkh(key/{0,1}/*)`
#[test]
fn test_bip44_public_template() {
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDDDzQ31JkZB7VxUr9bjvBivDdqoFLrDPyLWtLapArAi51ftfmCb2DPxwLQzX65iNcXz1DGaVvyvo6JQ6rTU73r2gqdEo8uov9QKRb7nKCSU").unwrap();
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
check(
Bip44Public(pubkey, fingerprint, KeychainKind::External).build(),
false,
false,
&[
"miNG7dJTzJqNbFS19svRdTCisC65dsubtR",
"n2UqaDbCjWSFJvpC84m3FjUk5UaeibCzYg",
"muCPpS6Ue7nkzeJMWDViw7Lkwr92Yc4K8g",
],
);
check(
Bip44Public(pubkey, fingerprint, KeychainKind::Internal).build(),
false,
false,
&[
"moDr3vJ8wpt5nNxSK55MPq797nXJb2Ru9H",
"ms7A1Yt4uTezT2XkefW12AvLoko8WfNJMG",
"mhYiyat2rtEnV77cFfQsW32y1m2ceCGHPo",
],
);
}
// BIP49 `sh(wpkh(key/49'/0'/0'/{0,1}/*))`
#[test]
fn test_bip49_template() {
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
check(
Bip49(prvkey, KeychainKind::External).build(),
true,
false,
&[
"2N9bCAJXGm168MjVwpkBdNt6ucka3PKVoUV",
"2NDckYkqrYyDMtttEav5hB3Bfw9EGAW5HtS",
"2NAFTVtksF9T4a97M7nyCjwUBD24QevZ5Z4",
],
);
check(
Bip49(prvkey, KeychainKind::Internal).build(),
true,
false,
&[
"2NB3pA8PnzJLGV8YEKNDFpbViZv3Bm1K6CG",
"2NBiX2Wzxngb5rPiWpUiJQ2uLVB4HBjFD4p",
"2NA8ek4CdQ6aMkveYF6AYuEYNrftB47QGTn",
],
);
}
// BIP49 public `sh(wpkh(key/{0,1}/*))`
#[test]
fn test_bip49_public_template() {
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC49r947KGK52X5rBWS4BLs5m9SRY3pYHnvRrm7HcybZ3BfdEsGFyzCMzayi1u58eT82ZeyFZwH7DD6Q83E3fM9CpfMtmnTygnLfP59jL9L").unwrap();
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
check(
Bip49Public(pubkey, fingerprint, KeychainKind::External).build(),
true,
false,
&[
"2N3K4xbVAHoiTQSwxkZjWDfKoNC27pLkYnt",
"2NCTQfJ1sZa3wQ3pPseYRHbaNEpC3AquEfX",
"2MveFxAuC8BYPzTybx7FxSzW8HSd8ATT4z7",
],
);
check(
Bip49Public(pubkey, fingerprint, KeychainKind::Internal).build(),
true,
false,
&[
"2NF2vttKibwyxigxtx95Zw8K7JhDbo5zPVJ",
"2Mtmyd8taksxNVWCJ4wVvaiss7QPZGcAJuH",
"2NBs3CTVYPr1HCzjB4YFsnWCPCtNg8uMEfp",
],
);
}
// BIP84 `wpkh(key/84'/0'/0'/{0,1}/*)`
#[test]
fn test_bip84_template() {
let prvkey = bitcoin::util::bip32::ExtendedPrivKey::from_str("tprv8ZgxMBicQKsPcx5nBGsR63Pe8KnRUqmbJNENAfGftF3yuXoMMoVJJcYeUw5eVkm9WBPjWYt6HMWYJNesB5HaNVBaFc1M6dRjWSYnmewUMYy").unwrap();
check(
Bip84(prvkey, KeychainKind::External).build(),
true,
false,
&[
"bcrt1qkmvk2nadgplmd57ztld8nf8v2yxkzmdvwtjf8s",
"bcrt1qx0v6zgfwe50m4kqc58cqzcyem7ay2sfl3gvqhp",
"bcrt1q4h7fq9zhxst6e69p3n882nfj649l7w9g3zccfp",
],
);
check(
Bip84(prvkey, KeychainKind::Internal).build(),
true,
false,
&[
"bcrt1qtrwtz00wxl69e5xex7amy4xzlxkaefg3gfdkxa",
"bcrt1qqqasfhxpkkf7zrxqnkr2sfhn74dgsrc3e3ky45",
"bcrt1qpks7n0gq74hsgsz3phn5vuazjjq0f5eqhsgyce",
],
);
}
// BIP84 public `wpkh(key/{0,1}/*)`
#[test]
fn test_bip84_public_template() {
let pubkey = bitcoin::util::bip32::ExtendedPubKey::from_str("tpubDC2Qwo2TFsaNC4ju8nrUJ9mqVT3eSgdmy1yPqhgkjwmke3PRXutNGRYAUo6RCHTcVQaDR3ohNU9we59brGHuEKPvH1ags2nevW5opEE9Z5Q").unwrap();
let fingerprint = bitcoin::util::bip32::Fingerprint::from_str("c55b303f").unwrap();
check(
Bip84Public(pubkey, fingerprint, KeychainKind::External).build(),
true,
false,
&[
"bcrt1qedg9fdlf8cnnqfd5mks6uz5w4kgpk2prcdvd0h",
"bcrt1q3lncdlwq3lgcaaeyruynjnlccr0ve0kakh6ana",
"bcrt1qt9800y6xl3922jy3uyl0z33jh5wfpycyhcylr9",
],
);
check(
Bip84Public(pubkey, fingerprint, KeychainKind::Internal).build(),
true,
false,
&[
"bcrt1qm6wqukenh7guu792lj2njgw9n78cmwsy8xy3z2",
"bcrt1q694twxtjn4nnrvnyvra769j0a23rllj5c6cgwp",
"bcrt1qhlac3c5ranv5w5emlnqs7wxhkxt8maelylcarp",
],
);
}
}

src/doctest.rs Normal file

@@ -0,0 +1,14 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
#[doc(include = "../README.md")]
#[cfg(doctest)]
pub struct ReadmeDoctests;


@@ -1,83 +1,228 @@
use bitcoin::{OutPoint, Script, Txid};
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::fmt;
use crate::bitcoin::Network;
use crate::{descriptor, wallet, wallet::address_validator};
use bitcoin::OutPoint;
/// Errors that can be thrown by the [`Wallet`](crate::wallet::Wallet)
#[derive(Debug)]
pub enum Error {
KeyMismatch(bitcoin::secp256k1::PublicKey, bitcoin::secp256k1::PublicKey),
MissingInputUTXO(usize),
/// Wrong number of bytes found when trying to convert to u32
InvalidU32Bytes(Vec<u8>),
/// Generic error
Generic(String),
/// This error is thrown when trying to convert Bare and Public key script to address
ScriptDoesntHaveAddressForm,
SendAllMultipleOutputs,
/// Cannot build a tx without recipients
NoRecipients,
/// `manually_selected_only` option is selected but no utxo has been passed
NoUtxosSelected,
/// Output created is under the dust limit, 546 satoshis
OutputBelowDustLimit(usize),
InsufficientFunds,
UnknownUTXO,
DifferentTransactions,
/// Wallet's UTXO set is not enough to cover the recipient's requested amount plus fee
InsufficientFunds {
/// Sats needed for some transaction
needed: u64,
/// Sats available for spending
available: u64,
},
/// The number of attempts branch and bound coin selection can make grows exponentially with a
/// sufficiently large UTXO set, so a limit is set; when it is hit, this error is thrown
BnBTotalTriesExceeded,
/// Branch and bound coin selection tries to avoid needing a change output by finding inputs that
/// exactly match the desired outputs plus fee; if no such combination exists, this error is thrown
BnBNoExactMatch,
/// Happens when trying to spend a UTXO that is not in the internal database
UnknownUtxo,
/// Thrown when a tx is not found in the internal database
TransactionNotFound,
/// Happens when trying to bump a transaction that is already confirmed
TransactionConfirmed,
/// Trying to replace a tx that has a sequence >= `0xFFFFFFFE`
IrreplaceableTransaction,
/// When bumping a tx the fee rate requested is lower than required
FeeRateTooLow {
/// Required fee rate (satoshi/vbyte)
required: crate::types::FeeRate,
},
/// When bumping a tx the absolute fee requested is lower than replaced tx absolute fee
FeeTooLow {
/// Required fee absolute value (satoshi)
required: u64,
},
/// Node doesn't have data to estimate a fee rate
FeeRateUnavailable,
/// In order to use the [`TxBuilder::add_global_xpubs`] option every extended
/// key in the descriptor must either be a master key itself (having depth = 0) or have an
/// explicit origin provided
///
/// [`TxBuilder::add_global_xpubs`]: crate::wallet::tx_builder::TxBuilder::add_global_xpubs
MissingKeyOrigin(String),
/// Error while working with [`keys`](crate::keys)
Key(crate::keys::KeyError),
/// Descriptor checksum mismatch
ChecksumMismatch,
DifferentDescriptorStructure,
SpendingPolicyRequired,
/// Spending policy is not compatible with this [`KeychainKind`](crate::types::KeychainKind)
SpendingPolicyRequired(crate::types::KeychainKind),
/// Error while extracting and manipulating policies
InvalidPolicyPathError(crate::descriptor::policy::PolicyError),
/// Signing error
Signer(crate::wallet::signer::SignerError),
/// Invalid network
InvalidNetwork {
/// requested network, for example what is given as a bdk-cli option
requested: Network,
/// found network, for example the network of the bitcoin node
found: Network,
},
#[cfg(feature = "verify")]
/// Transaction verification error
Verification(crate::wallet::verify::VerifyError),
// Signing errors (expected, received)
InputTxidMismatch((Txid, OutPoint)),
InputRedeemScriptMismatch((Script, Script)), // scriptPubKey, redeemScript
InputWitnessScriptMismatch((Script, Script)), // scriptPubKey, redeemScript
InputUnknownSegwitScript(Script),
InputMissingWitnessScript(usize),
MissingUTXO,
// Blockchain interface errors
Uncapable(crate::blockchain::Capability),
OfflineClient,
/// Progress value must be between `0.0` (included) and `100.0` (included)
InvalidProgressValue(f32),
/// Progress update error (maybe the channel has been closed)
ProgressUpdateError,
MissingCachedAddresses,
/// Requested outpoint doesn't exist in the tx (vout greater than available outputs)
InvalidOutpoint(OutPoint),
/// Error related to the parsing and usage of descriptors
Descriptor(crate::descriptor::error::Error),
/// Error that can be returned to fail the validation of an address
AddressValidator(crate::wallet::address_validator::AddressValidatorError),
/// Encoding error
Encode(bitcoin::consensus::encode::Error),
BIP32(bitcoin::util::bip32::Error),
/// Miniscript error
Miniscript(miniscript::Error),
/// BIP32 error
Bip32(bitcoin::util::bip32::Error),
/// An ECDSA error
Secp256k1(bitcoin::secp256k1::Error),
JSON(serde_json::Error),
/// Error serializing or deserializing JSON data
Json(serde_json::Error),
/// Hex decoding error
Hex(bitcoin::hashes::hex::Error),
PSBT(bitcoin::util::psbt::Error),
/// Partially signed bitcoin transaction error
Psbt(bitcoin::util::psbt::Error),
/// Partially signed bitcoin transaction parse error
PsbtParse(bitcoin::util::psbt::PsbtParseError),
//KeyMismatch(bitcoin::secp256k1::PublicKey, bitcoin::secp256k1::PublicKey),
//MissingInputUTXO(usize),
//InvalidAddressNetwork(Address),
//DifferentTransactions,
//DifferentDescriptorStructure,
//Uncapable(crate::blockchain::Capability),
//MissingCachedAddresses,
#[cfg(feature = "electrum")]
/// Electrum client error
Electrum(electrum_client::Error),
#[cfg(feature = "esplora")]
Esplora(crate::blockchain::esplora::EsploraError),
/// Esplora client error
Esplora(Box<crate::blockchain::esplora::EsploraError>),
#[cfg(feature = "compact_filters")]
/// Compact filters client error
CompactFilters(crate::blockchain::compact_filters::CompactFiltersError),
#[cfg(feature = "key-value-db")]
/// Sled database error
Sled(sled::Error),
#[cfg(feature = "rpc")]
/// Rpc client error
Rpc(bitcoincore_rpc::Error),
#[cfg(feature = "sqlite")]
/// Rusqlite client error
Rusqlite(rusqlite::Error),
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for Error {}
macro_rules! impl_error {
( $from:ty, $to:ident ) => {
impl_error!($from, $to, Error);
};
( $from:ty, $to:ident, $impl_for:ty ) => {
impl std::convert::From<$from> for $impl_for {
fn from(err: $from) -> Self {
<$impl_for>::$to(err)
}
}
};
}
impl_error!(descriptor::error::Error, Descriptor);
impl_error!(address_validator::AddressValidatorError, AddressValidator);
impl_error!(descriptor::policy::PolicyError, InvalidPolicyPathError);
impl_error!(wallet::signer::SignerError, Signer);
impl From<crate::keys::KeyError> for Error {
fn from(key_error: crate::keys::KeyError) -> Error {
match key_error {
crate::keys::KeyError::Miniscript(inner) => Error::Miniscript(inner),
crate::keys::KeyError::Bip32(inner) => Error::Bip32(inner),
crate::keys::KeyError::InvalidChecksum => Error::ChecksumMismatch,
e => Error::Key(e),
}
}
}
impl_error!(bitcoin::consensus::encode::Error, Encode);
impl_error!(miniscript::Error, Miniscript);
impl_error!(bitcoin::util::bip32::Error, Bip32);
impl_error!(bitcoin::secp256k1::Error, Secp256k1);
impl_error!(serde_json::Error, Json);
impl_error!(bitcoin::hashes::hex::Error, Hex);
impl_error!(bitcoin::util::psbt::Error, Psbt);
impl_error!(bitcoin::util::psbt::PsbtParseError, PsbtParse);
#[cfg(feature = "electrum")]
impl_error!(electrum_client::Error, Electrum);
#[cfg(feature = "key-value-db")]
impl_error!(sled::Error, Sled);
#[cfg(feature = "rpc")]
impl_error!(bitcoincore_rpc::Error, Rpc);
#[cfg(feature = "sqlite")]
impl_error!(rusqlite::Error, Rusqlite);
#[cfg(feature = "compact_filters")]
impl From<crate::blockchain::compact_filters::CompactFiltersError> for Error {
fn from(other: crate::blockchain::compact_filters::CompactFiltersError) -> Self {
match other {
crate::blockchain::compact_filters::CompactFiltersError::Global(e) => *e,
err => Error::CompactFilters(err),
}
}
}
#[cfg(feature = "verify")]
impl From<crate::wallet::verify::VerifyError> for Error {
fn from(other: crate::wallet::verify::VerifyError) -> Self {
match other {
crate::wallet::verify::VerifyError::Global(inner) => *inner,
err => Error::Verification(err),
}
}
}
#[cfg(feature = "esplora")]
impl From<crate::blockchain::esplora::EsploraError> for Error {
fn from(other: crate::blockchain::esplora::EsploraError) -> Self {
Error::Esplora(Box::new(other))
}
}
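The `impl_error!` macro above exists so that `?` can silently convert a library error into the crate-wide `Error` enum. A minimal self-contained sketch of the same pattern, using a hypothetical `ParseFailure` error type in place of bdk's real dependency errors:

```rust
// Hypothetical inner error type standing in for e.g. `serde_json::Error`.
#[derive(Debug)]
struct ParseFailure(String);

#[derive(Debug)]
enum Error {
    Parse(ParseFailure),
}

// Same shape as bdk's macro: the one-argument form recurses into the
// two-argument form, which generates the `From` impl for the target enum.
macro_rules! impl_error {
    ( $from:ty, $to:ident ) => {
        impl_error!($from, $to, Error);
    };
    ( $from:ty, $to:ident, $impl_for:ty ) => {
        impl std::convert::From<$from> for $impl_for {
            fn from(err: $from) -> Self {
                <$impl_for>::$to(err)
            }
        }
    };
}

impl_error!(ParseFailure, Parse);

fn parse_number(s: &str) -> Result<u32, Error> {
    // `?` uses the macro-generated `From<ParseFailure>` impl to wrap
    // the inner error into `Error::Parse`.
    let n = s.parse::<u32>().map_err(|e| ParseFailure(e.to_string()))?;
    Ok(n)
}
```

Each `impl_error!(X, Variant)` line in the file expands to one such `From` impl, which is what lets the `from`-conversions in `?` work across every dependency's error type.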

src/keys/bip39.rs (new file)

@@ -0,0 +1,208 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! BIP-0039
// TODO: maybe write our own implementation of bip39? Seems stupid to have an extra dependency for
// something that should be fairly simple to re-implement.
use bitcoin::util::bip32;
use bitcoin::Network;
use miniscript::ScriptContext;
pub use bip39::{Language, Mnemonic};
type Seed = [u8; 64];
/// Type describing entropy length (aka word count) in the mnemonic
pub enum WordCount {
/// 12 words mnemonic (128 bits entropy)
Words12 = 128,
/// 15 words mnemonic (160 bits entropy)
Words15 = 160,
/// 18 words mnemonic (192 bits entropy)
Words18 = 192,
/// 21 words mnemonic (224 bits entropy)
Words21 = 224,
/// 24 words mnemonic (256 bits entropy)
Words24 = 256,
}
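The discriminants of `WordCount` are entropy sizes in bits, and each maps to a fixed mnemonic length because BIP-0039 appends a checksum of `entropy_bits / 32` bits and then splits the result into 11-bit groups, one word per group. A small sketch of that arithmetic (not part of bdk; just the rationale behind the variant names):

```rust
// BIP-0039: words = (entropy_bits + entropy_bits / 32) / 11
fn mnemonic_word_count(entropy_bits: usize) -> usize {
    let checksum_bits = entropy_bits / 32;
    (entropy_bits + checksum_bits) / 11
}
```

For example, 128 bits of entropy gets a 4-bit checksum, and 132 / 11 = 12 words, matching `Words12 = 128`.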
use super::{
any_network, DerivableKey, DescriptorKey, ExtendedKey, GeneratableKey, GeneratedKey, KeyError,
};
fn set_valid_on_any_network<Ctx: ScriptContext>(
descriptor_key: DescriptorKey<Ctx>,
) -> DescriptorKey<Ctx> {
// We have to pick one network to build the xprv, but since the bip39 standard doesn't
// encode the network, the xprv we create is actually valid everywhere. So we override the
// valid networks with `any_network()`.
descriptor_key.override_valid_networks(any_network())
}
/// Type for a BIP39 mnemonic with an optional passphrase
pub type MnemonicWithPassphrase = (Mnemonic, Option<String>);
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for Seed {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
Ok(bip32::ExtendedPrivKey::new_master(Network::Bitcoin, &self[..])?.into())
}
fn into_descriptor_key(
self,
source: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
let descriptor_key = self
.into_extended_key()?
.into_descriptor_key(source, derivation_path)?;
Ok(set_valid_on_any_network(descriptor_key))
}
}
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for MnemonicWithPassphrase {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
let (mnemonic, passphrase) = self;
let seed: Seed = mnemonic.to_seed(passphrase.as_deref().unwrap_or(""));
seed.into_extended_key()
}
fn into_descriptor_key(
self,
source: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
let descriptor_key = self
.into_extended_key()?
.into_descriptor_key(source, derivation_path)?;
Ok(set_valid_on_any_network(descriptor_key))
}
}
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> DerivableKey<Ctx> for Mnemonic {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
(self, None).into_extended_key()
}
fn into_descriptor_key(
self,
source: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
let descriptor_key = self
.into_extended_key()?
.into_descriptor_key(source, derivation_path)?;
Ok(set_valid_on_any_network(descriptor_key))
}
}
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
impl<Ctx: ScriptContext> GeneratableKey<Ctx> for Mnemonic {
type Entropy = [u8; 32];
type Options = (WordCount, Language);
type Error = Option<bip39::Error>;
fn generate_with_entropy(
(word_count, language): Self::Options,
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
let entropy = &entropy.as_ref()[..(word_count as usize / 8)];
let mnemonic = Mnemonic::from_entropy_in(language, entropy)?;
Ok(GeneratedKey::new(mnemonic, any_network()))
}
}
#[cfg(test)]
mod test {
use std::str::FromStr;
use bitcoin::util::bip32;
use bip39::{Language, Mnemonic};
use crate::keys::{any_network, GeneratableKey, GeneratedKey};
use super::WordCount;
#[test]
fn test_keys_bip39_mnemonic() {
let mnemonic =
"aim bunker wash balance finish force paper analyst cabin spoon stable organ";
let mnemonic = Mnemonic::parse_in(Language::English, mnemonic).unwrap();
let path = bip32::DerivationPath::from_str("m/44'/0'/0'/0").unwrap();
let key = (mnemonic, path);
let (desc, keys, networks) = crate::descriptor!(wpkh(key)).unwrap();
assert_eq!(desc.to_string(), "wpkh([be83839f/44'/0'/0']xpub6DCQ1YcqvZtSwGWMrwHELPehjWV3f2MGZ69yBADTxFEUAoLwb5Mp5GniQK6tTp3AgbngVz9zEFbBJUPVnkG7LFYt8QMTfbrNqs6FNEwAPKA/0/*)#0r8v4nkv");
assert_eq!(keys.len(), 1);
assert_eq!(networks.len(), 4);
}
#[test]
fn test_keys_bip39_mnemonic_passphrase() {
let mnemonic =
"aim bunker wash balance finish force paper analyst cabin spoon stable organ";
let mnemonic = Mnemonic::parse_in(Language::English, mnemonic).unwrap();
let path = bip32::DerivationPath::from_str("m/44'/0'/0'/0").unwrap();
let key = ((mnemonic, Some("passphrase".into())), path);
let (desc, keys, networks) = crate::descriptor!(wpkh(key)).unwrap();
assert_eq!(desc.to_string(), "wpkh([8f6cb80c/44'/0'/0']xpub6DWYS8bbihFevy29M4cbw4ZR3P5E12jB8R88gBDWCTCNpYiDHhYWNywrCF9VZQYagzPmsZpxXpytzSoxynyeFr4ZyzheVjnpLKuse4fiwZw/0/*)#h0j0tg5m");
assert_eq!(keys.len(), 1);
assert_eq!(networks.len(), 4);
}
#[test]
fn test_keys_generate_bip39() {
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate_with_entropy(
(WordCount::Words12, Language::English),
crate::keys::test::TEST_ENTROPY,
)
.unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());
assert_eq!(
generated_mnemonic.to_string(),
"primary fetch primary fetch primary fetch primary fetch primary fetch primary fever"
);
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate_with_entropy(
(WordCount::Words24, Language::English),
crate::keys::test::TEST_ENTROPY,
)
.unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());
assert_eq!(generated_mnemonic.to_string(), "primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary fetch primary foster");
}
#[test]
fn test_keys_generate_bip39_random() {
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate((WordCount::Words12, Language::English)).unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());
let generated_mnemonic: GeneratedKey<_, miniscript::Segwitv0> =
Mnemonic::generate((WordCount::Words24, Language::English)).unwrap();
assert_eq!(generated_mnemonic.valid_networks, any_network());
}
}

src/keys/mod.rs (new file)

@@ -0,0 +1,934 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Key formats
use std::any::TypeId;
use std::collections::HashSet;
use std::marker::PhantomData;
use std::ops::Deref;
use std::str::FromStr;
use bitcoin::secp256k1::{self, Secp256k1, Signing};
use bitcoin::util::bip32;
use bitcoin::{Network, PrivateKey, PublicKey};
use miniscript::descriptor::{Descriptor, DescriptorXKey, Wildcard};
pub use miniscript::descriptor::{
DescriptorPublicKey, DescriptorSecretKey, DescriptorSinglePriv, DescriptorSinglePub, KeyMap,
SortedMultiVec,
};
pub use miniscript::ScriptContext;
use miniscript::{Miniscript, Terminal};
use crate::descriptor::{CheckMiniscript, DescriptorError};
use crate::wallet::utils::SecpCtx;
#[cfg(feature = "keys-bip39")]
#[cfg_attr(docsrs, doc(cfg(feature = "keys-bip39")))]
pub mod bip39;
/// Set of valid networks for a key
pub type ValidNetworks = HashSet<Network>;
/// Create a set containing mainnet, testnet, signet and regtest
pub fn any_network() -> ValidNetworks {
vec![
Network::Bitcoin,
Network::Testnet,
Network::Regtest,
Network::Signet,
]
.into_iter()
.collect()
}
/// Create a set only containing mainnet
pub fn mainnet_network() -> ValidNetworks {
vec![Network::Bitcoin].into_iter().collect()
}
/// Create a set containing testnet, signet and regtest
pub fn test_networks() -> ValidNetworks {
vec![Network::Testnet, Network::Regtest, Network::Signet]
.into_iter()
.collect()
}
/// Compute the intersection of two sets
pub fn merge_networks(a: &ValidNetworks, b: &ValidNetworks) -> ValidNetworks {
a.intersection(b).cloned().collect()
}
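`merge_networks` is how key metadata composes: when two keys are combined in one descriptor, the result is valid only on networks where *both* are valid, i.e. a set intersection. A standalone sketch with string labels standing in for `bitcoin::Network`:

```rust
use std::collections::HashSet;

// Same logic as bdk's `merge_networks`, over plain &str labels
// (hypothetical stand-ins for the `Network` enum).
fn merge(a: &HashSet<&'static str>, b: &HashSet<&'static str>) -> HashSet<&'static str> {
    a.intersection(b).cloned().collect()
}
```

So merging an `any_network()`-style set with a mainnet-only set leaves only mainnet, which is exactly what constrains a descriptor built from one unrestricted key and one mainnet-only key.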
/// Container for public or secret keys
#[derive(Debug)]
pub enum DescriptorKey<Ctx: ScriptContext> {
#[doc(hidden)]
Public(DescriptorPublicKey, ValidNetworks, PhantomData<Ctx>),
#[doc(hidden)]
Secret(DescriptorSecretKey, ValidNetworks, PhantomData<Ctx>),
}
impl<Ctx: ScriptContext> DescriptorKey<Ctx> {
/// Create an instance given a public key and a set of valid networks
pub fn from_public(public: DescriptorPublicKey, networks: ValidNetworks) -> Self {
DescriptorKey::Public(public, networks, PhantomData)
}
/// Create an instance given a secret key and a set of valid networks
pub fn from_secret(secret: DescriptorSecretKey, networks: ValidNetworks) -> Self {
DescriptorKey::Secret(secret, networks, PhantomData)
}
/// Override the computed set of valid networks
pub fn override_valid_networks(self, networks: ValidNetworks) -> Self {
match self {
DescriptorKey::Public(key, _, _) => DescriptorKey::Public(key, networks, PhantomData),
DescriptorKey::Secret(key, _, _) => DescriptorKey::Secret(key, networks, PhantomData),
}
}
// This method is used internally by `bdk::fragment!` and `bdk::descriptor!`. It has to be
// public because it is effectively called by external crates, once the macros are expanded,
// but since it is not meant to be part of the public api we hide it from the docs.
#[doc(hidden)]
pub fn extract(
self,
secp: &SecpCtx,
) -> Result<(DescriptorPublicKey, KeyMap, ValidNetworks), KeyError> {
match self {
DescriptorKey::Public(public, valid_networks, _) => {
Ok((public, KeyMap::default(), valid_networks))
}
DescriptorKey::Secret(secret, valid_networks, _) => {
let mut key_map = KeyMap::with_capacity(1);
let public = secret
.as_public(secp)
.map_err(|e| miniscript::Error::Unexpected(e.to_string()))?;
key_map.insert(public.clone(), secret);
Ok((public, key_map, valid_networks))
}
}
}
}
/// Enum representation of the known valid [`ScriptContext`]s
#[derive(Debug, Eq, PartialEq, Copy, Clone)]
pub enum ScriptContextEnum {
/// Legacy scripts
Legacy,
/// Segwitv0 scripts
Segwitv0,
}
impl ScriptContextEnum {
/// Returns whether the script context is [`ScriptContextEnum::Legacy`]
pub fn is_legacy(&self) -> bool {
self == &ScriptContextEnum::Legacy
}
/// Returns whether the script context is [`ScriptContextEnum::Segwitv0`]
pub fn is_segwit_v0(&self) -> bool {
self == &ScriptContextEnum::Segwitv0
}
}
/// Trait that adds extra useful methods to [`ScriptContext`]s
pub trait ExtScriptContext: ScriptContext {
/// Returns the [`ScriptContext`] as a [`ScriptContextEnum`]
fn as_enum() -> ScriptContextEnum;
/// Returns whether the script context is [`Legacy`](miniscript::Legacy)
fn is_legacy() -> bool {
Self::as_enum().is_legacy()
}
/// Returns whether the script context is [`Segwitv0`](miniscript::Segwitv0)
fn is_segwit_v0() -> bool {
Self::as_enum().is_segwit_v0()
}
}
impl<Ctx: ScriptContext + 'static> ExtScriptContext for Ctx {
fn as_enum() -> ScriptContextEnum {
match TypeId::of::<Ctx>() {
t if t == TypeId::of::<miniscript::Legacy>() => ScriptContextEnum::Legacy,
t if t == TypeId::of::<miniscript::Segwitv0>() => ScriptContextEnum::Segwitv0,
_ => unimplemented!("Unknown ScriptContext type"),
}
}
}
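The blanket `ExtScriptContext` impl above turns a compile-time marker type into a runtime enum value by comparing `TypeId`s. A minimal sketch of that trick in isolation (toy marker types, not miniscript's):

```rust
use std::any::TypeId;

// Zero-sized marker types, analogous to `miniscript::Legacy` / `Segwitv0`.
struct Legacy;
struct Segwitv0;

#[derive(Debug, PartialEq)]
enum ScriptContextEnum {
    Legacy,
    Segwitv0,
}

// Map the generic parameter's TypeId to an enum value at runtime.
fn context_of<Ctx: 'static>() -> ScriptContextEnum {
    match TypeId::of::<Ctx>() {
        t if t == TypeId::of::<Legacy>() => ScriptContextEnum::Legacy,
        t if t == TypeId::of::<Segwitv0>() => ScriptContextEnum::Segwitv0,
        _ => unimplemented!("unknown script context"),
    }
}
```

The `'static` bound is required because `TypeId::of` is only defined for types without borrowed data; it is also why the real impl is written for `Ctx: ScriptContext + 'static`.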
/// Trait for objects that can be turned into a public or secret [`DescriptorKey`]
///
/// The generic type `Ctx` is used to define the context in which the key is valid: some key
/// formats, like the mnemonics used by Electrum wallets, encode internally whether the wallet is
/// legacy or segwit. Thus, trying to turn a valid legacy mnemonic into a `DescriptorKey`
/// that would become part of a segwit descriptor should fail.
///
/// For key types that do care about this, the [`ExtScriptContext`] trait provides some useful
/// methods that can be used to check at runtime which `Ctx` is being used.
///
/// For key types that can do this check statically (because they can only work within a
/// single `Ctx`), the "specialized" trait can be implemented to make the compiler handle the type
/// checking.
///
/// Keys also have control over the networks they support: constructing the return object with
/// [`DescriptorKey::from_public`] or [`DescriptorKey::from_secret`] allows specifying a set of
/// [`ValidNetworks`].
///
/// ## Examples
///
/// Key type valid in any context:
///
/// ```
/// use bdk::bitcoin::PublicKey;
///
/// use bdk::keys::{DescriptorKey, IntoDescriptorKey, KeyError, ScriptContext};
///
/// pub struct MyKeyType {
/// pubkey: PublicKey,
/// }
///
/// impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for MyKeyType {
/// fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
/// self.pubkey.into_descriptor_key()
/// }
/// }
/// ```
///
/// Key type that is only valid on mainnet:
///
/// ```
/// use bdk::bitcoin::PublicKey;
///
/// use bdk::keys::{
/// mainnet_network, DescriptorKey, DescriptorPublicKey, DescriptorSinglePub,
/// IntoDescriptorKey, KeyError, ScriptContext,
/// };
///
/// pub struct MyKeyType {
/// pubkey: PublicKey,
/// }
///
/// impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for MyKeyType {
/// fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
/// Ok(DescriptorKey::from_public(
/// DescriptorPublicKey::SinglePub(DescriptorSinglePub {
/// origin: None,
/// key: self.pubkey,
/// }),
/// mainnet_network(),
/// ))
/// }
/// }
/// ```
///
/// Key type that internally encodes in which context it's valid. The context is checked at runtime:
///
/// ```
/// use bdk::bitcoin::PublicKey;
///
/// use bdk::keys::{DescriptorKey, ExtScriptContext, IntoDescriptorKey, KeyError, ScriptContext};
///
/// pub struct MyKeyType {
/// is_legacy: bool,
/// pubkey: PublicKey,
/// }
///
/// impl<Ctx: ScriptContext + 'static> IntoDescriptorKey<Ctx> for MyKeyType {
/// fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
/// if Ctx::is_legacy() == self.is_legacy {
/// self.pubkey.into_descriptor_key()
/// } else {
/// Err(KeyError::InvalidScriptContext)
/// }
/// }
/// }
/// ```
///
/// Key type that can only work within [`miniscript::Segwitv0`] context. Only the specialized version
/// of the trait is implemented.
///
/// This example deliberately fails to compile, to demonstrate how the compiler can catch when keys
/// are misused. In this case, the "segwit-only" key is used to build a `pkh()` descriptor, which
/// makes the compiler (correctly) fail.
///
/// ```compile_fail
/// use bdk::bitcoin::PublicKey;
/// use std::str::FromStr;
///
/// use bdk::keys::{DescriptorKey, IntoDescriptorKey, KeyError};
///
/// pub struct MySegwitOnlyKeyType {
/// pubkey: PublicKey,
/// }
///
/// impl IntoDescriptorKey<bdk::miniscript::Segwitv0> for MySegwitOnlyKeyType {
/// fn into_descriptor_key(self) -> Result<DescriptorKey<bdk::miniscript::Segwitv0>, KeyError> {
/// self.pubkey.into_descriptor_key()
/// }
/// }
///
/// let key = MySegwitOnlyKeyType {
/// pubkey: PublicKey::from_str("...")?,
/// };
/// let (descriptor, _, _) = bdk::descriptor!(pkh(key))?;
/// // ^^^^^ changing this to `wpkh` would make it compile
///
/// # Ok::<_, Box<dyn std::error::Error>>(())
/// ```
pub trait IntoDescriptorKey<Ctx: ScriptContext>: Sized {
/// Turn the key into a [`DescriptorKey`] within the requested [`ScriptContext`]
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError>;
}
/// Enum for extended keys that can be either `xprv` or `xpub`
///
/// An instance of [`ExtendedKey`] can be constructed from an [`ExtendedPrivKey`](bip32::ExtendedPrivKey)
/// or an [`ExtendedPubKey`](bip32::ExtendedPubKey) by using the `From` trait.
///
/// Defaults to the [`Legacy`](miniscript::Legacy) context.
pub enum ExtendedKey<Ctx: ScriptContext = miniscript::Legacy> {
/// A private extended key, aka an `xprv`
Private((bip32::ExtendedPrivKey, PhantomData<Ctx>)),
/// A public extended key, aka an `xpub`
Public((bip32::ExtendedPubKey, PhantomData<Ctx>)),
}
impl<Ctx: ScriptContext> ExtendedKey<Ctx> {
/// Return whether or not the key contains the private data
pub fn has_secret(&self) -> bool {
match self {
ExtendedKey::Private(_) => true,
ExtendedKey::Public(_) => false,
}
}
/// Transform the [`ExtendedKey`] into an [`ExtendedPrivKey`](bip32::ExtendedPrivKey) for the
/// given [`Network`], if the key contains the private data
pub fn into_xprv(self, network: Network) -> Option<bip32::ExtendedPrivKey> {
match self {
ExtendedKey::Private((mut xprv, _)) => {
xprv.network = network;
Some(xprv)
}
ExtendedKey::Public(_) => None,
}
}
/// Transform the [`ExtendedKey`] into an [`ExtendedPubKey`](bip32::ExtendedPubKey) for the
/// given [`Network`]
pub fn into_xpub<C: Signing>(
self,
network: bitcoin::Network,
secp: &Secp256k1<C>,
) -> bip32::ExtendedPubKey {
let mut xpub = match self {
ExtendedKey::Private((xprv, _)) => bip32::ExtendedPubKey::from_private(secp, &xprv),
ExtendedKey::Public((xpub, _)) => xpub,
};
xpub.network = network;
xpub
}
}
impl<Ctx: ScriptContext> From<bip32::ExtendedPubKey> for ExtendedKey<Ctx> {
fn from(xpub: bip32::ExtendedPubKey) -> Self {
ExtendedKey::Public((xpub, PhantomData))
}
}
impl<Ctx: ScriptContext> From<bip32::ExtendedPrivKey> for ExtendedKey<Ctx> {
fn from(xprv: bip32::ExtendedPrivKey) -> Self {
ExtendedKey::Private((xprv, PhantomData))
}
}
/// Trait for keys that can be derived.
///
/// When extra metadata are provided, a [`DerivableKey`] can be transformed into a
/// [`DescriptorKey`]: the trait [`IntoDescriptorKey`] is automatically implemented
/// for `(DerivableKey, DerivationPath)` and
/// `(DerivableKey, KeySource, DerivationPath)` tuples.
///
/// For key types that don't encode any indication about the path to use (like bip39), it's
/// generally recommended to implement this trait instead of [`IntoDescriptorKey`]. The same
/// rules regarding script context and valid networks apply.
///
/// ## Examples
///
/// Key types that can be directly converted into an [`ExtendedPrivKey`] or
/// an [`ExtendedPubKey`] can implement only the required `into_extended_key()` method.
///
/// ```
/// use bdk::bitcoin;
/// use bdk::bitcoin::util::bip32;
/// use bdk::keys::{DerivableKey, ExtendedKey, KeyError, ScriptContext};
///
/// struct MyCustomKeyType {
/// key_data: bitcoin::PrivateKey,
/// chain_code: Vec<u8>,
/// network: bitcoin::Network,
/// }
///
/// impl<Ctx: ScriptContext> DerivableKey<Ctx> for MyCustomKeyType {
/// fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
/// let xprv = bip32::ExtendedPrivKey {
/// network: self.network,
/// depth: 0,
/// parent_fingerprint: bip32::Fingerprint::default(),
/// private_key: self.key_data,
/// chain_code: bip32::ChainCode::from(self.chain_code.as_ref()),
/// child_number: bip32::ChildNumber::Normal { index: 0 },
/// };
///
/// xprv.into_extended_key()
/// }
/// }
/// ```
///
/// Types that don't internally encode the [`Network`](bitcoin::Network) in which they are valid need some extra
/// steps to override the set of valid networks, otherwise only the network specified in the
/// [`ExtendedPrivKey`] or [`ExtendedPubKey`] will be considered valid.
///
/// ```
/// use bdk::bitcoin;
/// use bdk::bitcoin::util::bip32;
/// use bdk::keys::{
/// any_network, DerivableKey, DescriptorKey, ExtendedKey, KeyError, ScriptContext,
/// };
///
/// struct MyCustomKeyType {
/// key_data: bitcoin::PrivateKey,
/// chain_code: Vec<u8>,
/// }
///
/// impl<Ctx: ScriptContext> DerivableKey<Ctx> for MyCustomKeyType {
/// fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
/// let xprv = bip32::ExtendedPrivKey {
/// network: bitcoin::Network::Bitcoin, // pick an arbitrary network here
/// depth: 0,
/// parent_fingerprint: bip32::Fingerprint::default(),
/// private_key: self.key_data,
/// chain_code: bip32::ChainCode::from(self.chain_code.as_ref()),
/// child_number: bip32::ChildNumber::Normal { index: 0 },
/// };
///
/// xprv.into_extended_key()
/// }
///
/// fn into_descriptor_key(
/// self,
/// source: Option<bip32::KeySource>,
/// derivation_path: bip32::DerivationPath,
/// ) -> Result<DescriptorKey<Ctx>, KeyError> {
/// let descriptor_key = self
/// .into_extended_key()?
/// .into_descriptor_key(source, derivation_path)?;
///
/// // Override the set of valid networks here
/// Ok(descriptor_key.override_valid_networks(any_network()))
/// }
/// }
/// ```
///
/// [`DerivationPath`]: (bip32::DerivationPath)
/// [`ExtendedPrivKey`]: (bip32::ExtendedPrivKey)
/// [`ExtendedPubKey`]: (bip32::ExtendedPubKey)
pub trait DerivableKey<Ctx: ScriptContext = miniscript::Legacy>: Sized {
/// Consume `self` and turn it into an [`ExtendedKey`]
///
/// This can be used to get direct access to `xprv`s and `xpub`s for types that implement this trait,
/// like [`Mnemonic`](bip39::Mnemonic) when the `keys-bip39` feature is enabled.
#[cfg_attr(
feature = "keys-bip39",
doc = r##"
```rust
use bdk::bitcoin::Network;
use bdk::keys::{DerivableKey, ExtendedKey};
use bdk::keys::bip39::{Mnemonic, Language};
# fn main() -> Result<(), Box<dyn std::error::Error>> {
let xkey: ExtendedKey =
Mnemonic::parse_in(
Language::English,
"jelly crash boy whisper mouse ecology tuna soccer memory million news short",
)?
.into_extended_key()?;
let xprv = xkey.into_xprv(Network::Bitcoin).unwrap();
# Ok(()) }
```
"##
)]
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError>;
/// Consume `self` and turn it into a [`DescriptorKey`] by adding the extra metadata, such as
/// key origin and derivation path
fn into_descriptor_key(
self,
origin: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
match self.into_extended_key()? {
ExtendedKey::Private((xprv, _)) => DescriptorSecretKey::XPrv(DescriptorXKey {
origin,
xkey: xprv,
derivation_path,
wildcard: Wildcard::Unhardened,
})
.into_descriptor_key(),
ExtendedKey::Public((xpub, _)) => DescriptorPublicKey::XPub(DescriptorXKey {
origin,
xkey: xpub,
derivation_path,
wildcard: Wildcard::Unhardened,
})
.into_descriptor_key(),
}
}
}
/// Identity conversion
impl<Ctx: ScriptContext> DerivableKey<Ctx> for ExtendedKey<Ctx> {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
Ok(self)
}
}
impl<Ctx: ScriptContext> DerivableKey<Ctx> for bip32::ExtendedPubKey {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
Ok(self.into())
}
}
impl<Ctx: ScriptContext> DerivableKey<Ctx> for bip32::ExtendedPrivKey {
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
Ok(self.into())
}
}
/// Output of a [`GeneratableKey`] key generation
pub struct GeneratedKey<K, Ctx: ScriptContext> {
key: K,
valid_networks: ValidNetworks,
phantom: PhantomData<Ctx>,
}
impl<K, Ctx: ScriptContext> GeneratedKey<K, Ctx> {
fn new(key: K, valid_networks: ValidNetworks) -> Self {
GeneratedKey {
key,
valid_networks,
phantom: PhantomData,
}
}
/// Consumes `self` and returns the key
pub fn into_key(self) -> K {
self.key
}
}
impl<K, Ctx: ScriptContext> Deref for GeneratedKey<K, Ctx> {
type Target = K;
fn deref(&self) -> &Self::Target {
&self.key
}
}
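The `Deref` impl on `GeneratedKey` is the standard "transparent wrapper" pattern: the wrapper carries extra metadata (`valid_networks`) but callers can still invoke the inner key's methods directly. A toy sketch of the same shape (hypothetical `Generated` wrapper, not bdk's type):

```rust
use std::ops::Deref;

// A wrapper that adds metadata while delegating method calls to the key.
struct Generated<K> {
    key: K,
    origin: &'static str,
}

impl<K> Deref for Generated<K> {
    type Target = K;
    fn deref(&self) -> &K {
        &self.key
    }
}
```

Deref coercion means `generated.some_method()` resolves to `K::some_method` when the wrapper itself doesn't define it, which is why the bip39 tests can call `generated_mnemonic.to_string()` even though `GeneratedKey` has no `to_string` of its own.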
// Make generated "derivable" keys themselves "derivable". Also make sure they are assigned the
// right `valid_networks`.
impl<Ctx, K> DerivableKey<Ctx> for GeneratedKey<K, Ctx>
where
Ctx: ScriptContext,
K: DerivableKey<Ctx>,
{
fn into_extended_key(self) -> Result<ExtendedKey<Ctx>, KeyError> {
self.key.into_extended_key()
}
fn into_descriptor_key(
self,
origin: Option<bip32::KeySource>,
derivation_path: bip32::DerivationPath,
) -> Result<DescriptorKey<Ctx>, KeyError> {
let descriptor_key = self.key.into_descriptor_key(origin, derivation_path)?;
Ok(descriptor_key.override_valid_networks(self.valid_networks))
}
}
// Make generated keys directly usable in descriptors, and make sure they get assigned the right
// `valid_networks`.
impl<Ctx, K> IntoDescriptorKey<Ctx> for GeneratedKey<K, Ctx>
where
Ctx: ScriptContext,
K: IntoDescriptorKey<Ctx>,
{
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
let desc_key = self.key.into_descriptor_key()?;
Ok(desc_key.override_valid_networks(self.valid_networks))
}
}
/// Trait for keys that can be generated
///
/// The same rules about [`ScriptContext`] and [`ValidNetworks`] from [`IntoDescriptorKey`] apply.
///
/// This trait is particularly useful when combined with [`DerivableKey`]: if `Self`
/// implements it, the returned [`GeneratedKey`] will also implement it. The same is true for
/// [`IntoDescriptorKey`]: the generated keys can be directly used in descriptors if `Self` is also
/// [`IntoDescriptorKey`].
pub trait GeneratableKey<Ctx: ScriptContext>: Sized {
/// Type specifying the amount of entropy required, e.g. `[u8; 32]`
type Entropy: AsMut<[u8]> + Default;
/// Extra options required by `generate_with_entropy`
type Options;
/// Returned error in case of failure
type Error: std::fmt::Debug;
/// Generate a key given the extra options and the entropy
fn generate_with_entropy(
options: Self::Options,
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error>;
/// Generate a key given the options with a random entropy
fn generate(options: Self::Options) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
use rand::{thread_rng, Rng};
let mut entropy = Self::Entropy::default();
thread_rng().fill(entropy.as_mut());
Self::generate_with_entropy(options, entropy)
}
}
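The provided `generate` method shows a useful trait-design idiom: the implementor only supplies `generate_with_entropy`, while the trait's default method zero-initializes an `Entropy` buffer via `Default`, fills it, and delegates. A self-contained sketch with a toy key type and a deterministic filler standing in for `rand::thread_rng` (which the real code uses):

```rust
trait Generatable: Sized {
    type Entropy: AsMut<[u8]> + Default;

    fn generate_with_entropy(entropy: Self::Entropy) -> Self;

    // Provided method mirroring bdk's `generate`: build the buffer via
    // `Default`, fill it, then delegate.
    fn generate() -> Self {
        let mut entropy = Self::Entropy::default();
        for (i, byte) in entropy.as_mut().iter_mut().enumerate() {
            *byte = i as u8; // deterministic stand-in for a CSPRNG fill
        }
        Self::generate_with_entropy(entropy)
    }
}

struct ToyKey([u8; 4]);

impl Generatable for ToyKey {
    type Entropy = [u8; 4];

    fn generate_with_entropy(entropy: [u8; 4]) -> Self {
        ToyKey(entropy)
    }
}
```

The `AsMut<[u8]>` bound is what lets the default method fill any implementor-chosen buffer size, exactly as the bip39 `Mnemonic` impl above picks `[u8; 32]` and then truncates to the requested word count's entropy.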
/// Trait that allows generating a key with the default options
///
/// This trait is automatically implemented if the [`GeneratableKey::Options`] implements [`Default`].
pub trait GeneratableDefaultOptions<Ctx>: GeneratableKey<Ctx>
where
Ctx: ScriptContext,
<Self as GeneratableKey<Ctx>>::Options: Default,
{
/// Generate a key with the default options and a given entropy
fn generate_with_entropy_default(
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
Self::generate_with_entropy(Default::default(), entropy)
}
/// Generate a key with the default options and a random entropy
fn generate_default() -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
Self::generate(Default::default())
}
}
/// Automatic implementation of [`GeneratableDefaultOptions`] for [`GeneratableKey`]s where
/// `Options` implements `Default`
impl<Ctx, K> GeneratableDefaultOptions<Ctx> for K
where
Ctx: ScriptContext,
K: GeneratableKey<Ctx>,
<K as GeneratableKey<Ctx>>::Options: Default,
{
}
impl<Ctx: ScriptContext> GeneratableKey<Ctx> for bip32::ExtendedPrivKey {
type Entropy = [u8; 32];
type Options = ();
type Error = bip32::Error;
fn generate_with_entropy(
_: Self::Options,
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
// pick an arbitrary network here, but say that we support all of them
let xprv = bip32::ExtendedPrivKey::new_master(Network::Bitcoin, entropy.as_ref())?;
Ok(GeneratedKey::new(xprv, any_network()))
}
}
/// Options for generating a [`PrivateKey`]
///
/// Defaults to creating compressed keys, which save on-chain bytes and fees
#[derive(Debug, Copy, Clone)]
pub struct PrivateKeyGenerateOptions {
/// Whether the generated key should be "compressed" or not
pub compressed: bool,
}
impl Default for PrivateKeyGenerateOptions {
fn default() -> Self {
PrivateKeyGenerateOptions { compressed: true }
}
}
impl<Ctx: ScriptContext> GeneratableKey<Ctx> for PrivateKey {
type Entropy = [u8; secp256k1::constants::SECRET_KEY_SIZE];
type Options = PrivateKeyGenerateOptions;
type Error = bip32::Error;
fn generate_with_entropy(
options: Self::Options,
entropy: Self::Entropy,
) -> Result<GeneratedKey<Self, Ctx>, Self::Error> {
// pick an arbitrary network here, but say that we support all of them
let key = secp256k1::SecretKey::from_slice(&entropy)?;
let private_key = PrivateKey {
compressed: options.compressed,
network: Network::Bitcoin,
key,
};
Ok(GeneratedKey::new(private_key, any_network()))
}
}
impl<Ctx: ScriptContext, T: DerivableKey<Ctx>> IntoDescriptorKey<Ctx>
for (T, bip32::DerivationPath)
{
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
self.0.into_descriptor_key(None, self.1)
}
}
impl<Ctx: ScriptContext, T: DerivableKey<Ctx>> IntoDescriptorKey<Ctx>
for (T, bip32::KeySource, bip32::DerivationPath)
{
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
self.0.into_descriptor_key(Some(self.1), self.2)
}
}
fn expand_multi_keys<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
pks: Vec<Pk>,
secp: &SecpCtx,
) -> Result<(Vec<DescriptorPublicKey>, KeyMap, ValidNetworks), KeyError> {
let (pks, key_maps_networks): (Vec<_>, Vec<_>) = pks
.into_iter()
.map(|key| key.into_descriptor_key()?.extract(secp))
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.map(|(a, b, c)| (a, (b, c)))
.unzip();
let (key_map, valid_networks) = key_maps_networks.into_iter().fold(
(KeyMap::default(), any_network()),
|(mut keys_acc, net_acc), (key, net)| {
keys_acc.extend(key.into_iter());
let net_acc = merge_networks(&net_acc, &net);
(keys_acc, net_acc)
},
);
Ok((pks, key_map, valid_networks))
}
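In `expand_multi_keys`, the fold starts from `any_network()` and narrows it with `merge_networks` for every key, so the accumulated `ValidNetworks` behaves as a running set intersection. A std-only sketch of that accumulation (names here are local stand-ins for bdk's types, not its API):

```rust
use std::collections::HashSet;

type Networks = HashSet<&'static str>;

// Stand-in for any_network(): the full set of supported networks.
fn any_network() -> Networks {
    ["bitcoin", "testnet", "regtest"].iter().copied().collect()
}

// Stand-in for merge_networks: keep only networks valid for both operands.
fn merge_networks(a: &Networks, b: &Networks) -> Networks {
    a.intersection(b).copied().collect()
}

fn main() {
    let per_key: Vec<Networks> = vec![
        any_network(),                                    // e.g. a key valid anywhere
        ["testnet", "regtest"].iter().copied().collect(), // e.g. a testnet-only key
    ];
    // The fold mirrors the one above: each key can only restrict the set.
    let valid = per_key
        .into_iter()
        .fold(any_network(), |acc, net| merge_networks(&acc, &net));
    println!("{} networks remain", valid.len());
}
```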
// Used internally by `bdk::fragment!` to build `pk_k()` fragments
#[doc(hidden)]
pub fn make_pk<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
descriptor_key: Pk,
secp: &SecpCtx,
) -> Result<(Miniscript<DescriptorPublicKey, Ctx>, KeyMap, ValidNetworks), DescriptorError> {
let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
let minisc = Miniscript::from_ast(Terminal::PkK(key))?;
minisc.check_miniscript()?;
Ok((minisc, key_map, valid_networks))
}
// Used internally by `bdk::fragment!` to build `pk_h()` fragments
#[doc(hidden)]
pub fn make_pkh<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
descriptor_key: Pk,
secp: &SecpCtx,
) -> Result<(Miniscript<DescriptorPublicKey, Ctx>, KeyMap, ValidNetworks), DescriptorError> {
let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
let minisc = Miniscript::from_ast(Terminal::PkH(key))?;
minisc.check_miniscript()?;
Ok((minisc, key_map, valid_networks))
}
// Used internally by `bdk::fragment!` to build `multi()` fragments
#[doc(hidden)]
pub fn make_multi<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
thresh: usize,
pks: Vec<Pk>,
secp: &SecpCtx,
) -> Result<(Miniscript<DescriptorPublicKey, Ctx>, KeyMap, ValidNetworks), DescriptorError> {
let (pks, key_map, valid_networks) = expand_multi_keys(pks, secp)?;
let minisc = Miniscript::from_ast(Terminal::Multi(thresh, pks))?;
minisc.check_miniscript()?;
Ok((minisc, key_map, valid_networks))
}
// Used internally by `bdk::descriptor!` to build `sortedmulti()` fragments
#[doc(hidden)]
pub fn make_sortedmulti<Pk, Ctx, F>(
thresh: usize,
pks: Vec<Pk>,
build_desc: F,
secp: &SecpCtx,
) -> Result<(Descriptor<DescriptorPublicKey>, KeyMap, ValidNetworks), DescriptorError>
where
Pk: IntoDescriptorKey<Ctx>,
Ctx: ScriptContext,
F: Fn(
usize,
Vec<DescriptorPublicKey>,
) -> Result<(Descriptor<DescriptorPublicKey>, PhantomData<Ctx>), DescriptorError>,
{
let (pks, key_map, valid_networks) = expand_multi_keys(pks, secp)?;
let descriptor = build_desc(thresh, pks)?.0;
Ok((descriptor, key_map, valid_networks))
}
/// The "identity" conversion is used internally by some `bdk::fragment`s
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for DescriptorKey<Ctx> {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
Ok(self)
}
}
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for DescriptorPublicKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
let networks = match self {
DescriptorPublicKey::SinglePub(_) => any_network(),
DescriptorPublicKey::XPub(DescriptorXKey { xkey, .. })
if xkey.network == Network::Bitcoin =>
{
mainnet_network()
}
_ => test_networks(),
};
Ok(DescriptorKey::from_public(self, networks))
}
}
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for PublicKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
DescriptorPublicKey::SinglePub(DescriptorSinglePub {
key: self,
origin: None,
})
.into_descriptor_key()
}
}
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for DescriptorSecretKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
let networks = match &self {
DescriptorSecretKey::SinglePriv(sk) if sk.key.network == Network::Bitcoin => {
mainnet_network()
}
DescriptorSecretKey::XPrv(DescriptorXKey { xkey, .. })
if xkey.network == Network::Bitcoin =>
{
mainnet_network()
}
_ => test_networks(),
};
Ok(DescriptorKey::from_secret(self, networks))
}
}
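Both `IntoDescriptorKey` impls above apply the same rule: a key serialized for mainnet is restricted to mainnet only, while anything else is accepted on all test networks. A std-only sketch of that classification (local enums standing in for `bitcoin::Network` and the `ValidNetworks` helpers):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Network {
    Bitcoin,
    Testnet,
    Regtest,
}

// Stand-in for the mainnet_network()/test_networks() split used above.
fn valid_networks(key_network: Network) -> Vec<Network> {
    match key_network {
        // A mainnet-serialized key must only be used on mainnet.
        Network::Bitcoin => vec![Network::Bitcoin],
        // Testnet-serialized keys are shared by all test networks.
        _ => vec![Network::Testnet, Network::Regtest],
    }
}

fn main() {
    println!("{:?}", valid_networks(Network::Bitcoin)); // [Bitcoin]
    println!("{:?}", valid_networks(Network::Testnet)); // [Testnet, Regtest]
}
```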
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for &'_ str {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
DescriptorSecretKey::from_str(self)
.map_err(|e| KeyError::Message(e.to_string()))?
.into_descriptor_key()
}
}
impl<Ctx: ScriptContext> IntoDescriptorKey<Ctx> for PrivateKey {
fn into_descriptor_key(self) -> Result<DescriptorKey<Ctx>, KeyError> {
DescriptorSecretKey::SinglePriv(DescriptorSinglePriv {
key: self,
origin: None,
})
.into_descriptor_key()
}
}
/// Errors thrown while working with [`keys`](crate::keys)
#[derive(Debug)]
pub enum KeyError {
/// The key cannot exist in the given script context
InvalidScriptContext,
/// The key is not valid for the given network
InvalidNetwork,
/// The key has an invalid checksum
InvalidChecksum,
/// Custom error message
Message(String),
/// BIP32 error
Bip32(bitcoin::util::bip32::Error),
/// Miniscript error
Miniscript(miniscript::Error),
}
impl_error!(miniscript::Error, Miniscript, KeyError);
impl_error!(bitcoin::util::bip32::Error, Bip32, KeyError);
impl std::fmt::Display for KeyError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for KeyError {}
#[cfg(test)]
pub mod test {
use bitcoin::util::bip32;
use super::*;
pub const TEST_ENTROPY: [u8; 32] = [0xAA; 32];
#[test]
fn test_keys_generate_xprv() {
let generated_xprv: GeneratedKey<_, miniscript::Segwitv0> =
bip32::ExtendedPrivKey::generate_with_entropy_default(TEST_ENTROPY).unwrap();
assert_eq!(generated_xprv.valid_networks, any_network());
assert_eq!(generated_xprv.to_string(), "xprv9s21ZrQH143K4Xr1cJyqTvuL2FWR8eicgY9boWqMBv8MDVUZ65AXHnzBrK1nyomu6wdcabRgmGTaAKawvhAno1V5FowGpTLVx3jxzE5uk3Q");
}
#[test]
fn test_keys_generate_wif() {
let generated_wif: GeneratedKey<_, miniscript::Segwitv0> =
bitcoin::PrivateKey::generate_with_entropy_default(TEST_ENTROPY).unwrap();
assert_eq!(generated_wif.valid_networks, any_network());
assert_eq!(
generated_wif.to_string(),
"L2wTu6hQrnDMiFNWA5na6jB12ErGQqtXwqpSL7aWquJaZG8Ai3ch"
);
}
}


@@ -1,3 +1,203 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
// rustdoc will warn if there are missing docs
#![warn(missing_docs)]
// only enables the `doc_cfg` feature when
// the `docsrs` configuration attribute is defined
#![cfg_attr(docsrs, feature(doc_cfg))]
//! A modern, lightweight, descriptor-based wallet library written in Rust.
//!
//! # About
//!
//! The BDK library aims to be the core building block for Bitcoin wallets of any kind.
//!
//! * It uses [Miniscript](https://github.com/rust-bitcoin/rust-miniscript) to support descriptors with generalized conditions. This exact same library can be used to build
//! single-sig wallets, multisigs, timelocked contracts and more.
//! * It supports multiple blockchain backends and databases, allowing developers to choose exactly what's right for their projects.
//! * It is built to be cross-platform: the core logic works on desktop, mobile, and even WebAssembly.
//! * It is very easy to extend: developers can implement customized logic for blockchain backends, databases, signers, coin selection, and more, without having to fork and modify this library.
//!
//! # A Tour of BDK
//!
//! BDK consists of a number of modules that provide a range of functionality
//! essential for implementing descriptor based Bitcoin wallet applications in Rust. In this
//! section, we will take a brief tour of BDK, summarizing the major APIs and
//! their uses.
//!
//! The easiest way to get started is to add bdk to your dependencies with the default features.
//! The default features include a simple key-value database ([`sled`](sled)) to cache
//! blockchain data and an [electrum](https://docs.rs/electrum-client/) blockchain client to
//! interact with the bitcoin P2P network.
//!
//! ```toml
//! bdk = "0.15.0"
//! ```
#![cfg_attr(
feature = "electrum",
doc = r##"
## Sync the balance of a descriptor
### Example
```no_run
use bdk::Wallet;
use bdk::database::MemoryDatabase;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;
fn main() -> Result<(), bdk::Error> {
let client = Client::new("ssl://electrum.blockstream.info:60002")?;
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;
wallet.sync(noop_progress(), None)?;
println!("Descriptor balance: {} SAT", wallet.get_balance()?);
Ok(())
}
```
"##
)]
//!
//! ## Generate a few addresses
//!
//! ### Example
//! ```
//! use bdk::{Wallet};
//! use bdk::database::MemoryDatabase;
//! use bdk::wallet::AddressIndex::New;
//!
//! fn main() -> Result<(), bdk::Error> {
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
//! bitcoin::Network::Testnet,
//! MemoryDatabase::default(),
//! )?;
//!
//! println!("Address #0: {}", wallet.get_address(New)?);
//! println!("Address #1: {}", wallet.get_address(New)?);
//! println!("Address #2: {}", wallet.get_address(New)?);
//!
//! Ok(())
//! }
//! ```
#![cfg_attr(
feature = "electrum",
doc = r##"
## Create a transaction
### Example
```no_run
use bdk::{FeeRate, Wallet};
use bdk::database::MemoryDatabase;
use bdk::blockchain::{noop_progress, ElectrumBlockchain};
use bdk::electrum_client::Client;
use bitcoin::consensus::serialize;
use bdk::wallet::AddressIndex::New;
fn main() -> Result<(), bdk::Error> {
let client = Client::new("ssl://electrum.blockstream.info:60002")?;
let wallet = Wallet::new(
"wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/0/*)",
Some("wpkh([c258d2e4/84h/1h/0h]tpubDDYkZojQFQjht8Tm4jsS3iuEmKjTiEGjG6KnuFNKKJb5A6ZUCUZKdvLdSDWofKi4ToRCwb9poe1XdqfUnP4jaJjCB2Zwv11ZLgSbnZSNecE/1/*)"),
bitcoin::Network::Testnet,
MemoryDatabase::default(),
ElectrumBlockchain::from(client)
)?;
wallet.sync(noop_progress(), None)?;
let send_to = wallet.get_address(New)?;
let (psbt, details) = {
let mut builder = wallet.build_tx();
builder
.add_recipient(send_to.script_pubkey(), 50_000)
.enable_rbf()
.do_not_spend_change()
.fee_rate(FeeRate::from_sat_per_vb(5.0));
builder.finish()?
};
println!("Transaction details: {:#?}", details);
println!("Unsigned PSBT: {}", &psbt);
Ok(())
}
```
"##
)]
//!
//! ## Sign a transaction
//!
//! ### Example
//! ```no_run
//! use std::str::FromStr;
//!
//! use bitcoin::util::psbt::PartiallySignedTransaction as Psbt;
//!
//! use bdk::{Wallet, SignOptions};
//! use bdk::database::MemoryDatabase;
//!
//! fn main() -> Result<(), bdk::Error> {
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tprv8griRPhA7342zfRyB6CqeKF8CJDXYu5pgnj1cjL1u2ngKcJha5jjTRimG82ABzJQ4MQe71CV54xfn25BbhCNfEGGJZnxvCDQCd6JkbvxW6h/1/*)"),
//! bitcoin::Network::Testnet,
//! MemoryDatabase::default(),
//! )?;
//!
//! let psbt = "...";
//! let mut psbt = Psbt::from_str(psbt)?;
//!
//! let finalized = wallet.sign(&mut psbt, SignOptions::default())?;
//!
//! Ok(())
//! }
//! ```
//!
//! # Feature flags
//!
//! BDK uses a set of [feature flags](https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section)
//! to reduce the amount of compiled code by allowing projects to only enable the features they need.
//! By default, BDK enables two internal features, `key-value-db` and `electrum`.
//!
//! If you are new to BDK we recommend that you use the default features, which enable
//! basic descriptor wallet functionality. More advanced users can disable the `default` features
//! (`--no-default-features`) and build the BDK library with only the features they need.
//! Below is a list of the available feature flags and the additional functionality they provide.
//!
//! * `all-keys`: all features for working with bitcoin keys
//! * `async-interface`: async functions in bdk traits
//! * `keys-bip39`: [BIP-39](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) mnemonic codes for generating deterministic keys
//!
//! ## Internal features
//!
//! These features do not expose any new API, but influence internal implementation aspects of
//! BDK.
//!
//! * `compact_filters`: [`compact_filters`](crate::blockchain::compact_filters) client protocol for interacting with the bitcoin P2P network
//! * `electrum`: [`electrum`](crate::blockchain::electrum) client protocol for interacting with electrum servers
//! * `esplora`: [`esplora`](crate::blockchain::esplora) client protocol for interacting with Blockstream [electrs](https://github.com/Blockstream/electrs) servers
//! * `key-value-db`: key-value [`database`](crate::database) based on [`sled`](crate::sled) for caching blockchain data
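//!
//! For example, to depend on BDK with only the `esplora` backend enabled
//! (a hypothetical selection; pick the feature list your project needs):
//!
//! ```toml
//! [dependencies.bdk]
//! version = "0.15.0"
//! default-features = false
//! features = ["esplora"]
//! ```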
pub extern crate bitcoin;
extern crate log;
pub extern crate miniscript;
@@ -5,38 +205,79 @@ extern crate serde;
#[macro_use]
extern crate serde_json;
#[cfg(test)]
#[macro_use]
extern crate lazy_static;
#[cfg(all(feature = "reqwest", feature = "ureq"))]
compile_error!("Features reqwest and ureq are mutually exclusive and cannot be enabled together");
#[cfg(all(feature = "async-interface", feature = "electrum"))]
compile_error!(
"Features async-interface and electrum are mutually exclusive and cannot be enabled together"
);
#[cfg(all(feature = "async-interface", feature = "ureq"))]
compile_error!(
"Features async-interface and ureq are mutually exclusive and cannot be enabled together"
);
#[cfg(all(feature = "async-interface", feature = "compact_filters"))]
compile_error!(
"Features async-interface and compact_filters are mutually exclusive and cannot be enabled together"
);
#[cfg(feature = "keys-bip39")]
extern crate bip39;
#[cfg(any(target_arch = "wasm32", feature = "async-interface"))]
#[macro_use]
extern crate async_trait;
#[macro_use]
extern crate bdk_macros;
#[cfg(feature = "compact_filters")]
extern crate lazy_static;
#[cfg(feature = "rpc")]
pub extern crate bitcoincore_rpc;
#[cfg(feature = "electrum")]
pub extern crate electrum_client;
#[cfg(feature = "electrum")]
pub use electrum_client::client::Client;
#[cfg(feature = "esplora")]
pub extern crate reqwest;
#[cfg(feature = "esplora")]
pub use blockchain::esplora::EsploraBlockchain;
#[cfg(feature = "key-value-db")]
pub extern crate sled;
#[cfg(feature = "cli-utils")]
pub mod cli;
#[cfg(feature = "sqlite")]
pub extern crate rusqlite;
#[allow(unused_imports)]
#[macro_use]
pub mod error;
pub(crate) mod error;
pub mod blockchain;
pub mod database;
pub mod descriptor;
pub mod psbt;
pub mod signer;
pub mod types;
#[cfg(feature = "test-md-docs")]
mod doctest;
pub mod keys;
pub(crate) mod psbt;
pub(crate) mod types;
pub mod wallet;
pub use descriptor::ExtendedDescriptor;
pub use wallet::{OfflineWallet, Wallet};
pub use descriptor::template;
pub use descriptor::HdKeyPaths;
pub use error::Error;
pub use types::*;
pub use wallet::address_validator;
pub use wallet::signer;
pub use wallet::signer::SignOptions;
pub use wallet::tx_builder::TxBuilder;
pub use wallet::Wallet;
/// Get the version of BDK at runtime
pub fn version() -> &'static str {
env!("CARGO_PKG_VERSION", "unknown")
}
// We should consider putting this under a feature flag but we need the macro in doctests so we need
// to wait until https://github.com/rust-lang/rust/issues/67295 is fixed.
//
// Stuff in here is too rough to document atm
#[doc(hidden)]
pub mod testutils;


@@ -1,271 +1,122 @@
use std::collections::BTreeMap;
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bitcoin::hashes::{hash160, Hash};
use bitcoin::util::bip143::SighashComponents;
use bitcoin::util::bip32::{DerivationPath, ExtendedPrivKey, Fingerprint};
use bitcoin::util::psbt;
use bitcoin::{PrivateKey, PublicKey, Script, SigHashType, Transaction};
use bitcoin::util::psbt::PartiallySignedTransaction as Psbt;
use bitcoin::TxOut;
use bitcoin::secp256k1::{self, All, Message, Secp256k1};
#[allow(unused_imports)]
use log::{debug, error, info, trace};
use miniscript::{BitcoinSig, MiniscriptKey, Satisfier};
use crate::descriptor::ExtendedDescriptor;
use crate::error::Error;
use crate::signer::Signer;
pub mod utils;
pub struct PSBTSatisfier<'a> {
input: &'a psbt::Input,
assume_height_reached: bool,
create_height: Option<u32>,
current_height: Option<u32>,
pub trait PsbtUtils {
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut>;
}
impl<'a> PSBTSatisfier<'a> {
pub fn new(
input: &'a psbt::Input,
assume_height_reached: bool,
create_height: Option<u32>,
current_height: Option<u32>,
) -> Self {
PSBTSatisfier {
input,
assume_height_reached,
create_height,
current_height,
impl PsbtUtils for Psbt {
#[allow(clippy::all)] // We want to allow `manual_map` but it is too new.
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut> {
let tx = &self.global.unsigned_tx;
if input_index >= tx.input.len() {
return None;
}
}
}
impl<'a> PSBTSatisfier<'a> {
fn parse_sig(rawsig: &Vec<u8>) -> Option<BitcoinSig> {
let (flag, sig) = rawsig.split_last().unwrap();
let flag = bitcoin::SigHashType::from_u32(*flag as u32);
let sig = match secp256k1::Signature::from_der(sig) {
Ok(sig) => sig,
Err(..) => return None,
};
Some((sig, flag))
}
}
// TODO: also support hash preimages through the "unknown" section of PSBT
impl<'a> Satisfier<bitcoin::PublicKey> for PSBTSatisfier<'a> {
// from https://docs.rs/miniscript/0.12.0/src/miniscript/psbt/mod.rs.html#96
fn lookup_sig(&self, pk: &bitcoin::PublicKey) -> Option<BitcoinSig> {
debug!("lookup_sig: {}", pk);
if let Some(rawsig) = self.input.partial_sigs.get(pk) {
Self::parse_sig(&rawsig)
if let Some(input) = self.inputs.get(input_index) {
if let Some(wit_utxo) = &input.witness_utxo {
Some(wit_utxo.clone())
} else if let Some(in_tx) = &input.non_witness_utxo {
Some(in_tx.output[tx.input[input_index].previous_output.vout as usize].clone())
} else {
None
}
} else {
None
}
}
fn lookup_pkh_pk(&self, hash: &hash160::Hash) -> Option<bitcoin::PublicKey> {
debug!("lookup_pkh_pk: {}", hash);
for (pk, _) in &self.input.partial_sigs {
if &pk.to_pubkeyhash() == hash {
return Some(*pk);
}
}
None
}
fn lookup_pkh_sig(&self, hash: &hash160::Hash) -> Option<(bitcoin::PublicKey, BitcoinSig)> {
debug!("lookup_pkh_sig: {}", hash);
for (pk, sig) in &self.input.partial_sigs {
if &pk.to_pubkeyhash() == hash {
return match Self::parse_sig(&sig) {
Some(bitcoinsig) => Some((*pk, bitcoinsig)),
None => None,
};
}
}
None
}
fn check_older(&self, height: u32) -> bool {
// TODO: also check if `nSequence` right
debug!("check_older: {}", height);
if let Some(current_height) = self.current_height {
// TODO: test >= / >
current_height as u64 >= self.create_height.unwrap_or(0) as u64 + height as u64
} else {
self.assume_height_reached
}
}
fn check_after(&self, height: u32) -> bool {
// TODO: also check if `nLockTime` is right
debug!("check_after: {}", height);
if let Some(current_height) = self.current_height {
current_height > height
} else {
self.assume_height_reached
}
}
}
#[derive(Debug)]
pub struct PSBTSigner<'a> {
tx: &'a Transaction,
secp: Secp256k1<All>,
#[cfg(test)]
mod test {
use crate::bitcoin::TxIn;
use crate::psbt::Psbt;
use crate::wallet::AddressIndex;
use crate::wallet::{get_funded_wallet, test::get_test_wpkh};
use crate::SignOptions;
use std::str::FromStr;
// psbt: &'b psbt::PartiallySignedTransaction,
extended_keys: BTreeMap<Fingerprint, ExtendedPrivKey>,
private_keys: BTreeMap<PublicKey, PrivateKey>,
}
// from bip 174
const PSBT_STR: &str = "cHNidP8BAKACAAAAAqsJSaCMWvfEm4IS9Bfi8Vqz9cM9zxU4IagTn4d6W3vkAAAAAAD+////qwlJoIxa98SbghL0F+LxWrP1wz3PFTghqBOfh3pbe+QBAAAAAP7///8CYDvqCwAAAAAZdqkUdopAu9dAy+gdmI5x3ipNXHE5ax2IrI4kAAAAAAAAGXapFG9GILVT+glechue4O/p+gOcykWXiKwAAAAAAAEHakcwRAIgR1lmF5fAGwNrJZKJSGhiGDR9iYZLcZ4ff89X0eURZYcCIFMJ6r9Wqk2Ikf/REf3xM286KdqGbX+EhtdVRs7tr5MZASEDXNxh/HupccC1AaZGoqg7ECy0OIEhfKaC3Ibi1z+ogpIAAQEgAOH1BQAAAAAXqRQ1RebjO4MsRwUPJNPuuTycA5SLx4cBBBYAFIXRNTfy4mVAWjTbr6nj3aAfuCMIAAAA";
impl<'a> PSBTSigner<'a> {
pub fn from_descriptor(tx: &'a Transaction, desc: &ExtendedDescriptor) -> Result<Self, Error> {
let secp = Secp256k1::gen_new();
let mut extended_keys = BTreeMap::new();
for xprv in desc.get_xprv() {
let fing = xprv.fingerprint(&secp);
extended_keys.insert(fing, xprv);
}
let mut private_keys = BTreeMap::new();
for privkey in desc.get_secret_keys() {
let pubkey = privkey.public_key(&secp);
private_keys.insert(pubkey, privkey);
}
Ok(PSBTSigner {
tx,
secp,
extended_keys,
private_keys,
})
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_psbt_input_legacy() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (wallet, _, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New).unwrap();
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.inputs.push(psbt_bip.inputs[0].clone());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
pub fn extend(&mut self, mut other: PSBTSigner) -> Result<(), Error> {
if self.tx.txid() != other.tx.txid() {
return Err(Error::DifferentTransactions);
}
self.extended_keys.append(&mut other.extended_keys);
self.private_keys.append(&mut other.private_keys);
Ok(())
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_psbt_input_segwit() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (wallet, _, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New).unwrap();
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.inputs.push(psbt_bip.inputs[1].clone());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
// TODO: temporary
pub fn all_public_keys(&self) -> impl IntoIterator<Item = &PublicKey> {
self.private_keys.keys()
}
}
impl<'a> Signer for PSBTSigner<'a> {
fn sig_legacy_from_fingerprint(
&self,
index: usize,
sighash: SigHashType,
fingerprint: &Fingerprint,
path: &DerivationPath,
script: &Script,
) -> Result<Option<BitcoinSig>, Error> {
self.extended_keys
.get(fingerprint)
.map_or(Ok(None), |xprv| {
let privkey = xprv.derive_priv(&self.secp, path)?;
// let derived_pubkey = secp256k1::PublicKey::from_secret_key(&self.secp, &privkey.private_key.key);
let hash = self.tx.signature_hash(index, script, sighash.as_u32());
let signature = self.secp.sign(
&Message::from_slice(&hash.into_inner()[..])?,
&privkey.private_key.key,
);
Ok(Some((signature, sighash)))
})
}
fn sig_legacy_from_pubkey(
&self,
index: usize,
sighash: SigHashType,
public_key: &PublicKey,
script: &Script,
) -> Result<Option<BitcoinSig>, Error> {
self.private_keys
.get(public_key)
.map_or(Ok(None), |privkey| {
let hash = self.tx.signature_hash(index, script, sighash.as_u32());
let signature = self
.secp
.sign(&Message::from_slice(&hash.into_inner()[..])?, &privkey.key);
Ok(Some((signature, sighash)))
})
}
fn sig_segwit_from_fingerprint(
&self,
index: usize,
sighash: SigHashType,
fingerprint: &Fingerprint,
path: &DerivationPath,
script: &Script,
value: u64,
) -> Result<Option<BitcoinSig>, Error> {
self.extended_keys
.get(fingerprint)
.map_or(Ok(None), |xprv| {
let privkey = xprv.derive_priv(&self.secp, path)?;
let hash = SighashComponents::new(self.tx).sighash_all(
&self.tx.input[index],
script,
value,
);
let signature = self.secp.sign(
&Message::from_slice(&hash.into_inner()[..])?,
&privkey.private_key.key,
);
Ok(Some((signature, sighash)))
})
}
fn sig_segwit_from_pubkey(
&self,
index: usize,
sighash: SigHashType,
public_key: &PublicKey,
script: &Script,
value: u64,
) -> Result<Option<BitcoinSig>, Error> {
self.private_keys
.get(public_key)
.map_or(Ok(None), |privkey| {
let hash = SighashComponents::new(self.tx).sighash_all(
&self.tx.input[index],
script,
value,
);
let signature = self
.secp
.sign(&Message::from_slice(&hash.into_inner()[..])?, &privkey.key);
Ok(Some((signature, sighash)))
})
#[test]
#[should_panic(expected = "InputIndexOutOfRange")]
fn test_psbt_malformed_tx_input() {
let (wallet, _, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New).unwrap();
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
psbt.global.unsigned_tx.input.push(TxIn::default());
let options = SignOptions {
trust_witness_utxo: true,
..Default::default()
};
let _ = wallet.sign(&mut psbt, options).unwrap();
}
#[test]
fn test_psbt_sign_with_finalized() {
let psbt_bip = Psbt::from_str(PSBT_STR).unwrap();
let (wallet, _, _) = get_funded_wallet(get_test_wpkh());
let send_to = wallet.get_address(AddressIndex::New).unwrap();
let mut builder = wallet.build_tx();
builder.add_recipient(send_to.script_pubkey(), 10_000);
let (mut psbt, _) = builder.finish().unwrap();
// add a finalized input
psbt.inputs.push(psbt_bip.inputs[0].clone());
psbt.global
.unsigned_tx
.input
.push(psbt_bip.global.unsigned_tx.input[0].clone());
let _ = wallet.sign(&mut psbt, SignOptions::default()).unwrap();
}
}


@@ -1,28 +0,0 @@
use bitcoin::util::psbt::PartiallySignedTransaction as PSBT;
use bitcoin::TxOut;
pub trait PSBTUtils {
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut>;
}
impl PSBTUtils for PSBT {
fn get_utxo_for(&self, input_index: usize) -> Option<TxOut> {
let tx = &self.global.unsigned_tx;
if input_index >= tx.input.len() {
return None;
}
if let Some(input) = self.inputs.get(input_index) {
if let Some(wit_utxo) = &input.witness_utxo {
Some(wit_utxo.clone())
} else if let Some(in_tx) = &input.non_witness_utxo {
Some(in_tx.output[tx.input[input_index].previous_output.vout as usize].clone())
} else {
None
}
} else {
None
}
}
}
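The removed helper above (now `PsbtUtils::get_utxo_for` on `Psbt`) resolves the output an input spends with a fixed precedence: `witness_utxo` first, then the matching output of `non_witness_utxo` indexed by the prevout's vout, otherwise `None`. A std-only sketch of that lookup order with stand-in types:

```rust
#[derive(Clone, Debug, PartialEq)]
struct TxOut {
    value: u64,
}

// Stand-in for a PSBT input: either the spent output directly (segwit),
// or the full previous transaction's outputs (legacy).
struct Input {
    witness_utxo: Option<TxOut>,
    non_witness_utxo: Option<Vec<TxOut>>,
    prev_vout: usize,
}

fn get_utxo_for(inputs: &[Input], input_index: usize) -> Option<TxOut> {
    let input = inputs.get(input_index)?;
    if let Some(wit) = &input.witness_utxo {
        // Segwit case: the spent output is embedded directly.
        Some(wit.clone())
    } else if let Some(outs) = &input.non_witness_utxo {
        // Legacy case: index the previous transaction by the prevout's vout.
        outs.get(input.prev_vout).cloned()
    } else {
        None
    }
}

fn main() {
    let inputs = vec![
        Input { witness_utxo: None, non_witness_utxo: Some(vec![TxOut { value: 1 }, TxOut { value: 2 }]), prev_vout: 1 },
    ];
    println!("{:?}", get_utxo_for(&inputs, 0)); // Some(TxOut { value: 2 })
}
```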


@@ -1,87 +0,0 @@
use bitcoin::util::bip32::{DerivationPath, Fingerprint};
use bitcoin::{PublicKey, Script, SigHashType};
use miniscript::miniscript::satisfy::BitcoinSig;
use crate::error::Error;
pub trait Signer {
fn sig_legacy_from_fingerprint(
&self,
index: usize,
sighash: SigHashType,
fingerprint: &Fingerprint,
path: &DerivationPath,
script: &Script,
) -> Result<Option<BitcoinSig>, Error>;
fn sig_legacy_from_pubkey(
&self,
index: usize,
sighash: SigHashType,
public_key: &PublicKey,
script: &Script,
) -> Result<Option<BitcoinSig>, Error>;
fn sig_segwit_from_fingerprint(
&self,
index: usize,
sighash: SigHashType,
fingerprint: &Fingerprint,
path: &DerivationPath,
script: &Script,
value: u64,
) -> Result<Option<BitcoinSig>, Error>;
fn sig_segwit_from_pubkey(
&self,
index: usize,
sighash: SigHashType,
public_key: &PublicKey,
script: &Script,
value: u64,
) -> Result<Option<BitcoinSig>, Error>;
}
#[allow(dead_code)]
impl dyn Signer {
fn sig_legacy_from_fingerprint(
&self,
_index: usize,
_sighash: SigHashType,
_fingerprint: &Fingerprint,
_path: &DerivationPath,
_script: &Script,
) -> Result<Option<BitcoinSig>, Error> {
Ok(None)
}
fn sig_legacy_from_pubkey(
&self,
_index: usize,
_sighash: SigHashType,
_public_key: &PublicKey,
_script: &Script,
) -> Result<Option<BitcoinSig>, Error> {
Ok(None)
}
fn sig_segwit_from_fingerprint(
&self,
_index: usize,
_sighash: SigHashType,
_fingerprint: &Fingerprint,
_path: &DerivationPath,
_script: &Script,
_value: u64,
) -> Result<Option<BitcoinSig>, Error> {
Ok(None)
}
fn sig_segwit_from_pubkey(
&self,
_index: usize,
_sighash: SigHashType,
_public_key: &PublicKey,
_script: &Script,
_value: u64,
) -> Result<Option<BitcoinSig>, Error> {
Ok(None)
}
}

File diff suppressed because it is too large

src/testutils/mod.rs Normal file

@@ -0,0 +1,231 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
#![allow(missing_docs)]
#[cfg(test)]
#[cfg(feature = "test-blockchains")]
pub mod blockchain_tests;
use bitcoin::secp256k1::{Secp256k1, Verification};
use bitcoin::{Address, PublicKey};
use miniscript::descriptor::DescriptorPublicKey;
use miniscript::{Descriptor, MiniscriptKey, TranslatePk};
#[derive(Clone, Debug)]
pub struct TestIncomingOutput {
pub value: u64,
pub to_address: String,
}
impl TestIncomingOutput {
pub fn new(value: u64, to_address: Address) -> Self {
Self {
value,
to_address: to_address.to_string(),
}
}
}
#[derive(Clone, Debug)]
pub struct TestIncomingTx {
pub output: Vec<TestIncomingOutput>,
pub min_confirmations: Option<u64>,
pub locktime: Option<i64>,
pub replaceable: Option<bool>,
}
impl TestIncomingTx {
pub fn new(
output: Vec<TestIncomingOutput>,
min_confirmations: Option<u64>,
locktime: Option<i64>,
replaceable: Option<bool>,
) -> Self {
Self {
output,
min_confirmations,
locktime,
replaceable,
}
}
pub fn add_output(&mut self, output: TestIncomingOutput) {
self.output.push(output);
}
}
#[doc(hidden)]
pub trait TranslateDescriptor {
// derive and translate a `Descriptor<DescriptorPublicKey>` into a `Descriptor<PublicKey>`
fn derive_translated<C: Verification>(
&self,
secp: &Secp256k1<C>,
index: u32,
) -> Descriptor<PublicKey>;
}
impl TranslateDescriptor for Descriptor<DescriptorPublicKey> {
fn derive_translated<C: Verification>(
&self,
secp: &Secp256k1<C>,
index: u32,
) -> Descriptor<PublicKey> {
let translate = |key: &DescriptorPublicKey| -> PublicKey {
match key {
DescriptorPublicKey::XPub(xpub) => {
xpub.xkey
.derive_pub(secp, &xpub.derivation_path)
.expect("hardened derivation steps")
.public_key
}
DescriptorPublicKey::SinglePub(key) => key.key,
}
};
self.derive(index)
.translate_pk_infallible(|pk| translate(pk), |pkh| translate(pkh).to_pubkeyhash())
}
}
#[doc(hidden)]
#[macro_export]
macro_rules! testutils {
( @external $descriptors:expr, $child:expr ) => ({
use $crate::bitcoin::secp256k1::Secp256k1;
use $crate::miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};
use $crate::testutils::TranslateDescriptor;
let secp = Secp256k1::new();
let parsed = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, &$descriptors.0).expect("Failed to parse descriptor in `testutils!(@external)`").0;
parsed.derive_translated(&secp, $child).address(bitcoin::Network::Regtest).expect("No address form")
});
( @internal $descriptors:expr, $child:expr ) => ({
use $crate::bitcoin::secp256k1::Secp256k1;
use $crate::miniscript::descriptor::{Descriptor, DescriptorPublicKey, DescriptorTrait};
use $crate::testutils::TranslateDescriptor;
let secp = Secp256k1::new();
let parsed = Descriptor::<DescriptorPublicKey>::parse_descriptor(&secp, &$descriptors.1.expect("Missing internal descriptor")).expect("Failed to parse descriptor in `testutils!(@internal)`").0;
parsed.derive_translated(&secp, $child).address($crate::bitcoin::Network::Regtest).expect("No address form")
});
( @e $descriptors:expr, $child:expr ) => ({ testutils!(@external $descriptors, $child) });
( @i $descriptors:expr, $child:expr ) => ({ testutils!(@internal $descriptors, $child) });
( @tx ( $( ( $( $addr:tt )* ) => $amount:expr ),+ ) $( ( @locktime $locktime:expr ) )? $( ( @confirmations $confirmations:expr ) )? $( ( @replaceable $replaceable:expr ) )? ) => ({
let outs = vec![$( $crate::testutils::TestIncomingOutput::new($amount, testutils!( $($addr)* ))),+];
let locktime = None::<i64>$(.or(Some($locktime)))?;
let min_confirmations = None::<u64>$(.or(Some($confirmations)))?;
let replaceable = None::<bool>$(.or(Some($replaceable)))?;
$crate::testutils::TestIncomingTx::new(outs, min_confirmations, locktime, replaceable)
});
( @literal $key:expr ) => ({
let key = $key.to_string();
(key, None::<String>, None::<String>)
});
( @generate_xprv $( $external_path:expr )? $( ,$internal_path:expr )? ) => ({
use rand::Rng;
let mut seed = [0u8; 32];
rand::thread_rng().fill(&mut seed[..]);
let key = $crate::bitcoin::util::bip32::ExtendedPrivKey::new_master(
$crate::bitcoin::Network::Testnet,
&seed,
);
let external_path = None::<String>$(.or(Some($external_path.to_string())))?;
let internal_path = None::<String>$(.or(Some($internal_path.to_string())))?;
(key.unwrap().to_string(), external_path, internal_path)
});
( @generate_wif ) => ({
use rand::Rng;
let mut key = [0u8; $crate::bitcoin::secp256k1::constants::SECRET_KEY_SIZE];
rand::thread_rng().fill(&mut key[..]);
($crate::bitcoin::PrivateKey {
compressed: true,
network: $crate::bitcoin::Network::Testnet,
key: $crate::bitcoin::secp256k1::SecretKey::from_slice(&key).unwrap(),
}.to_string(), None::<String>, None::<String>)
});
( @keys ( $( $alias:expr => ( $( $key_type:tt )* ) ),+ ) ) => ({
let mut map = std::collections::HashMap::new();
$(
let alias: &str = $alias;
map.insert(alias, testutils!( $($key_type)* ));
)+
map
});
( @descriptors ( $external_descriptor:expr ) $( ( $internal_descriptor:expr ) )? $( ( @keys $( $keys:tt )* ) )* ) => ({
use std::str::FromStr;
use std::collections::HashMap;
use $crate::miniscript::descriptor::Descriptor;
use $crate::miniscript::TranslatePk;
#[allow(unused_assignments, unused_mut)]
let mut keys: HashMap<&'static str, (String, Option<String>, Option<String>)> = HashMap::new();
$(
keys = testutils!{ @keys $( $keys )* };
)*
let external: Descriptor<String> = FromStr::from_str($external_descriptor).unwrap();
let external: Descriptor<String> = external.translate_pk_infallible::<_, _>(|k| {
if let Some((key, ext_path, _)) = keys.get(&k.as_str()) {
format!("{}{}", key, ext_path.as_ref().unwrap_or(&"".into()))
} else {
k.clone()
}
}, |kh| {
if let Some((key, ext_path, _)) = keys.get(&kh.as_str()) {
format!("{}{}", key, ext_path.as_ref().unwrap_or(&"".into()))
} else {
kh.clone()
}
});
let external = external.to_string();
let internal = None::<String>$(.or({
let string_internal: Descriptor<String> = FromStr::from_str($internal_descriptor).unwrap();
let string_internal: Descriptor<String> = string_internal.translate_pk_infallible::<_, _>(|k| {
if let Some((key, _, int_path)) = keys.get(&k.as_str()) {
format!("{}{}", key, int_path.as_ref().unwrap_or(&"".into()))
} else {
k.clone()
}
}, |kh| {
if let Some((key, _, int_path)) = keys.get(&kh.as_str()) {
format!("{}{}", key, int_path.as_ref().unwrap_or(&"".into()))
} else {
kh.clone()
}
});
Some(string_internal.to_string())
}))?;
(external, internal)
})
}


@@ -1,47 +1,259 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use std::convert::AsRef;
use std::ops::Sub;
use bitcoin::blockdata::transaction::{OutPoint, Transaction, TxOut};
use bitcoin::{hash_types::Txid, util::psbt};
use serde::{Deserialize, Serialize};
// TODO serde flatten?
/// Types of keychains
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum KeychainKind {
/// External
External = 0,
/// Internal, usually used for change outputs
Internal = 1,
}
impl KeychainKind {
/// Return [`KeychainKind`] as a byte
pub fn as_byte(&self) -> u8 {
match self {
KeychainKind::External => b'e',
KeychainKind::Internal => b'i',
}
}
}
impl AsRef<[u8]> for KeychainKind {
fn as_ref(&self) -> &[u8] {
match self {
KeychainKind::External => b"e",
KeychainKind::Internal => b"i",
}
}
}
/// Fee rate
#[derive(Debug, Copy, Clone, PartialEq, PartialOrd)]
// Internally stored as satoshi/vbyte
pub struct FeeRate(f32);
impl FeeRate {
/// Create a new instance of [`FeeRate`] given a float fee rate in btc/kvbytes
pub fn from_btc_per_kvb(btc_per_kvb: f32) -> Self {
FeeRate(btc_per_kvb * 1e5)
}
/// Create a new instance of [`FeeRate`] given a float fee rate in satoshi/vbyte
pub const fn from_sat_per_vb(sat_per_vb: f32) -> Self {
FeeRate(sat_per_vb)
}
/// Create a new [`FeeRate`] with the default min relay fee value
pub const fn default_min_relay_fee() -> Self {
FeeRate(1.0)
}
/// Calculate fee rate from `fee` and weight units (`wu`).
pub fn from_wu(fee: u64, wu: usize) -> FeeRate {
Self::from_vb(fee, wu.vbytes())
}
/// Calculate fee rate from `fee` and `vbytes`.
pub fn from_vb(fee: u64, vbytes: usize) -> FeeRate {
let rate = fee as f32 / vbytes as f32;
Self::from_sat_per_vb(rate)
}
/// Return the value as satoshi/vbyte
pub fn as_sat_vb(&self) -> f32 {
self.0
}
/// Calculate absolute fee in Satoshis using size in weight units.
pub fn fee_wu(&self, wu: usize) -> u64 {
self.fee_vb(wu.vbytes())
}
/// Calculate absolute fee in Satoshis using size in virtual bytes.
pub fn fee_vb(&self, vbytes: usize) -> u64 {
(self.as_sat_vb() * vbytes as f32).ceil() as u64
}
}
impl std::default::Default for FeeRate {
fn default() -> Self {
FeeRate::default_min_relay_fee()
}
}
impl Sub for FeeRate {
type Output = Self;
fn sub(self, other: FeeRate) -> Self::Output {
FeeRate(self.0 - other.0)
}
}
/// Trait implemented by types that can be used to measure weight units.
pub trait Vbytes {
/// Convert weight units to virtual bytes.
fn vbytes(self) -> usize;
}
impl Vbytes for usize {
fn vbytes(self) -> usize {
// ref: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki#transaction-size-calculations
(self as f32 / 4.0).ceil() as usize
}
}
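The conversions above can be exercised in isolation. This is a minimal sketch of the same arithmetic with free functions standing in for the `FeeRate` and `Vbytes` methods (the function names here are illustrative, not the bdk API):

```rust
// Stand-ins for the conversions defined above (assumption: f32 precision
// is adequate at these magnitudes, as in the FeeRate type itself).
fn vbytes(wu: usize) -> usize {
    // BIP-141: one virtual byte equals four weight units, rounded up.
    (wu as f32 / 4.0).ceil() as usize
}

fn sat_per_vb_from_btc_per_kvb(btc_per_kvb: f32) -> f32 {
    // 1 BTC/kvB = 100_000_000 sat / 1_000 vB = 1e5 sat/vB.
    btc_per_kvb * 1e5
}

fn fee_vb(rate_sat_vb: f32, vb: usize) -> u64 {
    // The absolute fee rounds up so the target feerate is never undershot.
    (rate_sat_vb * vb as f32).ceil() as u64
}

fn main() {
    assert_eq!(vbytes(561), 141); // 561 wu -> 140.25 -> rounds up to 141 vB
    assert!((sat_per_vb_from_btc_per_kvb(0.00001) - 1.0).abs() < 1e-4); // min relay fee
    assert_eq!(fee_vb(1.0, 141), 141);
    println!("ok");
}
```

Rounding up in both `vbytes` and `fee_vb` is deliberate: truncating either value could produce a transaction paying just under the intended feerate.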
/// An unspent output owned by a [`Wallet`].
///
/// [`Wallet`]: crate::Wallet
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Hash)]
pub struct LocalUtxo {
/// Reference to a transaction output
pub outpoint: OutPoint,
/// Transaction output
pub txout: TxOut,
/// Type of keychain
pub keychain: KeychainKind,
}
/// A [`Utxo`] with its `satisfaction_weight`.
#[derive(Debug, Clone, PartialEq)]
pub struct WeightedUtxo {
/// The weight of the witness data and `scriptSig` expressed in [weight units]. This is used to
/// properly maintain the feerate when adding this input to a transaction during coin selection.
///
/// [weight units]: https://en.bitcoin.it/wiki/Weight_units
pub satisfaction_weight: usize,
/// The UTXO
pub utxo: Utxo,
}
#[derive(Debug, Clone, PartialEq)]
/// An unspent transaction output (UTXO).
pub enum Utxo {
/// A UTXO owned by the local wallet.
Local(LocalUtxo),
/// A UTXO owned by another wallet.
Foreign {
/// The location of the output.
outpoint: OutPoint,
/// The information about the input we require to add it to a PSBT.
// Box it to stop the type being too big.
psbt_input: Box<psbt::Input>,
},
}
impl Utxo {
/// Get the location of the UTXO
pub fn outpoint(&self) -> OutPoint {
match &self {
Utxo::Local(local) => local.outpoint,
Utxo::Foreign { outpoint, .. } => *outpoint,
}
}
/// Get the `TxOut` of the UTXO
pub fn txout(&self) -> &TxOut {
match &self {
Utxo::Local(local) => &local.txout,
Utxo::Foreign {
outpoint,
psbt_input,
} => {
if let Some(prev_tx) = &psbt_input.non_witness_utxo {
return &prev_tx.output[outpoint.vout as usize];
}
if let Some(txout) = &psbt_input.witness_utxo {
return txout;
}
unreachable!("Foreign UTXOs will always have one of these set")
}
}
}
}
/// A wallet transaction
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Default)]
pub struct TransactionDetails {
/// Optional transaction
pub transaction: Option<Transaction>,
/// Transaction id
pub txid: Txid,
/// Received value (sats)
pub received: u64,
/// Sent value (sats)
pub sent: u64,
/// Fee value (sats) if available.
/// The availability of the fee depends on the backend. It's never `None` with an Electrum
/// server backend, but it could be `None` with a Bitcoin Core RPC node without `txindex` that
/// receives funds while offline.
pub fee: Option<u64>,
/// If the transaction is confirmed, contains the height and timestamp of the block containing
/// the transaction; `None` for unconfirmed transactions.
pub confirmation_time: Option<BlockTime>,
/// Whether the tx has been verified against the consensus rules
///
/// Confirmed txs are considered "verified" by default, while unconfirmed txs are checked to
/// ensure an untrusted [`Blockchain`](crate::blockchain::Blockchain) backend can't trick the
/// wallet into using an invalid tx as an RBF template.
///
/// The check is only performed when the `verify` feature is enabled.
#[serde(default = "bool::default")] // default to `false` if not specified
pub verified: bool,
}
/// Block height and timestamp of a block
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Default)]
pub struct BlockTime {
/// confirmation block height
pub height: u32,
/// confirmation block timestamp
pub timestamp: u64,
}
/// **DEPRECATED**: Confirmation time of a transaction
///
/// The structure has been renamed to `BlockTime`
#[deprecated(note = "This structure has been renamed to `BlockTime`")]
pub type ConfirmationTime = BlockTime;
impl BlockTime {
/// Returns `Some` `BlockTime` if both `height` and `timestamp` are `Some`
pub fn new(height: Option<u32>, timestamp: Option<u64>) -> Option<Self> {
match (height, timestamp) {
(Some(height), Some(timestamp)) => Some(BlockTime { height, timestamp }),
_ => None,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn can_store_feerate_in_const() {
const _MY_RATE: FeeRate = FeeRate::from_sat_per_vb(10.0);
const _MIN_RELAY: FeeRate = FeeRate::default_min_relay_fee();
}
}
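The `BlockTime::new` combinator above only produces a value when both the height and the timestamp are known. A standalone sketch of the same pattern (the struct is mirrored here for illustration only):

```rust
#[derive(Debug, PartialEq)]
struct BlockTime {
    height: u32,
    timestamp: u64,
}

// Mirrors BlockTime::new above: Some only when both pieces are present.
fn new_block_time(height: Option<u32>, timestamp: Option<u64>) -> Option<BlockTime> {
    match (height, timestamp) {
        (Some(height), Some(timestamp)) => Some(BlockTime { height, timestamp }),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        new_block_time(Some(5000), Some(12_345_678)),
        Some(BlockTime { height: 5000, timestamp: 12_345_678 })
    );
    assert_eq!(new_block_time(Some(5000), None), None);
    println!("ok");
}
```

The same effect can be written more tersely as `height.zip(timestamp).map(|(height, timestamp)| BlockTime { height, timestamp })`; the explicit match keeps the two failure cases visible.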


@@ -0,0 +1,154 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Address validation callbacks
//!
//! The typical usage of those callbacks is for displaying the newly-generated address on a
//! hardware wallet, so that the user can cross-check its correctness.
//!
//! More generally speaking though, these callbacks can also be used to "do something" every time
//! an address is generated, without necessarily checking or validating it.
//!
//! An address validator can be attached to a [`Wallet`](super::Wallet) by using the
//! [`Wallet::add_address_validator`](super::Wallet::add_address_validator) method, and
//! whenever a new address is generated (either explicitly by the user with
//! [`Wallet::get_address`](super::Wallet::get_address) or internally to create a change
//! address) all the attached validators will be polled, in sequence. All of them must complete
//! successfully to continue.
//!
//! ## Example
//!
//! ```
//! # use std::sync::Arc;
//! # use bitcoin::*;
//! # use bdk::address_validator::*;
//! # use bdk::database::*;
//! # use bdk::*;
//! # use bdk::wallet::AddressIndex::New;
//! #[derive(Debug)]
//! struct PrintAddressAndContinue;
//!
//! impl AddressValidator for PrintAddressAndContinue {
//! fn validate(
//! &self,
//! keychain: KeychainKind,
//! hd_keypaths: &HdKeyPaths,
//! script: &Script
//! ) -> Result<(), AddressValidatorError> {
//! let address = Address::from_script(script, Network::Testnet)
//! .as_ref()
//! .map(Address::to_string)
//! .unwrap_or(script.to_string());
//! println!("New address of type {:?}: {}", keychain, address);
//! println!("HD keypaths: {:#?}", hd_keypaths);
//!
//! Ok(())
//! }
//! }
//!
//! let descriptor = "wpkh(tpubD6NzVbkrYhZ4Xferm7Pz4VnjdcDPFyjVu5K4iZXQ4pVN8Cks4pHVowTBXBKRhX64pkRyJZJN5xAKj4UDNnLPb5p2sSKXhewoYx5GbTdUFWq/*)";
//! let mut wallet = Wallet::new_offline(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! wallet.add_address_validator(Arc::new(PrintAddressAndContinue));
//!
//! let address = wallet.get_address(New)?;
//! println!("Address: {}", address);
//! # Ok::<(), bdk::Error>(())
//! ```
use std::fmt;
use bitcoin::Script;
use crate::descriptor::HdKeyPaths;
use crate::types::KeychainKind;
/// Errors that can be returned to fail the validation of an address
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AddressValidatorError {
/// User rejected the address
UserRejected,
/// Network connection error
ConnectionError,
/// Network request timeout error
TimeoutError,
/// Invalid script
InvalidScript,
/// A custom error message
Message(String),
}
impl fmt::Display for AddressValidatorError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for AddressValidatorError {}
/// Trait to build address validators
///
/// All the address validators attached to a wallet with [`Wallet::add_address_validator`](super::Wallet::add_address_validator) will be polled
/// every time an address (external or internal) is generated by the wallet. Errors returned in the
/// validator will be propagated up to the original caller that triggered the address generation.
///
/// For a usage example see [this module](crate::address_validator)'s documentation.
pub trait AddressValidator: Send + Sync + fmt::Debug {
/// Validate or inspect an address
fn validate(
&self,
keychain: KeychainKind,
hd_keypaths: &HdKeyPaths,
script: &Script,
) -> Result<(), AddressValidatorError>;
}
#[cfg(test)]
mod test {
use std::sync::Arc;
use super::*;
use crate::wallet::AddressIndex::New;
use crate::wallet::{get_funded_wallet, test::get_test_wpkh};
#[derive(Debug)]
struct TestValidator;
impl AddressValidator for TestValidator {
fn validate(
&self,
_keychain: KeychainKind,
_hd_keypaths: &HdKeyPaths,
_script: &bitcoin::Script,
) -> Result<(), AddressValidatorError> {
Err(AddressValidatorError::InvalidScript)
}
}
#[test]
#[should_panic(expected = "InvalidScript")]
fn test_address_validator_external() {
let (mut wallet, _, _) = get_funded_wallet(get_test_wpkh());
wallet.add_address_validator(Arc::new(TestValidator));
wallet.get_address(New).unwrap();
}
#[test]
#[should_panic(expected = "InvalidScript")]
fn test_address_validator_internal() {
let (mut wallet, descriptors, _) = get_funded_wallet(get_test_wpkh());
wallet.add_address_validator(Arc::new(TestValidator));
let addr = crate::testutils!(@external descriptors, 10);
let mut builder = wallet.build_tx();
builder.add_recipient(addr.script_pubkey(), 25_000);
builder.finish().unwrap();
}
}
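As the module documentation above describes, every attached validator is polled in sequence and all of them must succeed for address generation to continue. Stripped of the wallet specifics, the control flow is an early-returning loop; the trait and error type below are simplified stand-ins, not the bdk API:

```rust
#[derive(Debug, PartialEq)]
enum ValidatorError {
    UserRejected,
}

// Simplified stand-in for AddressValidator: validates a script string.
trait Validator {
    fn validate(&self, script: &str) -> Result<(), ValidatorError>;
}

struct Accept;
struct Reject;

impl Validator for Accept {
    fn validate(&self, _script: &str) -> Result<(), ValidatorError> {
        Ok(())
    }
}

impl Validator for Reject {
    fn validate(&self, _script: &str) -> Result<(), ValidatorError> {
        Err(ValidatorError::UserRejected)
    }
}

// All validators must pass; the first failure aborts and is propagated
// to whoever requested the new address.
fn poll_all(validators: &[Box<dyn Validator>], script: &str) -> Result<(), ValidatorError> {
    for v in validators {
        v.validate(script)?;
    }
    Ok(())
}

fn main() {
    let ok: Vec<Box<dyn Validator>> = vec![Box::new(Accept)];
    assert_eq!(poll_all(&ok, "spk"), Ok(()));

    let bad: Vec<Box<dyn Validator>> = vec![Box::new(Accept), Box::new(Reject)];
    assert_eq!(poll_all(&bad, "spk"), Err(ValidatorError::UserRejected));
    println!("ok");
}
```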

src/wallet/coin_selection.rs (new file, 1059 lines): diff suppressed because it is too large.

src/wallet/export.rs (new file, 351 lines):

@@ -0,0 +1,351 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Wallet export
//!
//! This module implements the wallet export format used by [FullyNoded](https://github.com/Fonta1n3/FullyNoded/blob/10b7808c8b929b171cca537fb50522d015168ac9/Docs/Wallets/Wallet-Export-Spec.md).
//!
//! ## Examples
//!
//! ### Import from JSON
//!
//! ```
//! # use std::str::FromStr;
//! # use bitcoin::*;
//! # use bdk::database::*;
//! # use bdk::wallet::export::*;
//! # use bdk::*;
//! let import = r#"{
//! "descriptor": "wpkh([c258d2e4\/84h\/1h\/0h]tpubDD3ynpHgJQW8VvWRzQ5WFDCrs4jqVFGHB3vLC3r49XHJSqP8bHKdK4AriuUKLccK68zfzowx7YhmDN8SiSkgCDENUFx9qVw65YyqM78vyVe\/0\/*)",
//! "blockheight":1782088,
//! "label":"testnet"
//! }"#;
//!
//! let import = WalletExport::from_str(import)?;
//! let wallet = Wallet::new_offline(
//! &import.descriptor(),
//! import.change_descriptor().as_ref(),
//! Network::Testnet,
//! MemoryDatabase::default(),
//! )?;
//! # Ok::<_, bdk::Error>(())
//! ```
//!
//! ### Export a `Wallet`
//! ```
//! # use bitcoin::*;
//! # use bdk::database::*;
//! # use bdk::wallet::export::*;
//! # use bdk::*;
//! let wallet = Wallet::new_offline(
//! "wpkh([c258d2e4/84h/1h/0h]tpubDD3ynpHgJQW8VvWRzQ5WFDCrs4jqVFGHB3vLC3r49XHJSqP8bHKdK4AriuUKLccK68zfzowx7YhmDN8SiSkgCDENUFx9qVw65YyqM78vyVe/0/*)",
//! Some("wpkh([c258d2e4/84h/1h/0h]tpubDD3ynpHgJQW8VvWRzQ5WFDCrs4jqVFGHB3vLC3r49XHJSqP8bHKdK4AriuUKLccK68zfzowx7YhmDN8SiSkgCDENUFx9qVw65YyqM78vyVe/1/*)"),
//! Network::Testnet,
//! MemoryDatabase::default()
//! )?;
//! let export = WalletExport::export_wallet(&wallet, "exported wallet", true)
//! .map_err(ToString::to_string)
//! .map_err(bdk::Error::Generic)?;
//!
//! println!("Exported: {}", export.to_string());
//! # Ok::<_, bdk::Error>(())
//! ```
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use miniscript::descriptor::{ShInner, WshInner};
use miniscript::{Descriptor, DescriptorPublicKey, ScriptContext, Terminal};
use crate::database::BatchDatabase;
use crate::wallet::Wallet;
/// Structure that contains the export of a wallet
///
/// For a usage example see [this module](crate::wallet::export)'s documentation.
#[derive(Debug, Serialize, Deserialize)]
pub struct WalletExport {
descriptor: String,
/// Earliest block to rescan when looking for the wallet's transactions
pub blockheight: u32,
/// Arbitrary label for the wallet
pub label: String,
}
impl ToString for WalletExport {
fn to_string(&self) -> String {
serde_json::to_string(self).unwrap()
}
}
impl FromStr for WalletExport {
type Err = serde_json::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
serde_json::from_str(s)
}
}
fn remove_checksum(s: String) -> String {
s.splitn(2, '#').next().map(String::from).unwrap()
}
impl WalletExport {
/// Export a wallet
///
/// This function returns an error if it determines that the `wallet`'s descriptor(s) are not
/// supported by Bitcoin Core or don't follow the standard derivation paths defined by BIP44
/// and others.
///
/// If `include_blockheight` is `true`, this function will look into the `wallet`'s database
/// for the oldest transaction it knows and use that as the earliest block to rescan.
///
/// If the database is empty or `include_blockheight` is false, the `blockheight` field
/// returned will be `0`.
pub fn export_wallet<B, D: BatchDatabase>(
wallet: &Wallet<B, D>,
label: &str,
include_blockheight: bool,
) -> Result<Self, &'static str> {
let descriptor = wallet
.descriptor
.to_string_with_secret(&wallet.signers.as_key_map(wallet.secp_ctx()));
let descriptor = remove_checksum(descriptor);
Self::is_compatible_with_core(&descriptor)?;
let blockheight = match wallet.database.borrow().iter_txs(false) {
_ if !include_blockheight => 0,
Err(_) => 0,
Ok(txs) => {
let mut heights = txs
.into_iter()
.map(|tx| tx.confirmation_time.map(|c| c.height).unwrap_or(0))
.collect::<Vec<_>>();
heights.sort_unstable();
*heights.last().unwrap_or(&0)
}
};
let export = WalletExport {
descriptor,
label: label.into(),
blockheight,
};
let desc_to_string = |d: &Descriptor<DescriptorPublicKey>| {
let descriptor =
d.to_string_with_secret(&wallet.change_signers.as_key_map(wallet.secp_ctx()));
remove_checksum(descriptor)
};
if export.change_descriptor() != wallet.change_descriptor.as_ref().map(desc_to_string) {
return Err("Incompatible change descriptor");
}
Ok(export)
}
fn is_compatible_with_core(descriptor: &str) -> Result<(), &'static str> {
fn check_ms<Ctx: ScriptContext>(
terminal: &Terminal<String, Ctx>,
) -> Result<(), &'static str> {
if let Terminal::Multi(_, _) = terminal {
Ok(())
} else {
Err("The descriptor contains operators not supported by Bitcoin Core")
}
}
// pkh(), wpkh(), sh(wpkh()) are always fine, as well as multi() and sortedmulti()
match Descriptor::<String>::from_str(descriptor).map_err(|_| "Invalid descriptor")? {
Descriptor::Pkh(_) | Descriptor::Wpkh(_) => Ok(()),
Descriptor::Sh(sh) => match sh.as_inner() {
ShInner::Wpkh(_) => Ok(()),
ShInner::SortedMulti(_) => Ok(()),
ShInner::Wsh(wsh) => match wsh.as_inner() {
WshInner::SortedMulti(_) => Ok(()),
WshInner::Ms(ms) => check_ms(&ms.node),
},
ShInner::Ms(ms) => check_ms(&ms.node),
},
Descriptor::Wsh(wsh) => match wsh.as_inner() {
WshInner::SortedMulti(_) => Ok(()),
WshInner::Ms(ms) => check_ms(&ms.node),
},
_ => Err("The descriptor is not compatible with Bitcoin Core"),
}
}
/// Return the external descriptor
pub fn descriptor(&self) -> String {
self.descriptor.clone()
}
/// Return the internal descriptor, if present
pub fn change_descriptor(&self) -> Option<String> {
let replaced = self.descriptor.replace("/0/*", "/1/*");
if replaced != self.descriptor {
Some(replaced)
} else {
None
}
}
}
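`change_descriptor` above is a purely textual heuristic: the internal descriptor is assumed to be the external one with the final derivation step `/0/*` rewritten to `/1/*`, and `None` is returned when there is nothing to rewrite. A self-contained sketch of that heuristic together with the checksum stripping (the free functions here are illustrative stand-ins for the methods above):

```rust
// Strip an optional "#checksum" suffix, as remove_checksum does above.
fn remove_checksum(s: &str) -> String {
    s.splitn(2, '#').next().unwrap().to_string()
}

// Derive the change descriptor by the /0/* -> /1/* convention; None when
// the external descriptor has no /0/* step to rewrite.
fn change_descriptor(external: &str) -> Option<String> {
    let replaced = external.replace("/0/*", "/1/*");
    if replaced != external {
        Some(replaced)
    } else {
        None
    }
}

fn main() {
    assert_eq!(remove_checksum("wpkh(abc/0/*)#tjg09x5t"), "wpkh(abc/0/*)");
    assert_eq!(change_descriptor("wpkh(abc/0/*)"), Some("wpkh(abc/1/*)".to_string()));
    assert_eq!(change_descriptor("wpkh(abc)"), None);
    println!("ok");
}
```

This is also why `export_wallet` cross-checks the derived string against the wallet's actual change descriptor and fails with "Incompatible change descriptor" when the paths don't follow the `/0/*` / `/1/*` convention.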
#[cfg(test)]
mod test {
use std::str::FromStr;
use bitcoin::{Network, Txid};
use super::*;
use crate::database::{memory::MemoryDatabase, BatchOperations};
use crate::types::TransactionDetails;
use crate::wallet::Wallet;
use crate::BlockTime;
fn get_test_db() -> MemoryDatabase {
let mut db = MemoryDatabase::new();
db.set_tx(&TransactionDetails {
transaction: None,
txid: Txid::from_str(
"4ddff1fa33af17f377f62b72357b43107c19110a8009b36fb832af505efed98a",
)
.unwrap(),
received: 100_000,
sent: 0,
fee: Some(500),
confirmation_time: Some(BlockTime {
timestamp: 12345678,
height: 5000,
}),
verified: true,
})
.unwrap();
db
}
#[test]
fn test_export_bip44() {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
assert_eq!(export.descriptor(), descriptor);
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
assert_eq!(export.blockheight, 5000);
assert_eq!(export.label, "Test Label");
}
#[test]
#[should_panic(expected = "Incompatible change descriptor")]
fn test_export_no_change() {
// This wallet explicitly doesn't have a change descriptor. It should be impossible to
// export, because exporting this kind of external descriptor normally implies the
// existence of an internal descriptor
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let wallet =
Wallet::new_offline(descriptor, None, Network::Bitcoin, get_test_db()).unwrap();
WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
}
#[test]
#[should_panic(expected = "Incompatible change descriptor")]
fn test_export_incompatible_change() {
// This wallet has a change descriptor, but the derivation path is not in the "standard"
// bip44/49/etc format
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/50'/0'/1/*)";
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
}
#[test]
fn test_export_multi() {
let descriptor = "wsh(multi(2,\
[73756c7f/48'/0'/0'/2']tpubDCKxNyM3bLgbEX13Mcd8mYxbVg9ajDkWXMh29hMWBurKfVmBfWAM96QVP3zaUcN51HvkZ3ar4VwP82kC8JZhhux8vFQoJintSpVBwpFvyU3/0/*,\
[f9f62194/48'/0'/0'/2']tpubDDp3ZSH1yCwusRppH7zgSxq2t1VEUyXSeEp8E5aFS8m43MknUjiF1bSLo3CGWAxbDyhF1XowA5ukPzyJZjznYk3kYi6oe7QxtX2euvKWsk4/0/*,\
[c98b1535/48'/0'/0'/2']tpubDCDi5W4sP6zSnzJeowy8rQDVhBdRARaPhK1axABi8V1661wEPeanpEXj4ZLAUEoikVtoWcyK26TKKJSecSfeKxwHCcRrge9k1ybuiL71z4a/0/*\
))";
let change_descriptor = "wsh(multi(2,\
[73756c7f/48'/0'/0'/2']tpubDCKxNyM3bLgbEX13Mcd8mYxbVg9ajDkWXMh29hMWBurKfVmBfWAM96QVP3zaUcN51HvkZ3ar4VwP82kC8JZhhux8vFQoJintSpVBwpFvyU3/1/*,\
[f9f62194/48'/0'/0'/2']tpubDDp3ZSH1yCwusRppH7zgSxq2t1VEUyXSeEp8E5aFS8m43MknUjiF1bSLo3CGWAxbDyhF1XowA5ukPzyJZjznYk3kYi6oe7QxtX2euvKWsk4/1/*,\
[c98b1535/48'/0'/0'/2']tpubDCDi5W4sP6zSnzJeowy8rQDVhBdRARaPhK1axABi8V1661wEPeanpEXj4ZLAUEoikVtoWcyK26TKKJSecSfeKxwHCcRrge9k1ybuiL71z4a/1/*\
))";
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Testnet,
get_test_db(),
)
.unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
assert_eq!(export.descriptor(), descriptor);
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
assert_eq!(export.blockheight, 5000);
assert_eq!(export.label, "Test Label");
}
#[test]
fn test_export_to_json() {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";
let wallet = Wallet::new_offline(
descriptor,
Some(change_descriptor),
Network::Bitcoin,
get_test_db(),
)
.unwrap();
let export = WalletExport::export_wallet(&wallet, "Test Label", true).unwrap();
assert_eq!(export.to_string(), "{\"descriptor\":\"wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44\'/0\'/0\'/0/*)\",\"blockheight\":5000,\"label\":\"Test Label\"}");
}
#[test]
fn test_export_from_json() {
let descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/0/*)";
let change_descriptor = "wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44'/0'/0'/1/*)";
let import_str = "{\"descriptor\":\"wpkh(xprv9s21ZrQH143K4CTb63EaMxja1YiTnSEWKMbn23uoEnAzxjdUJRQkazCAtzxGm4LSoTSVTptoV9RbchnKPW9HxKtZumdyxyikZFDLhogJ5Uj/44\'/0\'/0\'/0/*)\",\"blockheight\":5000,\"label\":\"Test Label\"}";
let export = WalletExport::from_str(import_str).unwrap();
assert_eq!(export.descriptor(), descriptor);
assert_eq!(export.change_descriptor(), Some(change_descriptor.into()));
assert_eq!(export.blockheight, 5000);
assert_eq!(export.label, "Test Label");
}
}

File diff suppressed because it is too large.


@@ -1,52 +0,0 @@
use std::io::{self, Error, ErrorKind, Read, Write};
#[derive(Clone, Debug)]
pub struct OfflineStream;
impl Read for OfflineStream {
fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
Err(Error::new(
ErrorKind::NotConnected,
"Trying to read from an OfflineStream",
))
}
}
impl Write for OfflineStream {
fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
Err(Error::new(
ErrorKind::NotConnected,
"Trying to write to an OfflineStream",
))
}
fn flush(&mut self) -> io::Result<()> {
Err(Error::new(
ErrorKind::NotConnected,
"Trying to flush an OfflineStream",
))
}
}
// #[cfg(any(feature = "electrum", feature = "default"))]
// use electrum_client::Client;
//
// #[cfg(any(feature = "electrum", feature = "default"))]
// impl OfflineStream {
// fn new_client() -> {
// use std::io::bufreader;
//
// let stream = OfflineStream{};
// let buf_reader = BufReader::new(stream.clone());
//
// Client {
// stream,
// buf_reader,
// headers: VecDeque::new(),
// script_notifications: BTreeMap::new(),
//
// #[cfg(feature = "debug-calls")]
// calls: 0,
// }
// }
// }

src/wallet/signer.rs (new file, 766 lines):

@@ -0,0 +1,766 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Generalized signers
//!
//! This module provides the ability to add customized signers to a [`Wallet`](super::Wallet)
//! through the [`Wallet::add_signer`](super::Wallet::add_signer) function.
//!
//! ```
//! # use std::sync::Arc;
//! # use std::str::FromStr;
//! # use bitcoin::secp256k1::{Secp256k1, All};
//! # use bitcoin::*;
//! # use bitcoin::util::psbt;
//! # use bdk::signer::*;
//! # use bdk::database::*;
//! # use bdk::*;
//! # #[derive(Debug)]
//! # struct CustomHSM;
//! # impl CustomHSM {
//! # fn sign_input(&self, _psbt: &mut psbt::PartiallySignedTransaction, _input: usize) -> Result<(), SignerError> {
//! # Ok(())
//! # }
//! # fn connect() -> Self {
//! # CustomHSM
//! # }
//! # fn get_id(&self) -> SignerId {
//! # SignerId::Dummy(0)
//! # }
//! # }
//! #[derive(Debug)]
//! struct CustomSigner {
//! device: CustomHSM,
//! }
//!
//! impl CustomSigner {
//! fn connect() -> Self {
//! CustomSigner { device: CustomHSM::connect() }
//! }
//! }
//!
//! impl Signer for CustomSigner {
//! fn sign(
//! &self,
//! psbt: &mut psbt::PartiallySignedTransaction,
//! input_index: Option<usize>,
//! _secp: &Secp256k1<All>,
//! ) -> Result<(), SignerError> {
//! let input_index = input_index.ok_or(SignerError::InputIndexOutOfRange)?;
//! self.device.sign_input(psbt, input_index)?;
//!
//! Ok(())
//! }
//!
//! fn id(&self, _secp: &Secp256k1<All>) -> SignerId {
//! self.device.get_id()
//! }
//!
//! fn sign_whole_tx(&self) -> bool {
//! false
//! }
//! }
//!
//! let custom_signer = CustomSigner::connect();
//!
//! let descriptor = "wpkh(tpubD6NzVbkrYhZ4Xferm7Pz4VnjdcDPFyjVu5K4iZXQ4pVN8Cks4pHVowTBXBKRhX64pkRyJZJN5xAKj4UDNnLPb5p2sSKXhewoYx5GbTdUFWq/*)";
//! let mut wallet = Wallet::new_offline(descriptor, None, Network::Testnet, MemoryDatabase::default())?;
//! wallet.add_signer(
//! KeychainKind::External,
//! SignerOrdering(200),
//! Arc::new(custom_signer)
//! );
//!
//! # Ok::<_, bdk::Error>(())
//! ```
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::fmt;
use std::ops::Bound::Included;
use std::sync::Arc;
use bitcoin::blockdata::opcodes;
use bitcoin::blockdata::script::Builder as ScriptBuilder;
use bitcoin::hashes::{hash160, Hash};
use bitcoin::secp256k1::{Message, Secp256k1};
use bitcoin::util::bip32::{ChildNumber, DerivationPath, ExtendedPrivKey, Fingerprint};
use bitcoin::util::{bip143, psbt};
use bitcoin::{PrivateKey, Script, SigHash, SigHashType};
use miniscript::descriptor::{DescriptorSecretKey, DescriptorSinglePriv, DescriptorXKey, KeyMap};
use miniscript::{Legacy, MiniscriptKey, Segwitv0};
use super::utils::SecpCtx;
use crate::descriptor::XKeyUtils;
/// Identifier of a signer in the `SignersContainers`. Used as a key to find the right signer among
/// multiple of them
#[derive(Debug, Clone, Ord, PartialOrd, PartialEq, Eq, Hash)]
pub enum SignerId {
/// Bitcoin HASH160 (RIPEMD160 after SHA256) hash of an ECDSA public key
PkHash(hash160::Hash),
/// The fingerprint of a BIP32 extended key
Fingerprint(Fingerprint),
/// Dummy identifier
Dummy(u64),
}
impl From<hash160::Hash> for SignerId {
fn from(hash: hash160::Hash) -> SignerId {
SignerId::PkHash(hash)
}
}
impl From<Fingerprint> for SignerId {
fn from(fing: Fingerprint) -> SignerId {
SignerId::Fingerprint(fing)
}
}
/// Signing error
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum SignerError {
/// The private key is missing for the required public key
MissingKey,
/// The private key in use has the right fingerprint but derives differently than expected
InvalidKey,
/// The user canceled the operation
UserCanceled,
/// Input index is out of range
InputIndexOutOfRange,
/// The `non_witness_utxo` field of the transaction is required to sign this input
MissingNonWitnessUtxo,
/// The `non_witness_utxo` specified is invalid
InvalidNonWitnessUtxo,
/// The `witness_utxo` field of the transaction is required to sign this input
MissingWitnessUtxo,
/// The `witness_script` field of the transaction is required to sign this input
MissingWitnessScript,
/// The fingerprint and derivation path are missing from the psbt input
MissingHdKeypath,
/// The psbt contains a non-`SIGHASH_ALL` sighash in one of its inputs and the user hasn't
/// explicitly allowed them
///
/// To enable signing transactions with non-standard sighashes set
/// [`SignOptions::allow_all_sighashes`] to `true`.
NonStandardSighash,
}
impl fmt::Display for SignerError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for SignerError {}
/// Trait for signers
///
/// This trait can be implemented to provide customized signers to the wallet. For an example see
/// [`this module`](crate::wallet::signer)'s documentation.
pub trait Signer: fmt::Debug + Send + Sync {
/// Sign a PSBT
///
/// The `input_index` argument is only provided if the signer doesn't declare that it signs the
/// whole transaction in one go (see [`Signer::sign_whole_tx`]). Otherwise its value is `None`
/// and can be ignored.
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError>;
/// Return whether or not the signer signs the whole transaction in one go instead of every
/// input individually
fn sign_whole_tx(&self) -> bool;
/// Return the [`SignerId`] for this signer
///
/// The [`SignerId`] can be used to lookup a signer in the [`Wallet`](crate::Wallet)'s signers map or to
/// compare two signers.
fn id(&self, secp: &SecpCtx) -> SignerId;
/// Return the secret key for the signer
///
/// This is used internally to reconstruct the original descriptor that may contain secrets.
/// External signers that are meant to keep keys isolated should just return `None` here (which
/// is the default for this method, if not overridden).
fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
None
}
}
impl Signer for DescriptorXKey<ExtendedPrivKey> {
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError> {
let input_index = input_index.unwrap();
if input_index >= psbt.inputs.len() {
return Err(SignerError::InputIndexOutOfRange);
}
if psbt.inputs[input_index].final_script_sig.is_some()
|| psbt.inputs[input_index].final_script_witness.is_some()
{
return Ok(());
}
let (public_key, full_path) = match psbt.inputs[input_index]
.bip32_derivation
.iter()
.filter_map(|(pk, &(fingerprint, ref path))| {
if self.matches(&(fingerprint, path.clone()), secp).is_some() {
Some((pk, path))
} else {
None
}
})
.next()
{
Some((pk, full_path)) => (pk, full_path.clone()),
None => return Ok(()),
};
let derived_key = match self.origin.clone() {
Some((_fingerprint, origin_path)) => {
let deriv_path = DerivationPath::from(
&full_path.into_iter().cloned().collect::<Vec<ChildNumber>>()
[origin_path.len()..],
);
self.xkey.derive_priv(secp, &deriv_path).unwrap()
}
None => self.xkey.derive_priv(secp, &full_path).unwrap(),
};
if &derived_key.private_key.public_key(secp) != public_key {
Err(SignerError::InvalidKey)
} else {
derived_key.private_key.sign(psbt, Some(input_index), secp)
}
}
fn sign_whole_tx(&self) -> bool {
false
}
fn id(&self, secp: &SecpCtx) -> SignerId {
SignerId::from(self.root_fingerprint(secp))
}
fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
Some(DescriptorSecretKey::XPrv(self.clone()))
}
}
impl Signer for PrivateKey {
fn sign(
&self,
psbt: &mut psbt::PartiallySignedTransaction,
input_index: Option<usize>,
secp: &SecpCtx,
) -> Result<(), SignerError> {
let input_index = input_index.unwrap();
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}
if psbt.inputs[input_index].final_script_sig.is_some()
|| psbt.inputs[input_index].final_script_witness.is_some()
{
return Ok(());
}
let pubkey = self.public_key(secp);
if psbt.inputs[input_index].partial_sigs.contains_key(&pubkey) {
return Ok(());
}
// FIXME: use the presence of `witness_utxo` as an indication that we should make a bip143
// sig. Does this make sense? Should we add an extra argument to explicitly switch between
// these? The original idea was to declare sign() as sign<Ctx: ScriptContext>() and use Ctx,
// but that violates the rules for trait objects, so we can't do it.
let (hash, sighash) = match psbt.inputs[input_index].witness_utxo {
Some(_) => Segwitv0::sighash(psbt, input_index)?,
None => Legacy::sighash(psbt, input_index)?,
};
let signature = secp.sign(
&Message::from_slice(&hash.into_inner()[..]).unwrap(),
&self.key,
);
let mut final_signature = Vec::with_capacity(75);
final_signature.extend_from_slice(&signature.serialize_der());
final_signature.push(sighash.as_u32() as u8);
psbt.inputs[input_index]
.partial_sigs
.insert(pubkey, final_signature);
Ok(())
}
fn sign_whole_tx(&self) -> bool {
false
}
fn id(&self, secp: &SecpCtx) -> SignerId {
SignerId::from(self.public_key(secp).to_pubkeyhash())
}
fn descriptor_secret_key(&self) -> Option<DescriptorSecretKey> {
Some(DescriptorSecretKey::SinglePriv(DescriptorSinglePriv {
key: *self,
origin: None,
}))
}
}
/// Defines the order in which signers are called
///
/// The default value is `100`. Signers with a higher ordering are called later, so they can see
/// the partial signatures that earlier signers have already added to the transaction.
#[derive(Debug, Clone, PartialOrd, PartialEq, Ord, Eq)]
pub struct SignerOrdering(pub usize);
impl std::default::Default for SignerOrdering {
fn default() -> Self {
SignerOrdering(100)
}
}
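The ordering-then-id sort that the container relies on can be sketched with a plain `std::collections::BTreeMap` and a stand-in key type (the `Key` struct and signer labels here are hypothetical; the derived `Ord` compares fields in declaration order, matching the manual `Ord` impl further down this file):

```rust
use std::collections::BTreeMap;

// Stand-in for SignersContainerKey: sorted by `ordering` first, then `id`.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Key {
    ordering: usize, // mirrors SignerOrdering(usize)
    id: u64,         // mirrors SignerId::Dummy(u64)
}

fn main() {
    let mut signers: BTreeMap<Key, &str> = BTreeMap::new();
    // Insert out of order; the map keeps keys sorted by (ordering, id).
    signers.insert(Key { ordering: 200, id: 1 }, "hardware signer");
    signers.insert(Key { ordering: 100, id: 2 }, "software signer (default ordering)");
    signers.insert(Key { ordering: 50, id: 3 }, "early signer");

    let call_order: Vec<&str> = signers.values().copied().collect();
    // Lower ordering signs first; the ordering-200 signer runs last and thus
    // sees the partial signatures added by the earlier ones.
    assert_eq!(call_order[0], "early signer");
    assert_eq!(call_order[2], "hardware signer");
}
```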
#[derive(Debug, Clone)]
struct SignersContainerKey {
id: SignerId,
ordering: SignerOrdering,
}
impl From<(SignerId, SignerOrdering)> for SignersContainerKey {
fn from(tuple: (SignerId, SignerOrdering)) -> Self {
SignersContainerKey {
id: tuple.0,
ordering: tuple.1,
}
}
}
/// Container for multiple signers
#[derive(Debug, Default, Clone)]
pub struct SignersContainer(BTreeMap<SignersContainerKey, Arc<dyn Signer>>);
impl SignersContainer {
/// Create a map of public keys to secret keys
pub fn as_key_map(&self, secp: &SecpCtx) -> KeyMap {
self.0
.values()
.filter_map(|signer| signer.descriptor_secret_key())
.filter_map(|secret| secret.as_public(secp).ok().map(|public| (public, secret)))
.collect()
}
}
impl From<KeyMap> for SignersContainer {
fn from(keymap: KeyMap) -> SignersContainer {
let secp = Secp256k1::new();
let mut container = SignersContainer::new();
for (_, secret) in keymap {
match secret {
DescriptorSecretKey::SinglePriv(private_key) => container.add_external(
SignerId::from(private_key.key.public_key(&secp).to_pubkeyhash()),
SignerOrdering::default(),
Arc::new(private_key.key),
),
DescriptorSecretKey::XPrv(xprv) => container.add_external(
SignerId::from(xprv.root_fingerprint(&secp)),
SignerOrdering::default(),
Arc::new(xprv),
),
};
}
container
}
}
impl SignersContainer {
/// Default constructor
pub fn new() -> Self {
SignersContainer(Default::default())
}
/// Adds an external signer to the container for the specified id. Optionally returns the
/// signer that was previously in the container, if any
pub fn add_external(
&mut self,
id: SignerId,
ordering: SignerOrdering,
signer: Arc<dyn Signer>,
) -> Option<Arc<dyn Signer>> {
self.0.insert((id, ordering).into(), signer)
}
/// Removes a signer from the container and returns it
pub fn remove(&mut self, id: SignerId, ordering: SignerOrdering) -> Option<Arc<dyn Signer>> {
self.0.remove(&(id, ordering).into())
}
/// Returns the list of identifiers of all the signers in the container
pub fn ids(&self) -> Vec<&SignerId> {
self.0
.keys()
.map(|SignersContainerKey { id, .. }| id)
.collect()
}
/// Returns the list of signers in the container, sorted by lowest to highest `ordering`
pub fn signers(&self) -> Vec<&Arc<dyn Signer>> {
self.0.values().collect()
}
/// Finds the signer with the lowest ordering for a given id in the container.
pub fn find(&self, id: SignerId) -> Option<&Arc<dyn Signer>> {
self.0
.range((
Included(&(id.clone(), SignerOrdering(0)).into()),
Included(&(id.clone(), SignerOrdering(usize::MAX)).into()),
))
.filter(|(k, _)| k.id == id)
.map(|(_, v)| v)
.next()
}
}
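`find` pairs `BTreeMap::range` with `Included` bounds and an id filter: because keys sort by ordering first, the range spans every ordering and the filter keeps the first (lowest-ordering) entry whose id matches. A std-only sketch with `(ordering, id)` tuples standing in for `SignersContainerKey`:

```rust
use std::collections::BTreeMap;
use std::ops::Bound::Included;

fn main() {
    // Keys mirror (ordering, id): ordering compares first, as in SignersContainerKey.
    let mut map: BTreeMap<(usize, u64), &str> = BTreeMap::new();
    map.insert((1, 10), "a");
    map.insert((2, 10), "b"); // same id 10, higher ordering
    map.insert((2, 11), "c");

    let wanted_id = 10u64;
    // Scan the full ordering range, keep only matching ids, take the first hit:
    // the matching signer with the lowest ordering.
    let found = map
        .range((Included(&(0usize, wanted_id)), Included(&(usize::MAX, wanted_id))))
        .filter(|((_, id), _)| *id == wanted_id)
        .map(|(_, v)| *v)
        .next();
    assert_eq!(found, Some("a"));
}
```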
/// Options for a software signer
///
/// Adjust the behavior of our software signers and the way a transaction is finalized
#[derive(Debug, Clone)]
pub struct SignOptions {
/// Whether the signer should trust the `witness_utxo`, if the `non_witness_utxo` hasn't been
/// provided
///
/// Defaults to `false` to mitigate the "SegWit bug" which could trick the wallet into
/// paying a fee larger than expected.
///
/// Some wallets, especially if relatively old, might not provide the `non_witness_utxo` for
/// SegWit transactions in the PSBT they generate: in those cases setting this to `true`
/// should correctly produce a signature, at the expense of an increased trust in the creator
/// of the PSBT.
///
/// For more details see: <https://blog.trezor.io/details-of-firmware-updates-for-trezor-one-version-1-9-1-and-trezor-model-t-version-2-3-1-1eba8f60f2dd>
pub trust_witness_utxo: bool,
/// Whether the wallet should assume a specific height has been reached when trying to finalize
/// a transaction
///
/// The wallet will only "use" a timelock to satisfy the spending policy of an input if the
/// timelock height has already been reached. This option allows overriding the "current height" to let the
/// wallet use timelocks in the future to spend a coin.
pub assume_height: Option<u32>,
/// Whether the signer should use the `sighash_type` set in the PSBT when signing, no matter
/// what its value is
///
/// Defaults to `false` which will only allow signing using `SIGHASH_ALL`.
pub allow_all_sighashes: bool,
}
impl Default for SignOptions {
fn default() -> Self {
SignOptions {
trust_witness_utxo: false,
assume_height: None,
allow_all_sighashes: false,
}
}
}
pub(crate) trait ComputeSighash {
fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
) -> Result<(SigHash, SigHashType), SignerError>;
}
impl ComputeSighash for Legacy {
fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
) -> Result<(SigHash, SigHashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}
let psbt_input = &psbt.inputs[input_index];
let tx_input = &psbt.global.unsigned_tx.input[input_index];
let sighash = psbt_input.sighash_type.unwrap_or(SigHashType::All);
let script = match psbt_input.redeem_script {
Some(ref redeem_script) => redeem_script.clone(),
None => {
let non_witness_utxo = psbt_input
.non_witness_utxo
.as_ref()
.ok_or(SignerError::MissingNonWitnessUtxo)?;
let prev_out = non_witness_utxo
.output
.get(tx_input.previous_output.vout as usize)
.ok_or(SignerError::InvalidNonWitnessUtxo)?;
prev_out.script_pubkey.clone()
}
};
Ok((
psbt.global
.unsigned_tx
.signature_hash(input_index, &script, sighash.as_u32()),
sighash,
))
}
}
fn p2wpkh_script_code(script: &Script) -> Script {
ScriptBuilder::new()
.push_opcode(opcodes::all::OP_DUP)
.push_opcode(opcodes::all::OP_HASH160)
.push_slice(&script[2..])
.push_opcode(opcodes::all::OP_EQUALVERIFY)
.push_opcode(opcodes::all::OP_CHECKSIG)
.into_script()
}
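The script code built above is the classic P2PKH template wrapped around the 20-byte key hash found at bytes `2..` of the witness program (`0x00 0x14 <hash>`), as BIP143 prescribes for P2WPKH inputs. A byte-level sketch with the opcode values hard-coded from the script spec (the helper name is hypothetical):

```rust
// Build the BIP143 script code for a v0 P2WPKH scriptPubKey, byte by byte.
fn p2wpkh_script_code_bytes(spk: &[u8]) -> Vec<u8> {
    assert_eq!(&spk[..2], &[0x00, 0x14], "not a v0 P2WPKH program");
    let mut code = Vec::with_capacity(25);
    code.push(0x76); // OP_DUP
    code.push(0xa9); // OP_HASH160
    code.push(0x14); // push 20 bytes
    code.extend_from_slice(&spk[2..]); // the key hash, exactly as in `script[2..]` above
    code.push(0x88); // OP_EQUALVERIFY
    code.push(0xac); // OP_CHECKSIG
    code
}

fn main() {
    let spk: Vec<u8> = [vec![0x00, 0x14], vec![0xab; 20]].concat();
    let code = p2wpkh_script_code_bytes(&spk);
    assert_eq!(code.len(), 25);
    assert_eq!(code[0], 0x76);
    assert_eq!(code[24], 0xac);
}
```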
impl ComputeSighash for Segwitv0 {
fn sighash(
psbt: &psbt::PartiallySignedTransaction,
input_index: usize,
) -> Result<(SigHash, SigHashType), SignerError> {
if input_index >= psbt.inputs.len() || input_index >= psbt.global.unsigned_tx.input.len() {
return Err(SignerError::InputIndexOutOfRange);
}
let psbt_input = &psbt.inputs[input_index];
let tx_input = &psbt.global.unsigned_tx.input[input_index];
let sighash = psbt_input.sighash_type.unwrap_or(SigHashType::All);
// Always try first with the non-witness utxo
let utxo = if let Some(prev_tx) = &psbt_input.non_witness_utxo {
// Check the provided prev-tx
if prev_tx.txid() != tx_input.previous_output.txid {
return Err(SignerError::InvalidNonWitnessUtxo);
}
// The output should be present, if it's missing the `non_witness_utxo` is invalid
prev_tx
.output
.get(tx_input.previous_output.vout as usize)
.ok_or(SignerError::InvalidNonWitnessUtxo)?
} else if let Some(witness_utxo) = &psbt_input.witness_utxo {
// Fallback to the witness_utxo. If we aren't allowed to use it, signing should fail
// before we get to this point
witness_utxo
} else {
// Nothing has been provided
return Err(SignerError::MissingNonWitnessUtxo);
};
let value = utxo.value;
let script = match psbt_input.witness_script {
Some(ref witness_script) => witness_script.clone(),
None => {
if utxo.script_pubkey.is_v0_p2wpkh() {
p2wpkh_script_code(&utxo.script_pubkey)
} else if psbt_input
.redeem_script
.as_ref()
.map(Script::is_v0_p2wpkh)
.unwrap_or(false)
{
p2wpkh_script_code(psbt_input.redeem_script.as_ref().unwrap())
} else {
return Err(SignerError::MissingWitnessScript);
}
}
};
Ok((
bip143::SigHashCache::new(&psbt.global.unsigned_tx).signature_hash(
input_index,
&script,
value,
sighash,
),
sighash,
))
}
}
impl PartialOrd for SignersContainerKey {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for SignersContainerKey {
fn cmp(&self, other: &Self) -> Ordering {
self.ordering
.cmp(&other.ordering)
.then(self.id.cmp(&other.id))
}
}
impl PartialEq for SignersContainerKey {
fn eq(&self, other: &Self) -> bool {
self.id == other.id && self.ordering == other.ordering
}
}
impl Eq for SignersContainerKey {}
#[cfg(test)]
mod signers_container_tests {
use super::*;
use crate::descriptor;
use crate::descriptor::IntoWalletDescriptor;
use crate::keys::{DescriptorKey, IntoDescriptorKey};
use bitcoin::secp256k1::{All, Secp256k1};
use bitcoin::util::bip32;
use bitcoin::util::psbt::PartiallySignedTransaction;
use bitcoin::Network;
use miniscript::ScriptContext;
use std::str::FromStr;
fn is_equal(this: &Arc<dyn Signer>, that: &Arc<DummySigner>) -> bool {
let secp = Secp256k1::new();
this.id(&secp) == that.id(&secp)
}
// Signers added with the same ordering (like `SignerOrdering::default()`) created from a `KeyMap`
// should be preserved and not overwritten.
// This usually happens when a set of signers is created from a descriptor with private keys.
#[test]
fn signers_with_same_ordering() {
let secp = Secp256k1::new();
let (prvkey1, _, _) = setup_keys(TPRV0_STR);
let (prvkey2, _, _) = setup_keys(TPRV1_STR);
let desc = descriptor!(sh(multi(2, prvkey1, prvkey2))).unwrap();
let (_, keymap) = desc
.into_wallet_descriptor(&secp, Network::Testnet)
.unwrap();
let signers = SignersContainer::from(keymap);
assert_eq!(signers.ids().len(), 2);
let signers = signers.signers();
assert_eq!(signers.len(), 2);
}
#[test]
fn signers_sorted_by_ordering() {
let mut signers = SignersContainer::new();
let signer1 = Arc::new(DummySigner { number: 1 });
let signer2 = Arc::new(DummySigner { number: 2 });
let signer3 = Arc::new(DummySigner { number: 3 });
// Mixed-order insertion verifies we are not inserting at head or tail.
signers.add_external(SignerId::Dummy(2), SignerOrdering(2), signer2.clone());
signers.add_external(SignerId::Dummy(1), SignerOrdering(1), signer1.clone());
signers.add_external(SignerId::Dummy(3), SignerOrdering(3), signer3.clone());
// Check that signers are sorted from lowest to highest ordering
let signers = signers.signers();
assert!(is_equal(signers[0], &signer1));
assert!(is_equal(signers[1], &signer2));
assert!(is_equal(signers[2], &signer3));
}
#[test]
fn find_signer_by_id() {
let mut signers = SignersContainer::new();
let signer1 = Arc::new(DummySigner { number: 1 });
let signer2 = Arc::new(DummySigner { number: 2 });
let signer3 = Arc::new(DummySigner { number: 3 });
let signer4 = Arc::new(DummySigner { number: 3 }); // Same ID as `signer3` but will be added with a lower ordering.
let id1 = SignerId::Dummy(1);
let id2 = SignerId::Dummy(2);
let id3 = SignerId::Dummy(3);
let id_nonexistent = SignerId::Dummy(999);
signers.add_external(id1.clone(), SignerOrdering(1), signer1.clone());
signers.add_external(id2.clone(), SignerOrdering(2), signer2.clone());
signers.add_external(id3.clone(), SignerOrdering(3), signer3.clone());
assert!(matches!(signers.find(id1), Some(signer) if is_equal(signer, &signer1)));
assert!(matches!(signers.find(id2), Some(signer) if is_equal(signer, &signer2)));
assert!(matches!(signers.find(id3.clone()), Some(signer) if is_equal(signer, &signer3)));
// `signer4` has the same ID as `signer3` but a lower ordering.
// Looking up `id3` should now return it instead of `signer3`.
signers.add_external(id3.clone(), SignerOrdering(2), signer4.clone());
assert!(matches!(signers.find(id3), Some(signer) if is_equal(signer, &signer4)));
// Can't find anything with ID that doesn't exist
assert!(matches!(signers.find(id_nonexistent), None));
}
#[derive(Debug, Clone, Copy)]
struct DummySigner {
number: u64,
}
impl Signer for DummySigner {
fn sign(
&self,
_psbt: &mut PartiallySignedTransaction,
_input_index: Option<usize>,
_secp: &SecpCtx,
) -> Result<(), SignerError> {
Ok(())
}
fn id(&self, _secp: &SecpCtx) -> SignerId {
SignerId::Dummy(self.number)
}
fn sign_whole_tx(&self) -> bool {
true
}
}
const TPRV0_STR:&str = "tprv8ZgxMBicQKsPdZXrcHNLf5JAJWFAoJ2TrstMRdSKtEggz6PddbuSkvHKM9oKJyFgZV1B7rw8oChspxyYbtmEXYyg1AjfWbL3ho3XHDpHRZf";
const TPRV1_STR:&str = "tprv8ZgxMBicQKsPdpkqS7Eair4YxjcuuvDPNYmKX3sCniCf16tHEVrjjiSXEkFRnUH77yXc6ZcwHHcLNfjdi5qUvw3VDfgYiH5mNsj5izuiu2N";
const PATH: &str = "m/44'/1'/0'/0";
fn setup_keys<Ctx: ScriptContext>(
tprv: &str,
) -> (DescriptorKey<Ctx>, DescriptorKey<Ctx>, Fingerprint) {
let secp: Secp256k1<All> = Secp256k1::new();
let path = bip32::DerivationPath::from_str(PATH).unwrap();
let tprv = bip32::ExtendedPrivKey::from_str(tprv).unwrap();
let tpub = bip32::ExtendedPubKey::from_private(&secp, &tprv);
let fingerprint = tprv.fingerprint(&secp);
let prvkey = (tprv, path.clone()).into_descriptor_key().unwrap();
let pubkey = (tpub, path).into_descriptor_key().unwrap();
(prvkey, pubkey, fingerprint)
}
}

src/wallet/time.rs Normal file
@@ -0,0 +1,73 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Cross-platform time
//!
//! This module provides a function to get the current timestamp that works on all the platforms
//! supported by the library.
//!
//! It can be useful to compare it with the timestamps found in
//! [`TransactionDetails`](crate::types::TransactionDetails).
use std::time::Duration;
#[cfg(target_arch = "wasm32")]
use js_sys::Date;
#[cfg(not(target_arch = "wasm32"))]
use std::time::{Instant as SystemInstant, SystemTime, UNIX_EPOCH};
/// Return the current timestamp in seconds
#[cfg(not(target_arch = "wasm32"))]
pub fn get_timestamp() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs()
}
/// Return the current timestamp in seconds
#[cfg(target_arch = "wasm32")]
pub fn get_timestamp() -> u64 {
let millis = Date::now();
(millis / 1000.0) as u64
}
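The module doc suggests comparing this timestamp with those stored in `TransactionDetails`; a std-only sketch of that comparison (the confirmation timestamp below is a hypothetical example value):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Same shape as the non-wasm `get_timestamp` above: seconds since the UNIX epoch.
fn get_timestamp() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before the UNIX epoch")
        .as_secs()
}

fn main() {
    // A hypothetical confirmation timestamp, as might be found in a
    // transaction's details.
    let tx_timestamp: u64 = 1_640_000_000;
    let age_secs = get_timestamp().saturating_sub(tx_timestamp);
    println!("transaction is ~{} hours old", age_secs / 3600);
    assert!(get_timestamp() >= tx_timestamp);
}
```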
#[cfg(not(target_arch = "wasm32"))]
pub(crate) struct Instant(SystemInstant);
#[cfg(target_arch = "wasm32")]
pub(crate) struct Instant(Duration);
impl Instant {
#[cfg(not(target_arch = "wasm32"))]
pub fn new() -> Self {
Instant(SystemInstant::now())
}
#[cfg(target_arch = "wasm32")]
pub fn new() -> Self {
let millis = Date::now();
let secs = millis / 1000.0;
let nanos = (millis % 1000.0) * 1e6;
Instant(Duration::new(secs as u64, nanos as u32))
}
#[cfg(not(target_arch = "wasm32"))]
pub fn elapsed(&self) -> Duration {
self.0.elapsed()
}
#[cfg(target_arch = "wasm32")]
pub fn elapsed(&self) -> Duration {
let now = Instant::new();
now.0.checked_sub(self.0).unwrap_or(Duration::new(0, 0))
}
}
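On wasm, `Date::now` is not monotonic, which is why `elapsed` clamps with `checked_sub` instead of subtracting directly. The clamping behavior can be seen with plain `Duration`s:

```rust
use std::time::Duration;

fn main() {
    let earlier = Duration::from_millis(1_500);
    let later = Duration::from_millis(2_250);

    // Normal case: elapsed is the difference.
    let elapsed = later.checked_sub(earlier).unwrap_or(Duration::new(0, 0));
    assert_eq!(elapsed, Duration::from_millis(750));

    // If the clock went backwards, clamp to zero instead of panicking,
    // just as the wasm `Instant::elapsed` above does.
    let clamped = earlier.checked_sub(later).unwrap_or(Duration::new(0, 0));
    assert_eq!(clamped, Duration::ZERO);
}
```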

src/wallet/tx_builder.rs Normal file
@@ -0,0 +1,893 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Transaction builder
//!
//! ## Example
//!
//! ```
//! # use std::str::FromStr;
//! # use bitcoin::*;
//! # use bdk::*;
//! # use bdk::wallet::tx_builder::CreateTx;
//! # let to_address = Address::from_str("2N4eQYCbKUHCCTUjBJeHcJp9ok6J2GZsTDt").unwrap();
//! # let wallet = doctest_wallet!();
//! // create a TxBuilder from a wallet
//! let mut tx_builder = wallet.build_tx();
//!
//! tx_builder
//! // Create a transaction with one output to `to_address` of 50_000 satoshi
//! .add_recipient(to_address.script_pubkey(), 50_000)
//! // With a custom fee rate of 5.0 satoshi/vbyte
//! .fee_rate(FeeRate::from_sat_per_vb(5.0))
//! // Only spend non-change outputs
//! .do_not_spend_change()
//! // Turn on RBF signaling
//! .enable_rbf();
//! let (psbt, tx_details) = tx_builder.finish()?;
//! # Ok::<(), bdk::Error>(())
//! ```
use std::collections::BTreeMap;
use std::collections::HashSet;
use std::default::Default;
use std::marker::PhantomData;
use bitcoin::util::psbt::{self, PartiallySignedTransaction as Psbt};
use bitcoin::{OutPoint, Script, SigHashType, Transaction};
use miniscript::descriptor::DescriptorTrait;
use super::coin_selection::{CoinSelectionAlgorithm, DefaultCoinSelectionAlgorithm};
use crate::{database::BatchDatabase, Error, Utxo, Wallet};
use crate::{
types::{FeeRate, KeychainKind, LocalUtxo, WeightedUtxo},
TransactionDetails,
};
/// Context in which the [`TxBuilder`] is valid
pub trait TxBuilderContext: std::fmt::Debug + Default + Clone {}
/// Marker type to indicate the [`TxBuilder`] is being used to create a new transaction (as opposed
/// to bumping the fee of an existing one).
#[derive(Debug, Default, Clone)]
pub struct CreateTx;
impl TxBuilderContext for CreateTx {}
/// Marker type to indicate the [`TxBuilder`] is being used to bump the fee of an existing transaction.
#[derive(Debug, Default, Clone)]
pub struct BumpFee;
impl TxBuilderContext for BumpFee {}
/// A transaction builder
///
/// A `TxBuilder` is created by calling [`build_tx`] or [`build_fee_bump`] on a wallet. After
/// creating it, you set options on it until finally calling [`finish`] to consume the builder and
/// generate the transaction.
///
/// Each option setting method on `TxBuilder` takes and returns `&mut self` so you can chain calls
/// as in the following example:
///
/// ```
/// # use bdk::*;
/// # use bdk::wallet::tx_builder::*;
/// # use bitcoin::*;
/// # use core::str::FromStr;
/// # let wallet = doctest_wallet!();
/// # let addr1 = Address::from_str("2N4eQYCbKUHCCTUjBJeHcJp9ok6J2GZsTDt").unwrap();
/// # let addr2 = addr1.clone();
/// // chaining
/// let (psbt1, details) = {
/// let mut builder = wallet.build_tx();
/// builder
/// .ordering(TxOrdering::Untouched)
/// .add_recipient(addr1.script_pubkey(), 50_000)
/// .add_recipient(addr2.script_pubkey(), 50_000);
/// builder.finish()?
/// };
///
/// // non-chaining
/// let (psbt2, details) = {
/// let mut builder = wallet.build_tx();
/// builder.ordering(TxOrdering::Untouched);
/// for addr in &[addr1, addr2] {
/// builder.add_recipient(addr.script_pubkey(), 50_000);
/// }
/// builder.finish()?
/// };
///
/// assert_eq!(
/// psbt1.global.unsigned_tx.output[..2],
/// psbt2.global.unsigned_tx.output[..2]
/// );
/// # Ok::<(), bdk::Error>(())
/// ```
///
/// At the moment [`coin_selection`] is an exception to the rule as it consumes `self`.
/// This means it is usually best to call [`coin_selection`] on the return value of `build_tx` before assigning it.
///
/// For further examples see [this module](super::tx_builder)'s documentation.
///
/// [`build_tx`]: Wallet::build_tx
/// [`build_fee_bump`]: Wallet::build_fee_bump
/// [`finish`]: Self::finish
/// [`coin_selection`]: Self::coin_selection
#[derive(Debug)]
pub struct TxBuilder<'a, B, D, Cs, Ctx> {
pub(crate) wallet: &'a Wallet<B, D>,
pub(crate) params: TxParams,
pub(crate) coin_selection: Cs,
pub(crate) phantom: PhantomData<Ctx>,
}
/// The parameters for transaction creation sans coin selection algorithm.
//TODO: TxParams should eventually be exposed publicly.
#[derive(Default, Debug, Clone)]
pub(crate) struct TxParams {
pub(crate) recipients: Vec<(Script, u64)>,
pub(crate) drain_wallet: bool,
pub(crate) drain_to: Option<Script>,
pub(crate) fee_policy: Option<FeePolicy>,
pub(crate) internal_policy_path: Option<BTreeMap<String, Vec<usize>>>,
pub(crate) external_policy_path: Option<BTreeMap<String, Vec<usize>>>,
pub(crate) utxos: Vec<WeightedUtxo>,
pub(crate) unspendable: HashSet<OutPoint>,
pub(crate) manually_selected_only: bool,
pub(crate) sighash: Option<SigHashType>,
pub(crate) ordering: TxOrdering,
pub(crate) locktime: Option<u32>,
pub(crate) rbf: Option<RbfValue>,
pub(crate) version: Option<Version>,
pub(crate) change_policy: ChangeSpendPolicy,
pub(crate) only_witness_utxo: bool,
pub(crate) add_global_xpubs: bool,
pub(crate) include_output_redeem_witness_script: bool,
pub(crate) bumping_fee: Option<PreviousFee>,
}
#[derive(Clone, Copy, Debug)]
pub(crate) struct PreviousFee {
pub absolute: u64,
pub rate: f32,
}
#[derive(Debug, Clone, Copy)]
pub(crate) enum FeePolicy {
FeeRate(FeeRate),
FeeAmount(u64),
}
impl std::default::Default for FeePolicy {
fn default() -> Self {
FeePolicy::FeeRate(FeeRate::default_min_relay_fee())
}
}
impl<'a, Cs: Clone, Ctx, B, D> Clone for TxBuilder<'a, B, D, Cs, Ctx> {
fn clone(&self) -> Self {
TxBuilder {
wallet: self.wallet,
params: self.params.clone(),
coin_selection: self.coin_selection.clone(),
phantom: PhantomData,
}
}
}
// methods supported by both contexts, for any CoinSelectionAlgorithm
impl<'a, B, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>, Ctx: TxBuilderContext>
TxBuilder<'a, B, D, Cs, Ctx>
{
/// Set a custom fee rate
pub fn fee_rate(&mut self, fee_rate: FeeRate) -> &mut Self {
self.params.fee_policy = Some(FeePolicy::FeeRate(fee_rate));
self
}
/// Set an absolute fee
pub fn fee_absolute(&mut self, fee_amount: u64) -> &mut Self {
self.params.fee_policy = Some(FeePolicy::FeeAmount(fee_amount));
self
}
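Both setters overwrite the same `Option<FeePolicy>`, so whichever of `fee_rate`/`fee_absolute` is called last wins. A std-only stand-in sketch (types mirrored and simplified from the ones above):

```rust
// Simplified mirrors of FeePolicy / TxParams / TxBuilder for illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum FeePolicy {
    FeeRate(f32),   // sat/vbyte
    FeeAmount(u64), // absolute sats
}

#[derive(Default)]
struct Params {
    fee_policy: Option<FeePolicy>,
}

struct Builder {
    params: Params,
}

impl Builder {
    fn fee_rate(&mut self, sat_per_vb: f32) -> &mut Self {
        self.params.fee_policy = Some(FeePolicy::FeeRate(sat_per_vb));
        self
    }
    fn fee_absolute(&mut self, amount: u64) -> &mut Self {
        self.params.fee_policy = Some(FeePolicy::FeeAmount(amount));
        self
    }
}

fn main() {
    let mut b = Builder { params: Params::default() };
    // The later call replaces the earlier one.
    b.fee_rate(5.0).fee_absolute(1_000);
    assert_eq!(b.params.fee_policy, Some(FeePolicy::FeeAmount(1_000)));
}
```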
/// Set the policy path to use while creating the transaction for a given keychain.
///
/// This method accepts a map where the key is the policy node id (see
/// [`Policy::id`](crate::descriptor::Policy::id)) and the value is the list of the indexes of
/// the items that are intended to be satisfied from the policy node (see
/// [`SatisfiableItem::Thresh::items`](crate::descriptor::policy::SatisfiableItem::Thresh::items)).
///
/// ## Example
///
/// An example of when the policy path is needed is the following descriptor:
/// `wsh(thresh(2,pk(A),sj:and_v(v:pk(B),n:older(6)),snj:and_v(v:pk(C),after(630000))))`,
/// derived from the miniscript policy `thresh(2,pk(A),and(pk(B),older(6)),and(pk(C),after(630000)))`.
/// It declares three descriptor fragments, and at the top level it uses `thresh()` to
/// ensure that at least two of them are satisfied. The individual fragments are:
///
/// 1. `pk(A)`
/// 2. `and(pk(B),older(6))`
/// 3. `and(pk(C),after(630000))`
///
/// When those conditions are combined in pairs, it's clear that the transaction needs to be created
/// differently depending on how the user intends to satisfy the policy afterwards:
///
/// * If fragments `1` and `2` are used, the transaction will need to use a specific
/// `n_sequence` in order to spend an `OP_CSV` branch.
/// * If fragments `1` and `3` are used, the transaction will need to use a specific `locktime`
/// in order to spend an `OP_CLTV` branch.
/// * If fragments `2` and `3` are used, the transaction will need both.
///
/// When the spending policy is represented as a tree (see
/// [`Wallet::policies`](super::Wallet::policies)), every node
/// is assigned a unique identifier that can be used in the policy path to specify which of
/// the node's children the user intends to satisfy: for instance, assuming the `thresh()`
/// root node of this example has an id of `aabbccdd`, the policy path map would look like:
///
/// `{ "aabbccdd" => [0, 1] }`
///
/// where the key is the node's id, and the value is a list of the children that should be
/// used, in no particular order.
///
/// If a particularly complex descriptor has multiple ambiguous thresholds in its structure,
/// multiple entries can be added to the map, one for each node that requires an explicit path.
///
/// ```
/// # use std::str::FromStr;
/// # use std::collections::BTreeMap;
/// # use bitcoin::*;
/// # use bdk::*;
/// # let to_address = Address::from_str("2N4eQYCbKUHCCTUjBJeHcJp9ok6J2GZsTDt").unwrap();
/// # let wallet = doctest_wallet!();
/// let mut path = BTreeMap::new();
/// path.insert("aabbccdd".to_string(), vec![0, 1]);
///
/// let builder = wallet
/// .build_tx()
/// .add_recipient(to_address.script_pubkey(), 50_000)
/// .policy_path(path, KeychainKind::External);
///
/// # Ok::<(), bdk::Error>(())
/// ```
pub fn policy_path(
&mut self,
policy_path: BTreeMap<String, Vec<usize>>,
keychain: KeychainKind,
) -> &mut Self {
let to_update = match keychain {
KeychainKind::Internal => &mut self.params.internal_policy_path,
KeychainKind::External => &mut self.params.external_policy_path,
};
*to_update = Some(policy_path);
self
}
/// Add the list of outpoints to the internal list of UTXOs that **must** be spent.
///
/// If an error occurs while adding any of the UTXOs then none of them are added and the error is returned.
///
/// These have priority over the "unspendable" utxos, meaning that if a utxo is present both in
/// the "utxos" and the "unspendable" list, it will be spent.
pub fn add_utxos(&mut self, outpoints: &[OutPoint]) -> Result<&mut Self, Error> {
let utxos = outpoints
.iter()
.map(|outpoint| self.wallet.get_utxo(*outpoint)?.ok_or(Error::UnknownUtxo))
.collect::<Result<Vec<_>, _>>()?;
for utxo in utxos {
let descriptor = self.wallet.get_descriptor_for_keychain(utxo.keychain);
let satisfaction_weight = descriptor.max_satisfaction_weight().unwrap();
self.params.utxos.push(WeightedUtxo {
satisfaction_weight,
utxo: Utxo::Local(utxo),
});
}
Ok(self)
}
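The all-or-nothing behavior documented above falls out of collecting into `Result`: the first failed lookup short-circuits before anything is pushed into the builder's utxo list. A std-only sketch with a hypothetical `lookup` standing in for `wallet.get_utxo(..)?.ok_or(Error::UnknownUtxo)`:

```rust
// Hypothetical lookup: even outpoints "exist", odd ones are unknown.
fn lookup(outpoint: u32) -> Result<u32, &'static str> {
    if outpoint % 2 == 0 { Ok(outpoint) } else { Err("unknown utxo") }
}

fn main() {
    // One bad outpoint poisons the whole batch: collect() stops at the first Err,
    // so no partial list is ever produced.
    let batch: Result<Vec<u32>, _> = [2, 4, 5].iter().map(|&o| lookup(o)).collect();
    assert_eq!(batch, Err("unknown utxo"));

    // An all-good batch yields every utxo.
    let batch: Result<Vec<u32>, _> = [2, 4, 6].iter().map(|&o| lookup(o)).collect();
    assert_eq!(batch, Ok(vec![2, 4, 6]));
}
```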
/// Add a utxo to the internal list of utxos that **must** be spent
///
/// These have priority over the "unspendable" utxos, meaning that if a utxo is present both in
/// the "utxos" and the "unspendable" list, it will be spent.
pub fn add_utxo(&mut self, outpoint: OutPoint) -> Result<&mut Self, Error> {
self.add_utxos(&[outpoint])
}
/// Add a foreign UTXO i.e. a UTXO not owned by this wallet.
///
/// At a minimum to add a foreign UTXO we need:
///
/// 1. `outpoint`: To add it to the raw transaction.
/// 2. `psbt_input`: To know the value.
/// 3. `satisfaction_weight`: To know how much weight/vbytes the input will add to the transaction for fee calculation.
///
/// There are several security concerns about adding foreign UTXOs that application
/// developers should consider. First, how do you know the value of the input is correct? If a
/// `non_witness_utxo` is provided in the `psbt_input` then this method implicitly verifies the
/// value by checking it against the transaction. If only a `witness_utxo` is provided then this
/// method doesn't verify the value but just takes it as a given -- it is up to you to check
/// that whoever sent you the `input_psbt` was not lying!
///
/// Secondly, you must somehow provide `satisfaction_weight` of the input. Depending on your
/// application it may be important that this be known precisely. If not, a malicious
/// counterparty may fool you into putting in a value that is too low, giving the transaction a
/// lower than expected feerate. They could also fool you into putting a value that is too high
/// causing you to pay a fee that is too high. The party who is broadcasting the transaction can
/// of course check the real input weight matches the expected weight prior to broadcasting.
///
/// To guarantee the `satisfaction_weight` is correct, you can require the party providing the
/// `psbt_input` provide a miniscript descriptor for the input so you can check it against the
/// `script_pubkey` and then ask it for the [`max_satisfaction_weight`].
///
/// This is an **EXPERIMENTAL** feature, API and other major changes are expected.
///
/// # Errors
///
/// This method returns errors in the following circumstances:
///
/// 1. The `psbt_input` does not contain a `witness_utxo` or `non_witness_utxo`.
/// 2. The data in `non_witness_utxo` does not match what is in `outpoint`.
///
/// Note unless you set [`only_witness_utxo`] any `psbt_input` you pass to this method must
/// have `non_witness_utxo` set otherwise you will get an error when [`finish`] is called.
///
/// [`only_witness_utxo`]: Self::only_witness_utxo
/// [`finish`]: Self::finish
/// [`max_satisfaction_weight`]: miniscript::Descriptor::max_satisfaction_weight
pub fn add_foreign_utxo(
&mut self,
outpoint: OutPoint,
psbt_input: psbt::Input,
satisfaction_weight: usize,
) -> Result<&mut Self, Error> {
if psbt_input.witness_utxo.is_none() {
match psbt_input.non_witness_utxo.as_ref() {
Some(tx) => {
if tx.txid() != outpoint.txid {
return Err(Error::Generic(
"Foreign utxo outpoint does not match PSBT input".into(),
));
}
if tx.output.len() <= outpoint.vout as usize {
return Err(Error::InvalidOutpoint(outpoint));
}
}
None => {
return Err(Error::Generic(
"Foreign utxo missing witness_utxo or non_witness_utxo".into(),
))
}
}
}
self.params.utxos.push(WeightedUtxo {
satisfaction_weight,
utxo: Utxo::Foreign {
outpoint,
psbt_input: Box::new(psbt_input),
},
});
Ok(self)
}
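The outpoint/`non_witness_utxo` consistency rules enforced above can be sketched standalone. `FakeInput` and the `u32` txids below are illustrative stand-ins, not BDK types:

```rust
// Illustrative stand-in for a PSBT input: only the two fields the check reads.
struct FakeInput {
    witness_utxo: Option<u64>,     // value in sats, if segwit data is present
    non_witness_txid: Option<u32>, // txid of the full previous tx, if present
}

fn validate(input: &FakeInput, outpoint_txid: u32) -> Result<(), &'static str> {
    if input.witness_utxo.is_some() {
        // With only a witness_utxo, the value is taken on trust.
        return Ok(());
    }
    match input.non_witness_txid {
        Some(txid) if txid == outpoint_txid => Ok(()),
        Some(_) => Err("foreign utxo outpoint does not match PSBT input"),
        None => Err("foreign utxo missing witness_utxo or non_witness_utxo"),
    }
}

fn main() {
    let witness_only = FakeInput { witness_utxo: Some(10_000), non_witness_txid: None };
    assert!(validate(&witness_only, 7).is_ok());
    let mismatched = FakeInput { witness_utxo: None, non_witness_txid: Some(8) };
    assert!(validate(&mismatched, 7).is_err());
    let empty = FakeInput { witness_utxo: None, non_witness_txid: None };
    assert!(validate(&empty, 7).is_err());
    println!("ok");
}
```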
/// Only spend utxos added by [`add_utxo`].
///
/// The wallet will **not** add additional utxos to the transaction even if they are needed to
/// make the transaction valid.
///
/// [`add_utxo`]: Self::add_utxo
pub fn manually_selected_only(&mut self) -> &mut Self {
self.params.manually_selected_only = true;
self
}
/// Replace the internal list of unspendable utxos with a new list
///
/// It's important to note that the "must-be-spent" utxos added with [`TxBuilder::add_utxo`]
/// have priority over these. See the docs of the two linked methods for more details.
pub fn unspendable(&mut self, unspendable: Vec<OutPoint>) -> &mut Self {
self.params.unspendable = unspendable.into_iter().collect();
self
}
/// Add a utxo to the internal list of unspendable utxos
///
/// It's important to note that the "must-be-spent" utxos added with [`TxBuilder::add_utxo`]
/// have priority over this. See the docs of the two linked methods for more details.
pub fn add_unspendable(&mut self, unspendable: OutPoint) -> &mut Self {
self.params.unspendable.insert(unspendable);
self
}
/// Sign with a specific sig hash
///
/// **Use this option very carefully**
pub fn sighash(&mut self, sighash: SigHashType) -> &mut Self {
self.params.sighash = Some(sighash);
self
}
/// Choose the ordering for inputs and outputs of the transaction
pub fn ordering(&mut self, ordering: TxOrdering) -> &mut Self {
self.params.ordering = ordering;
self
}
/// Use a specific nLockTime while creating the transaction
///
/// This can cause conflicts if the wallet's descriptors contain an "after" (OP_CLTV) operator.
pub fn nlocktime(&mut self, locktime: u32) -> &mut Self {
self.params.locktime = Some(locktime);
self
}
/// Build a transaction with a specific version
///
/// The `version` should always be greater than `0` and greater than `1` if the wallet's
/// descriptors contain an "older" (OP_CSV) operator.
pub fn version(&mut self, version: i32) -> &mut Self {
self.params.version = Some(Version(version));
self
}
/// Do not spend change outputs
///
/// This effectively adds all the change outputs to the "unspendable" list. See
/// [`TxBuilder::unspendable`].
pub fn do_not_spend_change(&mut self) -> &mut Self {
self.params.change_policy = ChangeSpendPolicy::ChangeForbidden;
self
}
/// Only spend change outputs
///
/// This effectively adds all the non-change outputs to the "unspendable" list. See
/// [`TxBuilder::unspendable`].
pub fn only_spend_change(&mut self) -> &mut Self {
self.params.change_policy = ChangeSpendPolicy::OnlyChange;
self
}
/// Set a specific [`ChangeSpendPolicy`]. See [`TxBuilder::do_not_spend_change`] and
/// [`TxBuilder::only_spend_change`] for some shortcuts.
pub fn change_policy(&mut self, change_policy: ChangeSpendPolicy) -> &mut Self {
self.params.change_policy = change_policy;
self
}
/// Only fill in the [`psbt::Input::witness_utxo`](bitcoin::util::psbt::Input::witness_utxo) field when spending from
/// SegWit descriptors.
///
/// This reduces the size of the PSBT, but some signers might reject them due to the lack of
/// the `non_witness_utxo`.
pub fn only_witness_utxo(&mut self) -> &mut Self {
self.params.only_witness_utxo = true;
self
}
/// Fill in the [`psbt::Output::redeem_script`](bitcoin::util::psbt::Output::redeem_script) and
/// [`psbt::Output::witness_script`](bitcoin::util::psbt::Output::witness_script) fields.
///
/// This is useful for signers which always require it, like ColdCard hardware wallets.
pub fn include_output_redeem_witness_script(&mut self) -> &mut Self {
self.params.include_output_redeem_witness_script = true;
self
}
/// Fill in the `PSBT_GLOBAL_XPUB` field with the extended keys contained in both the external
/// and internal descriptors
///
/// This is useful for offline signers that take part in a multisig. Some hardware wallets like
/// BitBox and ColdCard are known to require this.
pub fn add_global_xpubs(&mut self) -> &mut Self {
self.params.add_global_xpubs = true;
self
}
/// Spend all the available inputs. This respects filters like [`TxBuilder::unspendable`] and the change policy.
pub fn drain_wallet(&mut self) -> &mut Self {
self.params.drain_wallet = true;
self
}
/// Choose the coin selection algorithm
///
/// Overrides the [`DefaultCoinSelectionAlgorithm`](super::coin_selection::DefaultCoinSelectionAlgorithm).
///
/// Note that this function consumes the builder and returns it so it is usually best to put this as the first call on the builder.
pub fn coin_selection<P: CoinSelectionAlgorithm<D>>(
self,
coin_selection: P,
) -> TxBuilder<'a, B, D, P, Ctx> {
TxBuilder {
wallet: self.wallet,
params: self.params,
coin_selection,
phantom: PhantomData,
}
}
/// Finish building the transaction.
///
/// Returns the [`BIP174`] "PSBT" and summary details about the transaction.
///
/// [`BIP174`]: https://github.com/bitcoin/bips/blob/master/bip-0174.mediawiki
pub fn finish(self) -> Result<(Psbt, TransactionDetails), Error> {
self.wallet.create_tx(self.coin_selection, self.params)
}
/// Enable signaling RBF
///
/// This will use the default nSequence value of `0xFFFFFFFD`.
pub fn enable_rbf(&mut self) -> &mut Self {
self.params.rbf = Some(RbfValue::Default);
self
}
/// Enable signaling RBF with a specific nSequence value
///
/// This can cause conflicts if the wallet's descriptors contain an "older" (OP_CSV) operator
/// and the given `nsequence` is lower than the CSV value.
///
/// If the `nsequence` is higher than `0xFFFFFFFD` an error will be thrown, since it would not
/// be a valid nSequence to signal RBF.
pub fn enable_rbf_with_sequence(&mut self, nsequence: u32) -> &mut Self {
self.params.rbf = Some(RbfValue::Value(nsequence));
self
}
}
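The bound mentioned in `enable_rbf_with_sequence`'s docs comes from BIP125: an input signals replaceability only if its nSequence is below `0xFFFFFFFE`. A standalone check (`signals_rbf` is an illustrative helper, not a BDK function):

```rust
// BIP125: a transaction signals opt-in RBF if at least one of its inputs has
// an nSequence below 0xFFFFFFFE. BDK's default RBF value is 0xFFFFFFFD.
fn signals_rbf(nsequence: u32) -> bool {
    nsequence < 0xFFFF_FFFE
}

fn main() {
    assert!(signals_rbf(0xFFFF_FFFD));  // the RbfValue::Default value
    assert!(signals_rbf(0));            // also signals (but interacts with timelocks)
    assert!(!signals_rbf(0xFFFF_FFFE)); // too high to signal RBF
    assert!(!signals_rbf(0xFFFF_FFFF)); // "final" sequence
    println!("ok");
}
```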
impl<'a, B, D: BatchDatabase, Cs: CoinSelectionAlgorithm<D>> TxBuilder<'a, B, D, Cs, CreateTx> {
/// Replace the recipients already added with a new list
pub fn set_recipients(&mut self, recipients: Vec<(Script, u64)>) -> &mut Self {
self.params.recipients = recipients;
self
}
/// Add a recipient to the internal list
pub fn add_recipient(&mut self, script_pubkey: Script, amount: u64) -> &mut Self {
self.params.recipients.push((script_pubkey, amount));
self
}
/// Add data as an output, using OP_RETURN
pub fn add_data(&mut self, data: &[u8]) -> &mut Self {
let script = Script::new_op_return(data);
self.add_recipient(script, 0u64);
self
}
/// Sets the address to *drain* excess coins to.
///
/// Usually, when there are excess coins they are sent to a change address generated by the
/// wallet. This option replaces the usual change address with an arbitrary `script_pubkey` of
/// your choosing. Just as with a change output, if the drain output is not needed (the excess
/// coins are too small) it will not be included in the resulting transaction. The only
/// difference is that it is valid to use `drain_to` without setting any ordinary recipients
/// with [`add_recipient`] (but it is perfectly fine to add recipients as well).
///
/// When bumping the fees of a transaction made with this option, you probably want to
/// use [`allow_shrinking`] to allow this output to be reduced to pay for the extra fees.
///
/// # Example
///
/// `drain_to` is very useful for draining all the coins in a wallet with [`drain_wallet`] to a
/// single address.
///
/// ```
/// # use std::str::FromStr;
/// # use bitcoin::*;
/// # use bdk::*;
/// # use bdk::wallet::tx_builder::CreateTx;
/// # let to_address = Address::from_str("2N4eQYCbKUHCCTUjBJeHcJp9ok6J2GZsTDt").unwrap();
/// # let wallet = doctest_wallet!();
/// let mut tx_builder = wallet.build_tx();
///
/// tx_builder
/// // Spend all outputs in this wallet.
/// .drain_wallet()
/// // Send the excess (which is all the coins minus the fee) to this address.
/// .drain_to(to_address.script_pubkey())
/// .fee_rate(FeeRate::from_sat_per_vb(5.0))
/// .enable_rbf();
/// let (psbt, tx_details) = tx_builder.finish()?;
/// # Ok::<(), bdk::Error>(())
/// ```
///
/// [`allow_shrinking`]: Self::allow_shrinking
/// [`add_recipient`]: Self::add_recipient
/// [`drain_wallet`]: Self::drain_wallet
pub fn drain_to(&mut self, script_pubkey: Script) -> &mut Self {
self.params.drain_to = Some(script_pubkey);
self
}
}
// methods supported only by bump_fee
impl<'a, B, D: BatchDatabase> TxBuilder<'a, B, D, DefaultCoinSelectionAlgorithm, BumpFee> {
/// Explicitly tells the wallet that it is allowed to reduce the fee of the output matching this
/// `script_pubkey` in order to bump the transaction fee. Without specifying this the wallet
/// will attempt to find a change output to shrink instead.
///
/// **Note** that the output may shrink to below the dust limit and therefore be removed. If it is
/// preserved then it is currently not guaranteed to be in the same position as it was
/// originally.
///
/// Returns an `Err` if `script_pubkey` can't be found among the recipients of the
/// transaction we are bumping.
pub fn allow_shrinking(&mut self, script_pubkey: Script) -> Result<&mut Self, Error> {
match self
.params
.recipients
.iter()
.position(|(recipient_script, _)| *recipient_script == script_pubkey)
{
Some(position) => {
self.params.recipients.remove(position);
self.params.drain_to = Some(script_pubkey);
Ok(self)
}
None => Err(Error::Generic(format!(
"{} was not in the original transaction",
script_pubkey
))),
}
}
}
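The recipient-to-drain swap performed by `allow_shrinking` can be sketched with plain strings standing in for `Script`s (all names illustrative):

```rust
fn main() {
    // Recipients of the transaction being fee-bumped: (script_pubkey, amount).
    let mut recipients = vec![("spk_alice", 50_000u64), ("spk_change", 12_000)];
    let mut drain_to: Option<&str> = None;

    // Find the matching recipient, remove it from the fixed outputs, and mark
    // it as the drain output so the fee bump may shrink it.
    let target = "spk_change";
    match recipients.iter().position(|(spk, _)| *spk == target) {
        Some(pos) => {
            let (spk, _) = recipients.remove(pos);
            drain_to = Some(spk);
        }
        None => panic!("{} was not in the original transaction", target),
    }

    assert_eq!(recipients, vec![("spk_alice", 50_000u64)]);
    assert_eq!(drain_to, Some("spk_change"));
    println!("ok");
}
```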
/// Ordering of the transaction's inputs and outputs
#[derive(Debug, Ord, PartialOrd, Eq, PartialEq, Hash, Clone, Copy)]
pub enum TxOrdering {
/// Randomized (default)
Shuffle,
/// Unchanged
Untouched,
/// BIP69 / Lexicographic
Bip69Lexicographic,
}
impl Default for TxOrdering {
fn default() -> Self {
TxOrdering::Shuffle
}
}
impl TxOrdering {
/// Sort transaction inputs and outputs by [`TxOrdering`] variant
pub fn sort_tx(&self, tx: &mut Transaction) {
match self {
TxOrdering::Untouched => {}
TxOrdering::Shuffle => {
use rand::seq::SliceRandom;
#[cfg(test)]
use rand::SeedableRng;
#[cfg(not(test))]
let mut rng = rand::thread_rng();
#[cfg(test)]
let mut rng = rand::rngs::StdRng::seed_from_u64(0);
tx.output.shuffle(&mut rng);
}
TxOrdering::Bip69Lexicographic => {
tx.input.sort_unstable_by_key(|txin| {
(txin.previous_output.txid, txin.previous_output.vout)
});
tx.output
.sort_unstable_by_key(|txout| (txout.value, txout.script_pubkey.clone()));
}
}
}
}
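The `Bip69Lexicographic` branch above sorts inputs by `(txid, vout)` and outputs by `(value, script_pubkey)`. The same ordering on plain tuples, mirroring the expectations of the test module below:

```rust
fn main() {
    // Inputs as (txid bytes, vout): BIP69 orders by txid first, then vout.
    let mut inputs = vec![([0xAAu8; 32], 1u32), ([0x11; 32], 7), ([0xAA; 32], 0)];
    inputs.sort_unstable_by_key(|(txid, vout)| (*txid, *vout));
    assert_eq!(inputs, vec![([0x11; 32], 7), ([0xAA; 32], 0), ([0xAA; 32], 1)]);

    // Outputs as (value, script bytes): ordered by value first, then script.
    let mut outputs = vec![(800u64, vec![0xFF]), (1000, vec![0xAA, 0xEE]), (1000, vec![0xAA])];
    outputs.sort_unstable_by_key(|(value, spk)| (*value, spk.clone()));
    assert_eq!(outputs[0].0, 800);               // smallest value first
    assert_eq!(outputs[1], (1000, vec![0xAA]));  // shorter script breaks the tie
    println!("ok");
}
```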
/// Transaction version
///
/// Has a default value of `1`
#[derive(Debug, Ord, PartialOrd, Eq, PartialEq, Hash, Clone, Copy)]
pub(crate) struct Version(pub(crate) i32);
impl Default for Version {
fn default() -> Self {
Version(1)
}
}
/// RBF nSequence value
///
/// Has a default value of `0xFFFFFFFD`
#[derive(Debug, Ord, PartialOrd, Eq, PartialEq, Hash, Clone, Copy)]
pub(crate) enum RbfValue {
Default,
Value(u32),
}
impl RbfValue {
pub(crate) fn get_value(&self) -> u32 {
match self {
RbfValue::Default => 0xFFFFFFFD,
RbfValue::Value(v) => *v,
}
}
}
/// Policy regarding the use of change outputs when creating a transaction
#[derive(Debug, Ord, PartialOrd, Eq, PartialEq, Hash, Clone, Copy)]
pub enum ChangeSpendPolicy {
/// Use both change and non-change outputs (default)
ChangeAllowed,
/// Only use change outputs (see [`TxBuilder::only_spend_change`])
OnlyChange,
/// Only use non-change outputs (see [`TxBuilder::do_not_spend_change`])
ChangeForbidden,
}
impl Default for ChangeSpendPolicy {
fn default() -> Self {
ChangeSpendPolicy::ChangeAllowed
}
}
impl ChangeSpendPolicy {
pub(crate) fn is_satisfied_by(&self, utxo: &LocalUtxo) -> bool {
match self {
ChangeSpendPolicy::ChangeAllowed => true,
ChangeSpendPolicy::OnlyChange => utxo.keychain == KeychainKind::Internal,
ChangeSpendPolicy::ChangeForbidden => utxo.keychain == KeychainKind::External,
}
}
}
#[cfg(test)]
mod test {
const ORDERING_TEST_TX: &str = "0200000003c26f3eb7932f7acddc5ddd26602b77e7516079b03090a16e2c2f54\
85d1fd600f0100000000ffffffffc26f3eb7932f7acddc5ddd26602b77e75160\
79b03090a16e2c2f5485d1fd600f0000000000ffffffff571fb3e02278217852\
dd5d299947e2b7354a639adc32ec1fa7b82cfb5dec530e0500000000ffffffff\
03e80300000000000002aaeee80300000000000001aa200300000000000001ff\
00000000";
macro_rules! ordering_test_tx {
() => {
deserialize::<bitcoin::Transaction>(&Vec::<u8>::from_hex(ORDERING_TEST_TX).unwrap())
.unwrap()
};
}
use bitcoin::consensus::deserialize;
use bitcoin::hashes::hex::FromHex;
use super::*;
#[test]
fn test_output_ordering_default_shuffle() {
assert_eq!(TxOrdering::default(), TxOrdering::Shuffle);
}
#[test]
fn test_output_ordering_untouched() {
let original_tx = ordering_test_tx!();
let mut tx = original_tx.clone();
TxOrdering::Untouched.sort_tx(&mut tx);
assert_eq!(original_tx, tx);
}
#[test]
fn test_output_ordering_shuffle() {
let original_tx = ordering_test_tx!();
let mut tx = original_tx.clone();
TxOrdering::Shuffle.sort_tx(&mut tx);
assert_eq!(original_tx.input, tx.input);
assert_ne!(original_tx.output, tx.output);
}
#[test]
fn test_output_ordering_bip69() {
use std::str::FromStr;
let original_tx = ordering_test_tx!();
let mut tx = original_tx;
TxOrdering::Bip69Lexicographic.sort_tx(&mut tx);
assert_eq!(
tx.input[0].previous_output,
bitcoin::OutPoint::from_str(
"0e53ec5dfb2cb8a71fec32dc9a634a35b7e24799295ddd5278217822e0b31f57:5"
)
.unwrap()
);
assert_eq!(
tx.input[1].previous_output,
bitcoin::OutPoint::from_str(
"0f60fdd185542f2c6ea19030b0796051e7772b6026dd5ddccd7a2f93b73e6fc2:0"
)
.unwrap()
);
assert_eq!(
tx.input[2].previous_output,
bitcoin::OutPoint::from_str(
"0f60fdd185542f2c6ea19030b0796051e7772b6026dd5ddccd7a2f93b73e6fc2:1"
)
.unwrap()
);
assert_eq!(tx.output[0].value, 800);
assert_eq!(tx.output[1].script_pubkey, From::from(vec![0xAA]));
assert_eq!(tx.output[2].script_pubkey, From::from(vec![0xAA, 0xEE]));
}
fn get_test_utxos() -> Vec<LocalUtxo> {
vec![
LocalUtxo {
outpoint: OutPoint {
txid: Default::default(),
vout: 0,
},
txout: Default::default(),
keychain: KeychainKind::External,
},
LocalUtxo {
outpoint: OutPoint {
txid: Default::default(),
vout: 1,
},
txout: Default::default(),
keychain: KeychainKind::Internal,
},
]
}
#[test]
fn test_change_spend_policy_default() {
let change_spend_policy = ChangeSpendPolicy::default();
let filtered = get_test_utxos()
.into_iter()
.filter(|u| change_spend_policy.is_satisfied_by(u))
.count();
assert_eq!(filtered, 2);
}
#[test]
fn test_change_spend_policy_no_internal() {
let change_spend_policy = ChangeSpendPolicy::ChangeForbidden;
let filtered = get_test_utxos()
.into_iter()
.filter(|u| change_spend_policy.is_satisfied_by(u))
.collect::<Vec<_>>();
assert_eq!(filtered.len(), 1);
assert_eq!(filtered[0].keychain, KeychainKind::External);
}
#[test]
fn test_change_spend_policy_only_internal() {
let change_spend_policy = ChangeSpendPolicy::OnlyChange;
let filtered = get_test_utxos()
.into_iter()
.filter(|u| change_spend_policy.is_satisfied_by(u))
.collect::<Vec<_>>();
assert_eq!(filtered.len(), 1);
assert_eq!(filtered[0].keychain, KeychainKind::Internal);
}
#[test]
fn test_default_tx_version_1() {
let version = Version::default();
assert_eq!(version.0, 1);
}
}

// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
use bitcoin::secp256k1::{All, Secp256k1};
use miniscript::{MiniscriptKey, Satisfier, ToPublicKey};
// De-facto standard "dust limit" (even though it should change based on the output type)
pub const DUST_LIMIT_SATOSHI: u64 = 546;
// MSB of the nSequence. If set there's no consensus-constraint, so it must be disabled when
// spending using CSV in order to enforce CSV rules
pub(crate) const SEQUENCE_LOCKTIME_DISABLE_FLAG: u32 = 1 << 31;
// When nSequence is lower than this flag the timelock is interpreted as block-height-based,
// otherwise it's time-based
pub(crate) const SEQUENCE_LOCKTIME_TYPE_FLAG: u32 = 1 << 22;
// Mask for the bits used to express the timelock
pub(crate) const SEQUENCE_LOCKTIME_MASK: u32 = 0x0000FFFF;
// Threshold for nLockTime to be considered a block-height-based timelock rather than time-based
pub(crate) const BLOCKS_TIMELOCK_THRESHOLD: u32 = 500000000;
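The flags above follow BIP68's nSequence encoding. A standalone sketch of decoding a relative timelock (block-based vs 512-second-unit time-based); `decode_relative_locktime` is an illustrative helper, not part of BDK:

```rust
const SEQUENCE_LOCKTIME_TYPE_FLAG: u32 = 1 << 22;
const SEQUENCE_LOCKTIME_MASK: u32 = 0x0000FFFF;

/// Returns (is_time_based, locktime value) for a BIP68 relative timelock.
fn decode_relative_locktime(nsequence: u32) -> (bool, u32) {
    let time_based = nsequence & SEQUENCE_LOCKTIME_TYPE_FLAG != 0;
    (time_based, nsequence & SEQUENCE_LOCKTIME_MASK)
}

fn main() {
    // 144 blocks (roughly one day)
    assert_eq!(decode_relative_locktime(144), (false, 144));
    // 7 units of 512 seconds (roughly one hour)
    assert_eq!(decode_relative_locktime(SEQUENCE_LOCKTIME_TYPE_FLAG | 7), (true, 7));
    println!("ok");
}
```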
/// Trait to check if a value is below the dust limit
// we implement this trait to make sure we don't mess up the comparison with off-by-one like a <
// instead of a <= etc. The constant value for the dust limit is not public on purpose, to
// encourage the usage of this trait.
pub trait IsDust {
/// Check whether or not a value is below dust limit
fn is_dust(&self) -> bool;
}
impl IsDust for u64 {
    fn is_dust(&self) -> bool {
        *self <= DUST_LIMIT_SATOSHI
    }
}
pub struct After {
    pub current_height: Option<u32>,
    pub assume_height_reached: bool,
}

impl After {
    pub(crate) fn new(current_height: Option<u32>, assume_height_reached: bool) -> After {
        After {
            current_height,
            assume_height_reached,
        }
    }
}

pub(crate) fn check_nsequence_rbf(rbf: u32, csv: u32) -> bool {
    // This flag cannot be set in the nSequence when spending using OP_CSV
    if rbf & SEQUENCE_LOCKTIME_DISABLE_FLAG != 0 {
        return false;
    }

    let mask = SEQUENCE_LOCKTIME_TYPE_FLAG | SEQUENCE_LOCKTIME_MASK;
    let rbf = rbf & mask;
    let csv = csv & mask;

    // Both values should be represented in the same unit (either time-based or
    // block-height based)
    if (rbf < SEQUENCE_LOCKTIME_TYPE_FLAG) != (csv < SEQUENCE_LOCKTIME_TYPE_FLAG) {
        return false;
    }

    // The value should be at least `csv`
    if rbf < csv {
        return false;
    }

    true
}

pub(crate) fn check_nlocktime(nlocktime: u32, required: u32) -> bool {
    // Both values should be expressed in the same unit
    if (nlocktime < BLOCKS_TIMELOCK_THRESHOLD) != (required < BLOCKS_TIMELOCK_THRESHOLD) {
        return false;
    }

    // The value should be at least `required`
    if nlocktime < required {
        return false;
    }

    true
}

impl<Pk: MiniscriptKey + ToPublicKey> Satisfier<Pk> for After {
    fn check_after(&self, n: u32) -> bool {
        if let Some(current_height) = self.current_height {
            current_height >= n
        } else {
            self.assume_height_reached
        }
    }
}

pub struct Older {
    pub current_height: Option<u32>,
    pub create_height: Option<u32>,
    pub assume_height_reached: bool,
}

impl Older {
    pub(crate) fn new(
        current_height: Option<u32>,
        create_height: Option<u32>,
        assume_height_reached: bool,
    ) -> Older {
        Older {
            current_height,
            create_height,
            assume_height_reached,
        }
    }
}

impl<Pk: MiniscriptKey + ToPublicKey> Satisfier<Pk> for Older {
    fn check_older(&self, n: u32) -> bool {
        if let Some(current_height) = self.current_height {
            // TODO: test >= / >
            current_height as u64 >= self.create_height.unwrap_or(0) as u64 + n as u64
        } else {
            self.assume_height_reached
        }
    }
}
pub(crate) type SecpCtx = Secp256k1<All>;
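The height arithmetic in `Older::check_older` can be exercised standalone; `older_satisfied` below is an illustrative mirror of that logic, not BDK code:

```rust
fn older_satisfied(
    current_height: Option<u32>,
    create_height: Option<u32>,
    n: u32,
    assume_height_reached: bool,
) -> bool {
    match current_height {
        // The u64 cast avoids overflow when create_height + n exceeds u32::MAX.
        Some(h) => h as u64 >= create_height.unwrap_or(0) as u64 + n as u64,
        None => assume_height_reached,
    }
}

fn main() {
    assert!(older_satisfied(Some(150), Some(100), 50, false));  // exactly reached
    assert!(!older_satisfied(Some(149), Some(100), 50, false)); // one block early
    assert!(older_satisfied(None, None, 50, true));             // unknown height, assumed ok
    println!("ok");
}
```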
#[cfg(test)]
mod test {
use super::{
check_nlocktime, check_nsequence_rbf, BLOCKS_TIMELOCK_THRESHOLD,
SEQUENCE_LOCKTIME_TYPE_FLAG,
};
use crate::types::FeeRate;
#[test]
fn test_fee_from_btc_per_kb() {
let fee = FeeRate::from_btc_per_kvb(1e-5);
assert!((fee.as_sat_vb() - 1.0).abs() < 0.0001);
}
#[test]
fn test_fee_from_sats_vbyte() {
let fee = FeeRate::from_sat_per_vb(1.0);
assert!((fee.as_sat_vb() - 1.0).abs() < 0.0001);
}
#[test]
fn test_fee_default_min_relay_fee() {
let fee = FeeRate::default_min_relay_fee();
assert!((fee.as_sat_vb() - 1.0).abs() < 0.0001);
}
#[test]
fn test_check_nsequence_rbf_msb_set() {
let result = check_nsequence_rbf(0x80000000, 5000);
assert!(!result);
}
#[test]
fn test_check_nsequence_rbf_lt_csv() {
let result = check_nsequence_rbf(4000, 5000);
assert!(!result);
}
#[test]
fn test_check_nsequence_rbf_different_unit() {
let result = check_nsequence_rbf(SEQUENCE_LOCKTIME_TYPE_FLAG + 5000, 5000);
assert!(!result);
}
#[test]
fn test_check_nsequence_rbf_mask() {
let result = check_nsequence_rbf(0x3f + 10_000, 5000);
assert!(result);
}
#[test]
fn test_check_nsequence_rbf_same_unit_blocks() {
let result = check_nsequence_rbf(10_000, 5000);
assert!(result);
}
#[test]
fn test_check_nsequence_rbf_same_unit_time() {
let result = check_nsequence_rbf(
SEQUENCE_LOCKTIME_TYPE_FLAG + 10_000,
SEQUENCE_LOCKTIME_TYPE_FLAG + 5000,
);
assert!(result);
}
#[test]
fn test_check_nlocktime_lt_cltv() {
let result = check_nlocktime(4000, 5000);
assert!(!result);
}
#[test]
fn test_check_nlocktime_different_unit() {
let result = check_nlocktime(BLOCKS_TIMELOCK_THRESHOLD + 5000, 5000);
assert!(!result);
}
#[test]
fn test_check_nlocktime_same_unit_blocks() {
let result = check_nlocktime(10_000, 5000);
assert!(result);
}
#[test]
fn test_check_nlocktime_same_unit_time() {
let result = check_nlocktime(
BLOCKS_TIMELOCK_THRESHOLD + 10_000,
BLOCKS_TIMELOCK_THRESHOLD + 5000,
);
assert!(result);
}
}

src/wallet/verify.rs (new file)
// Bitcoin Dev Kit
// Written in 2021 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.
//! Verify transactions against the consensus rules
use std::collections::HashMap;
use std::fmt;
use bitcoin::consensus::serialize;
use bitcoin::{OutPoint, Transaction, Txid};
use crate::blockchain::Blockchain;
use crate::database::Database;
use crate::error::Error;
/// Verify a transaction against the consensus rules
///
/// This function uses [`bitcoinconsensus`] to verify transactions by fetching the required data
/// either from the [`Database`] or using the [`Blockchain`].
///
/// Depending on the [capabilities](crate::blockchain::Blockchain::get_capabilities) of the
/// [`Blockchain`] backend, the method could fail when called with old "historical" transactions or
/// with unconfirmed transactions that have been evicted from the backend's memory.
pub fn verify_tx<D: Database, B: Blockchain>(
tx: &Transaction,
database: &D,
blockchain: &B,
) -> Result<(), VerifyError> {
log::debug!("Verifying {}", tx.txid());
let serialized_tx = serialize(tx);
let mut tx_cache = HashMap::<_, Transaction>::new();
for (index, input) in tx.input.iter().enumerate() {
let prev_tx = if let Some(prev_tx) = tx_cache.get(&input.previous_output.txid) {
prev_tx.clone()
} else if let Some(prev_tx) = database.get_raw_tx(&input.previous_output.txid)? {
prev_tx
} else if let Some(prev_tx) = blockchain.get_tx(&input.previous_output.txid)? {
prev_tx
} else {
return Err(VerifyError::MissingInputTx(input.previous_output.txid));
};
let spent_output = prev_tx
.output
.get(input.previous_output.vout as usize)
.ok_or(VerifyError::InvalidInput(input.previous_output))?;
bitcoinconsensus::verify(
&spent_output.script_pubkey.to_bytes(),
spent_output.value,
&serialized_tx,
index,
)?;
// Since we have a local cache we might as well cache stuff from the db, as it will very
// likely decrease latency compared to reading from disk or performing an SQL query.
tx_cache.insert(prev_tx.txid(), prev_tx);
}
Ok(())
}
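The prev-tx lookup order in `verify_tx` (in-memory cache, then database, then blockchain) can be sketched with `HashMap`s standing in for the three sources; all names and the `u32`/`String` stand-ins are illustrative:

```rust
use std::collections::HashMap;

fn fetch_prev_tx(
    txid: u32,
    cache: &mut HashMap<u32, String>,
    db: &HashMap<u32, String>,
    chain: &HashMap<u32, String>,
) -> Option<String> {
    if let Some(tx) = cache.get(&txid) {
        return Some(tx.clone());
    }
    let tx = db.get(&txid).or_else(|| chain.get(&txid))?.clone();
    // Cache the hit so later inputs spending the same tx skip the db/network.
    cache.insert(txid, tx.clone());
    Some(tx)
}

fn main() {
    let mut cache = HashMap::new();
    let db = HashMap::from([(1u32, "tx-from-db".to_string())]);
    let chain = HashMap::from([(2u32, "tx-from-chain".to_string())]);

    assert_eq!(fetch_prev_tx(1, &mut cache, &db, &chain).as_deref(), Some("tx-from-db"));
    assert_eq!(fetch_prev_tx(2, &mut cache, &db, &chain).as_deref(), Some("tx-from-chain"));
    assert!(fetch_prev_tx(3, &mut cache, &db, &chain).is_none()); // MissingInputTx analogue
    assert!(cache.contains_key(&1) && cache.contains_key(&2));
    println!("ok");
}
```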
/// Error during validation of a tx against the consensus rules
#[derive(Debug)]
pub enum VerifyError {
/// The transaction being spent is not available in the database or the blockchain client
MissingInputTx(Txid),
/// The transaction being spent doesn't have the requested output
InvalidInput(OutPoint),
/// Consensus error
Consensus(bitcoinconsensus::Error),
/// Generic error
///
/// It has to be wrapped in a `Box` since `Error` has a variant that contains this enum
Global(Box<Error>),
}
impl fmt::Display for VerifyError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
impl std::error::Error for VerifyError {}
impl From<Error> for VerifyError {
fn from(other: Error) -> Self {
VerifyError::Global(Box::new(other))
}
}
impl_error!(bitcoinconsensus::Error, Consensus, VerifyError);
#[cfg(test)]
mod test {
use std::collections::HashSet;
use bitcoin::consensus::encode::deserialize;
use bitcoin::hashes::hex::FromHex;
use bitcoin::{Transaction, Txid};
use crate::blockchain::{Blockchain, Capability, Progress};
use crate::database::{BatchDatabase, BatchOperations, MemoryDatabase};
use crate::FeeRate;
use super::*;
struct DummyBlockchain;
impl Blockchain for DummyBlockchain {
fn get_capabilities(&self) -> HashSet<Capability> {
Default::default()
}
fn setup<D: BatchDatabase, P: 'static + Progress>(
&self,
_database: &mut D,
_progress_update: P,
) -> Result<(), Error> {
Ok(())
}
fn get_tx(&self, _txid: &Txid) -> Result<Option<Transaction>, Error> {
Ok(None)
}
fn broadcast(&self, _tx: &Transaction) -> Result<(), Error> {
Ok(())
}
fn get_height(&self) -> Result<u32, Error> {
Ok(42)
}
fn estimate_fee(&self, _target: usize) -> Result<FeeRate, Error> {
Ok(FeeRate::default_min_relay_fee())
}
}
#[test]
fn test_verify_fail_unsigned_tx() {
// https://blockstream.info/tx/95da344585fcf2e5f7d6cbf2c3df2dcce84f9196f7a7bb901a43275cd6eb7c3f
let prev_tx: Transaction = deserialize(&Vec::<u8>::from_hex("020000000101192dea5e66d444380e106f8e53acb171703f00d43fb6b3ae88ca5644bdb7e1000000006b48304502210098328d026ce138411f957966c1cf7f7597ccbb170f5d5655ee3e9f47b18f6999022017c3526fc9147830e1340e04934476a3d1521af5b4de4e98baf49ec4c072079e01210276f847f77ec8dd66d78affd3c318a0ed26d89dab33fa143333c207402fcec352feffffff023d0ac203000000001976a9144bfbaf6afb76cc5771bc6404810d1cc041a6933988aca4b956050000000017a91494d5543c74a3ee98e0cf8e8caef5dc813a0f34b48768cb0700").unwrap()).unwrap();
// https://blockstream.info/tx/aca326a724eda9a461c10a876534ecd5ae7b27f10f26c3862fb996f80ea2d45d
let signed_tx: Transaction = deserialize(&Vec::<u8>::from_hex("02000000013f7cebd65c27431a90bba7f796914fe8cc2ddfc3f2cbd6f7e5f2fc854534da95000000006b483045022100de1ac3bcdfb0332207c4a91f3832bd2c2915840165f876ab47c5f8996b971c3602201c6c053d750fadde599e6f5c4e1963df0f01fc0d97815e8157e3d59fe09ca30d012103699b464d1d8bc9e47d4fb1cdaa89a1c5783d68363c4dbc4b524ed3d857148617feffffff02836d3c01000000001976a914fc25d6d5c94003bf5b0c7b640a248e2c637fcfb088ac7ada8202000000001976a914fbed3d9b11183209a57999d54d59f67c019e756c88ac6acb0700").unwrap()).unwrap();
let mut database = MemoryDatabase::new();
let blockchain = DummyBlockchain;
let mut unsigned_tx = signed_tx.clone();
for input in &mut unsigned_tx.input {
input.script_sig = Default::default();
input.witness = Default::default();
}
let result = verify_tx(&signed_tx, &database, &blockchain);
assert!(result.is_err(), "Should fail with missing input tx");
assert!(
matches!(result, Err(VerifyError::MissingInputTx(txid)) if txid == prev_tx.txid()),
"Error should be a `MissingInputTx` error"
);
// insert the prev_tx
database.set_raw_tx(&prev_tx).unwrap();
let result = verify_tx(&unsigned_tx, &database, &blockchain);
assert!(result.is_err(), "Should fail since the TX is unsigned");
assert!(
matches!(result, Err(VerifyError::Consensus(_))),
"Error should be a `Consensus` error"
);
let result = verify_tx(&signed_tx, &database, &blockchain);
assert!(
result.is_ok(),
"Should work since the TX is correctly signed"
);
}
}

static/bdk.svg (new image file, 13 KiB; diff suppressed)