- Automate User Tokens
- Write a Python Automation
- Write a JVM Automation
- Managing Ledger Access on Daml Hub
Automate User Tokens
With the full support of the administrative User Management and Party Management services on Hub, you are now able to manage your parties and users either via the Hub Console, gRPC API or HTTP JSON API.
With the new User Management features, you have a user corresponding to each Daml Hub account on your ledger. By default each user is only given rights to its associated party, but you can assign additional party readAs and actAs rights to that user and your application then runs with those rights.
Note: The processes in this guide were tested with GNU coreutils base64; macOS's default base64 implementation is known to produce different results in some circumstances.
Authentication in Daml Hub: Types of tokens
Daml Hub uses three kinds of tokens. This guide focuses on the user token, which is the preferred type of token to use going forward.
- Account Token - preferred mechanism for managing PACs and multiple ledgers
- User Token - preferred mechanism for automation when interacting with the Application API
- Ledger Party Token - less-preferred (but still supported) mechanism for automation when interacting with the Application API
User tokens
A Personal Access Credential, created via Account Settings (UI | API Spec), should be used to manage and mint user tokens, as described below.
It is advised that a user token be used in place of a multi-party ledger token when a user needs to readAs and/or actAs multiple parties.
The preferred token to use for all automation for applications which interact with the Application API is a user token. A user token corresponds to a specific user on the ledger, which is granted specific actAs and readAs rights. Note that the 'user' in user token refers only to the users created on the ledger, and is distinct from the Daml Hub end user or the end user of any app you may create.
Automate the creation of an account token in Daml Hub
Account tokens in Daml Hub are the preferred way to authenticate when interacting with your ledger. You can capture the token in the UI, then automate the creation of personal access credentials (PACs) that can be used to mint short-lived account tokens for that ledger.
Capture your account token from the ledger UI
To manage all your ledgers on Hub, you should create a PAC which can mint your account token.
For the initial setup of this PAC, use Copy Account JWT from the Account Settings - Profile page, then set this token to the variable ACCOUNT_TOKEN:
export ACCOUNT_TOKEN="<token copied from Copy Account JWT>"
Create a PAC
Create a Personal Access Credential via cURL
Create a new credential with scope site to create a PAC which can mint account tokens.
curl 'https://hub.daml.com/api/v1/personal-access-credentials' \
-H 'authorization: Bearer '"$ACCOUNT_TOKEN"'' \
-H 'content-type: application/json' \
--data-raw '{"name":"AccountToken","scope":"site","secret_expires_at":1707699015,"token_expires_in":86400}'
Response
Capture the returned pac_secret and store it securely. It should be treated as a private key, as it is the secret which can subsequently be used to mint account tokens for your account.
{
"id_issued_at": 1699923029,
"name": "AccountToken",
"pac_id": "<pac_id>",
"pac_secret": "<pac_secret>",
"scope": "site",
"secret_expires_at": 1707699015,
"token_expires_in": 86400
}
Mint an account token from your PAC
Use the PAC to generate an account token
curl -XPOST https://hub.daml.com/api/v1/personal-access-credentials/token \
-H 'Authorization: Basic <base64_encode("${pac_id}:${pac_secret}")>'
Response
Capture the returned access_token. This short-lived account token can now be used as a Bearer token when calling the Hub APIs.
{
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImRhYmwtY...",
"expires_in": 86400,
"token_type": "Bearer"
}
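The Basic Authorization header in the minting request is just the base64 encoding of pac_id:pac_secret. A minimal Python sketch of the exchange, using only the standard library (the endpoint is the one shown above; the function name is illustrative):

```python
import base64
import json
import urllib.request

def mint_account_token(pac_id: str, pac_secret: str) -> dict:
    """Exchange a PAC for a short-lived account token."""
    credentials = base64.b64encode(f"{pac_id}:{pac_secret}".encode()).decode()
    request = urllib.request.Request(
        "https://hub.daml.com/api/v1/personal-access-credentials/token",
        method="POST",
        headers={"Authorization": f"Basic {credentials}"},
    )
    with urllib.request.urlopen(request) as response:
        # The response contains access_token, expires_in and token_type
        return json.loads(response.read())
```

For example, a pac_id of abc with pac_secret s3cret produces the header value Basic YWJjOnMzY3JldA==.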
Automate the creation and use of user tokens in Daml Hub
To create and use user tokens on a given ledger, you must first identify the ledger and ledger owner user, then create a PAC for that ledger owner. You then use the PAC to mint a user token. You can also use cURL to list all existing PACs, or to revoke an existing PAC.
Identify the ledger and ledger owner user
- Create a new ledger on Hub and identify the <ledgerId> once created.
- Go to the Identities tab for that ledger and find the user ID of the ledger owner (typically begins with auth0, google-oauth2, or google-apps). Alternatively, you can curl the account endpoint: curl -H "authorization: Bearer $ACCOUNT_TOKEN" https://hub.daml.com/api/v1/account/user
Create a PAC to mint a user token
Now that you have identified the ledger and user ID of the ledger owner, you are ready to create a PAC so you can mint tokens for this user as required for your automations.
Personal Access Credentials can either be created through the Hub UI or via cURL commands as detailed below.
Create a Personal Access Credential via cURL
curl -XPOST 'https://hub.daml.com/api/v1/personal-access-credentials' \
-H 'authorization: Bearer '"$ACCOUNT_TOKEN"'' \
--data-raw '
{
"ledger": {
"ledgerId": "xyzabcdefghi",
"user": "auth0|123232323232323abcefgd15"
},
"name": "User PAC for user auth0|123232323232323abcefgd15 on ledger xyzabcdefghi",
"scope": "ledger:data",
"secret_expires_at": 1,
"token_expires_in": 3600
}'
Response
Capture the returned pac_secret and store this securely. This should be treated as a private key, as this is the secret which can subsequently be used to mint user tokens for this user on this ledger.
{
"id_issued_at": 1699919904,
"name": "User PAC for user auth0|123232323232323abcefgd15 on ledger xyzabcdefghi",
"pac_id": "<pac_id>",
"pac_secret": "<pac_secret>",
"scope": "ledger",
"secret_expires_at": 1700524685,
"token_expires_in": 3600
}
Mint a user token for your ledger owner user
curl -XPOST https://hub.daml.com/api/v1/personal-access-credentials/token \
-H 'Authorization: Basic <base64_encode("${pac_id}:${pac_secret}")>'
Associate Daml users with users in your identity provider
This section describes how to manage access to the application from your existing identity and access management (IAM) product. You can:
- allocate parties
- create an end user with actAs rights
- mint user tokens for the newly-created user and use them to proxy requests for that user
Party allocation
You can allocate a party using the gRPC API or the JSON API.
Allocate a party using the gRPC API (grpcurl)
Allocate Party gRPC Specification
grpcurl -H "Authorization: Bearer $LEDGER_OWNER_TOKEN" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.admin.PartyManagementService/AllocateParty <<EOM
{
"partyIdHint":"myexampleparty1",
"displayName":"My Example Party 1"
}
EOM
Allocate a party using the JSON API (curl)
curl --fail-with-body --silent --show-error -XPOST \
'https://'"$SUB_DOMAIN_HOST"'/v1/parties/allocate' \
-H 'accept: application/json' \
-H 'content-type: application/json' \
-H 'authorization: Bearer '"$LEDGER_OWNER_TOKEN"'' \
-d '{ "identifierHint" : "myexampleparty1", "displayName": "My Example Party 1" }'
To parse the unique identifier which is returned, pipe the response to jq:
jq -r '( first(.[]) | .identifier )'
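If you are scripting the allocation in Python instead of the shell, the same extraction can be done with the json module. The response body below is a hypothetical example whose shape matches what the jq expression above expects:

```python
import json

# Hypothetical /v1/parties/allocate response (shape assumed from the jq expression)
response_body = """
{
  "result": {
    "identifier": "myexampleparty1::1220deadbeef",
    "displayName": "My Example Party 1",
    "isLocal": true
  },
  "status": 200
}
"""

# first(.[]) | .identifier in jq terms: take the first top-level value's identifier
party_id = json.loads(response_body)["result"]["identifier"]
print(party_id)
```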
Create an end user with actAs rights to an example party
As with party allocation, this can be accomplished using the gRPC API or the JSON API.
Note: If you’re syncing with users in an external identity provider, the UserID created on Hub should ideally match the subject (primary identity) of the user in your identity provider to avoid additional mapping. In the example below we assume the external identity of the user is external-idp|MyUser1.
Let us assume the party created in the allocate step just before this is called myexampleparty1::12207b85d70d88941ab1af9608aaa1c8710ceaec7b3246a192ecbc38918f6d413b06
Create user using the gRPC API (grpcurl)
gRPC Specifications | JSON API
grpcurl -H "Authorization: Bearer $LEDGER_OWNER_TOKEN" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.admin.UserManagementService/CreateUser <<EOM
{
"user": {
"id" : "external-idp|MyUser1",
"primaryParty" : "myexampleparty1::12207b85d70d88941ab1af9608aaa1c8710ceaec7b3246a192ecbc38918f6d413b06"
},
"rights": [
{
"party": "myexampleparty1::12207b85d70d88941ab1af9608aaa1c8710ceaec7b3246a192ecbc38918f6d413b06",
"type": "CanActAs"
}
]
}
EOM
Create user using the JSON API (curl)
curl --fail-with-body --silent --show-error -XPOST \
'https://'"$SUB_DOMAIN_HOST"'/v1/user/create' \
-H 'accept: application/json' \
-H 'content-type: application/json' \
-H 'authorization: Bearer '"$LEDGER_OWNER_TOKEN"'' \
-d '{
"userId": "external-idp|MyUser1",
"primaryParty": "myexampleparty1::12207b85d70d88941ab1af9608aaa1c8710ceaec7b3246a192ecbc38918f6d413b06",
"rights": [
{
"party": "myexampleparty1::12207b85d70d88941ab1af9608aaa1c8710ceaec7b3246a192ecbc38918f6d413b06",
"type": "CanActAs"
}
]
}'
Mint a user token on behalf of an end user
Create a PAC for the user you have just created for ledger xyzabcdefghi:
curl -XPOST 'https://hub.daml.com/api/v1/personal-access-credentials' \
-H 'authorization: Bearer '"$ACCOUNT_TOKEN"'' \
--data-raw '
{
"ledger": {
"ledgerId": "xyzabcdefghi",
"user": "external-idp|MyUser1"
},
"name": "User PAC for user external-idp|MyUser1 on ledger xyzabcdefghi",
"scope": "ledger:data",
"secret_expires_at": 1,
"token_expires_in": 3600
}'
Response
Capture the returned pac_secret and pac_id and store them securely in a place where they can be associated with only this user and retrieved for subsequent token refresh requests. The pac_secret should be treated as a private key, as it is the secret which can subsequently be used to mint user tokens for this user on this ledger.
{
"id_issued_at": 1699919904,
"name": "User PAC for user external-idp|MyUser1 on ledger xyzabcdefghi",
"pac_id": "<pac_id>",
"pac_secret": "<pac_secret>",
"scope": "ledger",
"secret_expires_at": 1700524685,
"token_expires_in": 3600
}
Use a token to proxy requests for the end user
Obtaining a token for an end user allows you to proxy requests on their behalf. These are the steps to follow behind the scenes when a user logs in to the end application:
- Determine the identity of the end user you want to proxy requests as (either from an external identity provider or from a system lookup)
- Create a User PAC for that specific user as above, and store it as a sensitive credential associated with this end user, in accordance with your firm’s security policy
- Using the specific User PAC for the end user, mint a token for that user and return it
curl -XPOST https://hub.daml.com/api/v1/personal-access-credentials/token \
-H 'Authorization: Basic <base64_encode("${pac_id}:${pac_secret}")>'
- Issue streaming or create contract requests to the ledger as the end user, passing the minted token
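The steps above can be sketched as a small helper. The pac_store mapping and the mint callable are placeholders for your credential store and for the token-minting request shown above:

```python
def token_for_end_user(user_id: str, pac_store: dict, mint) -> str:
    """Identify the end user, look up their stored PAC, and mint a fresh user token.

    pac_store maps a user id to its (pac_id, pac_secret) pair and stands in for
    whatever secure credential store your firm's security policy prescribes.
    mint is a callable performing the POST to the personal-access-credentials
    token endpoint with Basic auth, returning the access_token.
    """
    pac_id, pac_secret = pac_store[user_id]
    return mint(pac_id, pac_secret)

# Illustrative wiring with a stand-in minting function:
token = token_for_end_user(
    "external-idp|MyUser1",
    {"external-idp|MyUser1": ("<pac_id>", "<pac_secret>")},
    mint=lambda pac_id, pac_secret: "<minted user token>",
)
```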
Move from multi-party token to user token for multi-party workflow
This example details the creation and use of a user with actAs rights to multiple parties.
- Create a party for the primary account holder (PrimaryParty), and capture the unique ID in the variable party1
grpcurl -H "Authorization: Bearer $LEDGER_OWNER_TOKEN" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.admin.PartyManagementService/AllocateParty <<EOM
{
"partyIdHint":"primaryparty",
"displayName":"Primary Party"
}
EOM
- Create a party for the secondary account holder (secondaryParty), and capture the ID in the variable party2
grpcurl -H "Authorization: Bearer $LEDGER_OWNER_TOKEN" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.admin.PartyManagementService/AllocateParty <<EOM
{
"partyIdHint":"secondaryParty",
"displayName":"Secondary Party"
}
EOM
- Create an AccountHolder user through the admin API with actAs rights to both parties
grpcurl -H "Authorization: Bearer $LEDGER_OWNER_TOKEN" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.admin.UserManagementService/CreateUser <<EOM
{
"user": {
"id" : "AccountHolder"
},
"rights": [
{
"party": "$party1",
"type": "CanActAs"
},
{
"party": "$party2",
"type": "CanActAs"
}
]
}
EOM
- Create a PAC for the AccountHolder user on ledger xyzabcdefghi, and store the returned pac_secret and pac_id securely.
curl -XPOST 'https://hub.daml.com/api/v1/personal-access-credentials' \
-H 'authorization: Bearer '"$ACCOUNT_TOKEN"'' \
--data-raw '
{
"ledger": {
"ledgerId": "xyzabcdefghi",
"user": "AccountHolder"
},
"name": "User PAC for user AccountHolder on ledger xyzabcdefghi",
"scope": "ledger:data",
"secret_expires_at": 1,
"token_expires_in": 3600
}'
Response
Capture the returned pac_secret and pac_id and store them securely in a place where they can be associated with only this user and retrieved for subsequent token refresh requests. The pac_secret should be treated as a private key, as it is the secret which can subsequently be used to mint user tokens for this user on this ledger.
{
"id_issued_at": 1699919904,
"name": "User PAC for user AccountHolder on ledger xyzabcdefghi",
"pac_id": "<pac_id>",
"pac_secret": "<pac_secret>",
"scope": "ledger",
"secret_expires_at": 1700524685,
"token_expires_in": 3600
}
- Use the PAC to create a token for the AccountHolder user, and capture it in the variable user_token
curl -XPOST https://hub.daml.com/api/v1/personal-access-credentials/token \
-H 'Authorization: Basic <base64_encode("${pac_id}:${pac_secret}")>'
- Issue a transaction subscription specifying the user token, filtering for both primaryParty and secondaryParty
grpcurl -H "Authorization: Bearer $user_token" -d @ "$SUB_DOMAIN_HOST:443" \
com.daml.ledger.api.v1.TransactionService/GetTransactions <<EOM
{
"filter": {
"filtersByParty": {
"$party1": {},
"$party2": {}
}
},
"ledgerId": "xyzabcdefghi"
}
EOM
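The filtersByParty body grows with each party the user can act as; if your automation builds it programmatically, a small helper keeps it consistent (a sketch; the party IDs below are placeholders):

```python
import json

def transaction_filter(ledger_id: str, parties: list) -> dict:
    """Build a GetTransactions request body covering every party the user can read as."""
    return {
        "filter": {"filtersByParty": {party: {} for party in parties}},
        "ledgerId": ledger_id,
    }

body = transaction_filter("xyzabcdefghi", ["primaryparty::1220aa", "secondaryParty::1220bb"])
print(json.dumps(body, indent=2))
```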
Write a Python Automation
This guide contains a simple example of a Python automation that can be run in Daml Hub. This example can be copied and then adjusted to your application.
The code for this example can be found in digital-asset/hub-automation-example/python. Note that the Daml model in example-model is for reference and does not need to be copied when using the code as a template.
Please bear in mind that this code is provided for illustrative purposes only, and as such may not be production quality and/or may not fit your use cases.
Build the automation
Run make all in the project's root directory. This command uses poetry to build and package a .tar.gz file with the automation. It then copies the file with the correct version and name (as set in the Makefile) into the root directory. The .tar.gz file should then be uploaded to Daml Hub to run the automation.
Run the automation locally
Run DAML_LEDGER_PARTY="party::1234" poetry run python3 -m bot
Since localhost:6865 is set as a default, you do not need to set the ledger URL. However, DAML_LEDGER_PARTY must be set to a party that is allocated on the Daml ledger you are testing with - which is always slightly different on Canton ledgers due to the participant ID. run_local.sh can be used to dynamically fetch the Alice party and start the automation with that party.
Structure the automation
A Hub Python automation should always be a module named bot, as it is run on Hub with python3 -m bot.
pyproject.toml
The configuration file for the project that the poetry tool uses to build the automation:
[tool.poetry]
name = "bot"
version = "0.1.0"
description = "Example of a Daml Hub Python Automation"
authors = ["Digital Asset"]
[tool.poetry.dependencies]
python = "^3.9"
dazl = "^7.3.1"
Directory structure
├── bot
│ ├── __init__.py
│ └── pythonbot_example.py
├── poetry.lock
├── pyproject.toml
__init__.py
The file with the main portion of the code (in pythonbot_example.py) can have any name, but must be imported in __init__.py, where the main function should be called. This file runs when the automation is initialized:
from .pythonbot_example import main
from asyncio import run
run(main())
Automation Code
Python automations running in Daml Hub generally use the Dazl library to react to incoming Daml contracts.
Package IDs
Dazl recognizes template names in the format package_id:ModuleName.NestedModule:TemplateName:
package_id = "d36d2d419030b7c335eeeb138fa43520a81a56326e4755083ba671c1a2063e76"

# Define the names of our templates for later reuse
class Templates:
    User = f"{package_id}:User:User"
    Alias = f"{package_id}:User:Alias"
    Notification = f"{package_id}:User:Notification"
The package ID is the unique identifier of the .dar of the contracts to follow. Including the package ID ensures that the automation only reacts to templates from the specified Daml model. If the package ID is not included, Dazl streams all templates that have the same name. This is particularly important when a new version of a Daml model is uploaded to the ledger, since the names of the templates may remain the same.
The package ID of a .dar can be found by running daml damlc -- inspect /path/to/dar | grep "package"
Environment variables
Dazl requires the URL of the Daml ledger to connect to as well as a party to act as. These are always set as environment variables in automations running in Daml Hub, but adding defaults can help with running locally.
# The URL path to the ledger you would like to connect to
url = os.getenv('DAML_LEDGER_URL') or "localhost:6865"
# The party that is running the automation.
party = os.getenv('DAML_LEDGER_PARTY') or "party"
DAML_LEDGER_PARTY is set to the party that you specified when deploying the automation. Note that this party can only see and operate on contracts that it has access to as a signatory or observer.
Stream
After defining the templates, the example bot in this repository sets up a stream that runs indefinitely. This stream sends a log message when a contract is created or deleted, or when the stream has reached the current state of the ledger. If a Notification contract is created, it automatically exercises the Acknowledge choice:
# Start up a dazl connection
async with connect(url=url, act_as=Party(party)) as conn:
    # Stream all of our templates forever
    async with conn.stream_many([Templates.User, Templates.Alias, Templates.Notification]) as stream:
        async for event in stream.items():
            if isinstance(event, CreateEvent):
                logging.info(f"Noticed a {event.contract_id.value_type} contract: {event.payload}")
                if str(event.contract_id.value_type) == Templates.Notification:
                    await conn.exercise(event.contract_id, "Acknowledge", {})
            elif isinstance(event, ArchiveEvent):
                logging.info(f"Noticed that a {event.contract_id.value_type} contract was deleted")
            elif isinstance(event, Boundary):
                logging.info(f"Up to date on the current state of the ledger at offset: {event.offset}")
stream.items() yields a CreateEvent when a contract is created, an ArchiveEvent when a contract is archived, and a Boundary event once the stream has caught up to the current end of the ledger.
The Boundary can be helpful when starting a stream on a ledger that already has data. The boundary event has an offset parameter that can be passed to conn.stream_many, after which the stream begins from the offset point.
conn.exercise is used in this example, but create_and_exercise, exercise_by_key, and create commands are also available.
Query
Dazl has another command, query/query_many, which continues the program once the query is finished instead of continuing to stream. Commands can also be defined and later submitted together with other commands as a single transaction. The following example queries for all current Notification templates, then submits all Acknowledge commands together:
# Query only Notification contracts and build a list of "Acknowledge" commands
commands = []
async with conn.query(Templates.Notification) as stream:
    async for event in stream.creates():
        commands.append(ExerciseCommand(event.contract_id, "Acknowledge", {}))
# Submit all commands together after the query completes
await conn.submit(commands)
External Connectivity
Enterprise users can connect Python automations on their ledgers to services running on the internet outside of Daml Hub. The outgoing IP address is dynamically set. For incoming connections, Daml Hub provides a webhook URL: http://{ledgerId}.daml.app/pythonbot/{instanceId}/. This link can be copied from the Status page for the running instance. To accept traffic to that endpoint, you can run a webserver (such as with aiohttp) on the default 0.0.0.0:8080. A request pointed directly to the webhook URL is routed to the root directory of your server.
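As a minimal sketch of such a webserver using only the standard library (the docs suggest aiohttp; any server listening on 0.0.0.0:8080 behaves the same way):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept webhook traffic; Hub routes requests for the webhook URL to "/"."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)  # the webhook request body
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # keep the example quiet; add logging as needed

def serve():
    # Listen on the default address Daml Hub expects for incoming connections
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```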
Write a JVM Automation
This guide outlines how to create a JVM automation that can be run in Daml Hub. JVM automations can be written in any language that can be run on the JVM (most typically, Java and Scala).
Note that JVM automations are only supported for the [Scratchpad service](https://hub.daml.com/docs/quickstart#scratchpad-service) and the [Participant service](https://hub.daml.com/docs/quickstart#participant-service).
The code for this example can be found in the open-source Github repo digital-asset/hub-automation-example/java. The example can be copied and adjusted to your application.
Note that the Daml model in example-model is for reference and does not need to be copied when using the code as a template. The Daml model in example-model has its templates generated into the source code via daml codegen - you can either build a model from this example or replace the codegen and model with your own.
For more examples of how to use the Daml Java Bindings, please refer to https://github.com/digital-asset/ex-java-bindings.
Please bear in mind that this code is provided for illustrative purposes only, and as such may not be production quality and/or may not fit your use cases.
Build the automation
The hub-automation-example repository has a simple skeleton example of a JVM automation that can be run in Daml Hub.
To build
Prerequisites:
- Maven
- JDK 11 or higher (JDK 17 preferred)
If you have checked out the full example, you can build everything with the following command:
make all
Running this command uses maven to build and package a 'fat' .jar file with the automation. It then copies the file with the correct version and name (as set in the Makefile) into the root directory. The .jar file is what will be uploaded to Daml Hub to run the automation.
To run locally
# start up a local sandbox
daml sandbox
# start up the jvm automation
./run_local.sh
Structure
pom.xml
Directory structure
├── Makefile
├── README.md
├── pom.xml
├── run_local.sh
└── src
└── main
├── java
│ └── examples
│ └── automation
│ ├── Main.java
│ ├── Processor.java
│ └── codegen
│ ├── da
│ │ ├── internal
│ │ │ └── template
│ │ │ └── Archive.java
│ │ └── types
│ │ └── Tuple2.java
│ └── user
│ ├── Acknowledge.java
│ ├── Alias.java
│ ├── Change.java
│ ├── Follow.java
│ ├── Notification.java
│ └── User.java
└── resources
└── logback.xml
Main.java
public class Main {
    public static void main(String[] args) {
        // Connection details are supplied by Daml Hub as environment variables
        String appId = System.getenv("DAML_APPLICATION_ID");
        String ledgerId = System.getenv("DAML_LEDGER_ID");
        String userId = System.getenv("DAML_USER_ID");
        // DAML_LEDGER_URL has the form host:port
        String[] url = System.getenv("DAML_LEDGER_URL").split(":");
        String host = url[0];
        int port = Integer.parseInt(url[1]);
    }
}
Configuration
You can configure an automation that may differ from deployment to deployment by uploading a configuration file when the automation is deployed.
This configuration can be of any type, for example .json, .toml, or .yaml.
To access this configuration file, set the environment variable CONFIG_FILE to point to a location on the volume where the configuration file is stored. This file can then be parsed by the automation.
For example if a JSON file was uploaded:
package examples.javabot;
import org.json.JSONException;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Optional;
public class ConfigurationParser {
    private static final Logger logger = LoggerFactory.getLogger(ConfigurationParser.class.getName());

    static void parseConfigFile() {
        try {
            Optional<String> configFilePath = Optional.ofNullable(System.getenv("CONFIG_FILE"));
            if (configFilePath.isPresent()) {
                String configContent = Files.readString(Paths.get(configFilePath.get()));
                if (!configContent.isBlank()) {
                    logger.info(configContent);
                    JSONObject config = new JSONObject(configContent);
                    logger.info("configFilePath: " + configFilePath);
                    logger.info("configFileContents: " + config.toString(4));
                } else {
                    throw new IOException("No config file content found");
                }
            } else {
                throw new IOException("No config file content found");
            }
        } catch (IOException | JSONException e) {
            // Catch any file read or JSON parsing errors in case the config JSON file wasn't uploaded.
            // Since this is just an example we don't need to worry about that currently.
            logger.warn(e.toString(), e);
        }
    }
}
To run on Daml Hub
Determine what user and party you want the automation to run as.
Create a new party from the Identities / Party tab and give it a meaningful display name and a hint of your choice. Create a new user from the Identities / User tab and populate the chosen party as the primaryParty for that user.
Once you have built the JAR file that contains all dependencies, start a new service on Daml Hub (a scratchpad or a participant connected to a synchronizer), go to the Deployments
tab, and upload the JAR.
Once the JAR is uploaded, you can start a new instance of the automation and run it as the newly created user.
Select the configuration desired for that deployment following the instructions in the previous section.
External Connectivity
Outbound Connectivity
JVM automations running on participants owned by Enterprise users can connect to services running on the internet outside of Daml Hub. The outgoing IP address is dynamically set.
Inbound Connectivity
For incoming connections, Daml Hub provides a webhook URL of http://{ledgerId}.daml.app/automation/{instanceId}/. This link can be copied from the Status page for the running instance.
If you would like to accept traffic to that endpoint, you can run a webserver on the default 0.0.0.0:8080. A request pointed directly to the webhook URL will be routed to the root / of your server.
Reading from the Ledger
JVM automations running on Daml Hub generally use the Daml Java Bindings to react to incoming Daml contracts.
Package IDs
The package ID is the unique identifier of the .dar of the contracts to follow. Including the package ID ensures that the automation only reacts to templates from the specified Daml model. If the package ID is not included, the gRPC API streams all templates that have the same name. This is particularly important when a new version of a Daml model is uploaded to the participant, since the names of the templates may remain the same.
The package ID of a .dar can be found by running daml damlc -- inspect /path/to/dar | grep "package"
Stream
After defining the templates, the example automation in this repository sets up a stream that runs indefinitely. This stream sends a log message when a contract is created or deleted, or when the stream has reached the current state of the participant. If a Notification contract is created, it automatically exercises the Acknowledge choice:
public void runIndefinitely() {
    final var inclusiveFilter = InclusiveFilter
            .ofTemplateIds(Set.of(requestIdentifier));
    // specify inclusive filter for the party attached to this processor
    final var getTransactionsRequest = getGetTransactionsRequest(inclusiveFilter);
    // this StreamObserver reacts to transactions and prints a message if an error occurs or the stream gets closed
    StreamObserver<GetTransactionsResponse> transactionObserver = new StreamObserver<>() {
        @Override
        public void onNext(GetTransactionsResponse value) {
            value.getTransactionsList().forEach(Processor.this::processTransaction);
        }

        @Override
        public void onError(Throwable t) {
            logger.error("{} encountered an error while processing transactions : {}", party, t.toString(), t);
            System.exit(1); // exit with a non-zero status so the failure is visible
        }

        @Override
        public void onCompleted() {
            logger.info("{} transactions stream completed", party);
        }
    };
    logger.info("{} starts reading transactions", party);
    transactionService.getTransactions(getTransactionsRequest.toProto(), transactionObserver);
}
Querying the Ledger
After defining the templates, the example automation can also query the participant for all current Notification templates and then submit all Acknowledge commands together; the full code for this can be found in the hub-automation-example repository.
Reading from the Participant Query Store (PQS)
A JVM automation can also query the PQS Postgres database if you want to take an action based on a particular state of the ledger.
PQS queries are also the recommended way of reading data if you need to query for archived historical information, rather than relying on the stream of gRPC transactions.
Connecting to Participant Query Store (PQS)
The full JDBC URL connection string for the Postgres database is made available to your running JVM automation as the environment variable PQS_JDBC_URL. You can read this environment variable and then create a Postgres database connection using your JDBC driver of choice. The example below is given for representational purposes:
import org.postgresql.ds.PGSimpleDataSource;

public class PqsJdbcConnection {
    private final PGSimpleDataSource dataSource;

    public PqsJdbcConnection(String jdbcUrl) throws ClassNotFoundException {
        Class.forName("org.postgresql.Driver");
        this.dataSource = new PGSimpleDataSource();
        dataSource.setUrl(jdbcUrl);
    }
}

// Read the JDBC URL supplied by Daml Hub and open a connection
String jdbcUrl = System.getenv("PQS_JDBC_URL");
PqsJdbcConnection pqsJdbcConnection = new PqsJdbcConnection(jdbcUrl);
Querying Participant Query Store (PQS)
Guidance on how to query the PQS is detailed in the Participant Query Store (PQS) documentation.
Example JVM Automation Code Querying PQS
Example JVM automation code, including querying the PQS, can be found in the java folder within hub-automation-example.
Managing Ledger Access on Daml Hub
This guide outlines how organizations can manage employees' access to ledgers on Daml Hub and explains how to control access to production ledgers.
Managing Ledger Access
Enterprise account holders in Daml Hub can add ledger collaborators. Ledger collaborators can develop on and operate a ledger from their own Daml Hub accounts. Ledger collaborators can do nearly everything a ledger owner can do, but they cannot delete the ledger, change ledger capacity, add/change subdomains, or add/remove other collaborators. Ledger collaborators must be enterprise users. By adding collaborators to ledgers, users can manage ledger access for employees inside their organization and third parties outside their organization. The documentation for the collaborators feature can be seen here.
Control Access to Production Ledgers
Each Daml Hub Enterprise account has one primary account holder. The enterprise primary account holder can create high capacity ledgers for the organization – it is strongly advised that customers go into production on high capacity ledgers. Each organization has a quota of 3 high capacity ledgers for production, staging, and QA or development. More information about ledger capacities and quotas can be found in the Daml Hub documentation here. To control access to high capacity and production ledgers we advise that the primary enterprise account holder is assigned to an email address owned or operated by a trusted party within the organization. The primary account holder can then control access to the production and high capacity ledgers by adding or removing ledger collaborators.