Deployment
Agents KIT: Operating federated queries over the whole data space.
- Our Adoption guideline
- The Architecture documentation
- The EDC Deployment description
- The (Data/Function) Provider Deployment description
- The AAS Bridge Deployment description
- The Conformity testbed
- A Data Sovereignty & Graph Policy discussion
Motivation & Deployment Roles
Knowledge Agents is a federated technology, so there is no central component to set up. Instead, the Semantic Dataspace is formed by the individual business partners extending/configuring their connectors and enabling their backend systems and/or data lakes. The deployment therefore depends on the role that the business partner assumes. The roles are described in more detail in our Adoption guideline.
Role: As A Consumer
As a consumer, you just need to:
- enable your dataspace connector to initiate/delegate the required Agent protocols (here: SPARQL-over-HTTP).
- (optionally) use a separate matchmaking agent to securely host your own business and meta data in the graph storage behind the connector layer
- (optionally) mount your matchmaking agent as a remote repository into your enterprise graph infrastructure.
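The last step above can be sketched as an RDF4J repository configuration that mounts the matchmaking agent as a remote HTTP repository. This is only a sketch, assuming an RDF4J-based enterprise graph infrastructure; the repository ID and the endpoint URL are placeholders:

```ttl
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep:  <http://www.openrdf.org/config/repository#> .
@prefix hr:   <http://www.openrdf.org/config/repository/http#> .

[] a rep:Repository ;
   rep:repositoryID "matchmaking" ;
   rdfs:label "Matchmaking agent mounted as remote repository" ;
   rep:repositoryImpl [
     rep:repositoryType "openrdf:HTTPRepository" ;
     # Placeholder: the internal endpoint of your matchmaking agent
     hr:repositoryURL <http://matchmaking-agent.internal:8080/rdf4j-server/repositories/default>
   ] .
```

Once registered, queries against the "matchmaking" repository are transparently answered by the matchmaking agent behind the connector layer.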
Role: As A Skill Provider
As a skill provider, you need to:
- enable your dataspace connector to transfer/delegate the required Agent protocols.
- (optionally) employ multiple data planes in case you want to expose hosted skills (skill assets that operate as stored procedures and which require computational resources at the provider side) instead of distributed skills (skill assets that are offered as query texts/files and which are executed at the consumer side).
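To make the distinction concrete: a distributed skill is nothing more than a SPARQL text registered as an asset and later executed at the consumer side. A minimal sketch follows; the `cx:` properties and the `@vin` parameter convention are illustrative, not normative, so consult the skill documentation for the exact parameter-binding mechanism:

```sparql
PREFIX cx:  <https://w3id.org/catenax/ontology#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# Illustrative skill: look up diagnostic trouble codes for a vehicle.
SELECT ?vehicle ?code WHERE {
  VALUES ?vin { "@vin"^^xsd:string }   # parameter bound at invocation time
  ?vehicle cx:vehicleIdentificationNumber ?vin ;
           cx:troubleCode ?code .
}
```

A hosted skill would wrap the same text behind a provider-side endpoint, which is why it requires an additional data plane with computational resources.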
Role: As A (Data/Function) Provider
As a provider, you need to:
- enable your dataspace connector to receive/internalize the required Agent protocols.
- (optionally) use a separate matchmaking agent to securely publish your own business and meta data from the graph storage behind the connector layer
Depending on the kind of provisioning, you will set up additional internal "agents" (endpoints).
Sub-Role: As A Data Provider
As a data provider, you want to
- bind your data sources to knowledge graphs following the Catena-X ontology. For that purpose, a provisioning agent should be set up on top of a data virtualization/database layer.
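Provisioning agents of this kind are typically backed by an OBDA engine (such as Ontop) that rewrites incoming SPARQL into SQL against the virtualization layer. A minimal mapping sketch, assuming a hypothetical `vehicles` table and illustrative ontology terms:

```
[PrefixDeclaration]
cx:   https://w3id.org/catenax/ontology#
xsd:  http://www.w3.org/2001/XMLSchema#

[MappingDeclaration] @collection [[
mappingId   vehicles-mapping
target      cx:vehicle/{vin} a cx:Vehicle ; cx:vehicleIdentificationNumber {vin}^^xsd:string .
source      SELECT vin FROM vehicles
]]
```

Each row of the SQL result instantiates the target triple template, so the relational data appears as a knowledge graph without being copied.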
Sub-Role: As A Function Provider
As a function provider, you want to
- bind your API to a special knowledge graph structure. For that purpose, a remoting agent should be set up.
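In other words, the remoting agent maps a dedicated graph pattern onto API calls: input properties become request parameters and output properties are filled from the response. A sketch of what an invocation could look like from the consumer side (the `prognosis:` vocabulary here is purely illustrative):

```sparql
PREFIX prognosis: <https://w3id.org/catenax/prognosis#>
PREFIX xsd:       <http://www.w3.org/2001/XMLSchema#>

SELECT ?remainingLife WHERE {
  ?invocation a prognosis:Invocation ;                    # triggers one API call
              prognosis:loadSpectrum "42"^^xsd:int ;      # mapped to a request parameter
              prognosis:remainingLife ?remainingLife .    # filled from the API response
}
```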
Sub-Role: As A Twin Provider
As a twin provider, you want to
- bridge between the Knowledge Agent and Asset Administration Shell APIs.
Runbook For Deploying and Smoke-Testing Knowledge Agents (Stable)
The Stable Environment is a minimal example environment exhibiting all roles and capabilities of the Tractus-X/Catena-X dataspace.
Knowledge Agents on Stable is deployed on the following two tenants:
- App Provider 1 (BPNL000000000001)
- Dataspace Connector (PostgreSQL, HashiCorp Vault) "provider-connector" see manifest
- Agent-Plane (PostgreSQL, HashiCorp Vault) "provider-agent-plane" see manifest
- Provisioning Agent incl. Local Database "sql-agent" see manifest
- Remoting Agent (against a Public WebService) "api-agent" see manifest
- AAS Bridge (against a prerecorded ) "aas-bridge" see manifest
- App Consumer 4 (BPNL0000000005VV)
1. Prepare the Two Tenants
As a first step, two technical users are installed for the dataspace connectors using the portal at https://portal.stable.demo.catena-x.net:
- App Provider 1: sa4
- App Consumer 4: sa5
The generated secrets should be installed under https://vault.demo.catena-x.net/ui/vault/secrets/knowledge
- stable-provider-dim
- stable-consumer-dim
Further secrets should be installed:
- oem-cert
- oem-key
- oem-symmetric-key
- consumer-cert
- consumer-key
- consumer-symmetric-key
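For reference, key material of this shape can be produced with OpenSSL. This is a sketch: the subject name is a placeholder, and the file names merely mirror the vault entries above:

```shell
# Self-signed signing certificate and private key for the transfer proxy token
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -subj "/CN=oem" -keyout oem-key.pem -out oem-cert.pem

# 256-bit AES key for token encryption, base64-encoded
openssl rand -base64 32 > oem-symmetric-key.txt
```

The PEM contents go into the `oem-cert`/`oem-key` entries and the base64 string into `oem-symmetric-key`; the consumer-side entries are produced the same way.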
Finally, an access token for the vault has been generated.
2. Deploy Agent-Enabled Connectors
Using https://argo.stable.demo.catena-x.net/settings/projects/project-knowledge, the following three applications have been installed.
We give the complete manifests but hide the secrets.
App Provider 1 Dataspace Connector Manifest
Deployed as "provider-connector"
```yaml
project: project-knowledge
source:
  repoURL: 'https://eclipse-tractusx.github.io/charts/dev'
  targetRevision: 0.7.3
  plugin:
    env:
      - name: HELM_VALUES
        value: |
          participant:
            id: BPNL000000000001
          nameOverride: agent-connector-provider
          fullnameOverride: agent-connector-provider
          vault:
            hashicorp:
              enabled: true
              url: https://vault.demo.catena-x.net
              token: ****
              healthCheck:
                enabled: false
                standbyOk: true
              paths:
                secret: /v1/knowledge
            secretNames:
              transferProxyTokenSignerPrivateKey: oem-key
              transferProxyTokenSignerPublicKey: oem-cert
              transferProxyTokenEncryptionAesKey: oem-symmetric-key
          iatp:
            id: did:web:portal-backend.stable.demo.catena-x.net:api:administration:staticdata:did:BPNL000000000001
            trustedIssuers:
              - did:web:dim-static-prod.dis-cloud-prod.cfapps.eu10-004.hana.ondemand.com:dim-hosted:2f45795c-d6cc-4038-96c9-63cedc0cd266:holder-iatp
            sts:
              dim:
                url: https://dis-integration-service-prod.eu10.dim.cloud.sap/api/v2.0.0/iatp/catena-x-portal
              oauth:
                token_url: https://bpnl000000000001-authentication.eu10.hana.ondemand.com/oauth/token
                client:
                  id: sa4
                  secret_alias: stable-provider-dim
          postgresql:
            name: agent-postgresql
            jdbcUrl: jdbc:postgresql://agent-postgresql:5432/provider
            auth:
              database: provider
              username: provider_user
              password: ****
          controlplane:
            securityContext:
              readOnlyRootFilesystem: false
            image:
              pullPolicy: Always
            endpoints:
              control:
                port: 8083
                path: "/control"
              protocol:
                port: 8084
                path: "/api/v1/dsp"
              management:
                port: 8081
                path: "/management"
                authKey: ***
            bdrs:
              server:
                url: https://bpn-did-resolution-service.int.demo.catena-x.net/api/directory
            ingresses:
              - enabled: true
                # -- The hostname to be used to precisely map incoming traffic onto the underlying network service
                hostname: "agent-provider-cp.stable.demo.catena-x.net"
                # -- EDC endpoints exposed by this ingress resource
                endpoints:
                  - protocol
                  - management
                  - api
                # -- Enables TLS on the ingress resource
                tls:
                  enabled: true
          dataplane:
            token:
              signer:
                privatekey_alias: consumer-key
              verifier:
                publickey_alias: consumer-cert
  chart: tractusx-connector
destination:
  server: 'https://kubernetes.default.svc'
  namespace: product-knowledge
```
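With the connector deployed, a first smoke test is to request its catalog through a consumer's management API. A sketch of the request body, assuming the EDC management API v3 shape; the counterparty address follows the ingress hostname and protocol path in the manifest above:

```json
{
  "@context": { "edc": "https://w3id.org/edc/v0.0.1/ns/" },
  "@type": "CatalogRequest",
  "counterPartyAddress": "https://agent-provider-cp.stable.demo.catena-x.net/api/v1/dsp",
  "counterPartyId": "BPNL000000000001",
  "protocol": "dataspace-protocol-http"
}
```

This body would be POSTed to the consumer control plane's `/management/v3/catalog/request` endpoint together with the configured `x-api-key` header; a response containing the offered assets indicates the two connectors can see each other.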