## Release notes: product changes
Introduce a more user-friendly and explicit "Address already in use"
error when a second instance of the server is run with the same
addresses and monitoring enabled:
```
Ready!
WARNING: Diagnostics monitoring server could not get initialised on 0.0.0.0:4104: 'error creating server listener: Address already in use (os error 48)'
Exited with error: [SRO7] Could not serve on 0.0.0.0:1729.
Cause:
tonic::transport::Error(Transport, Os { code: 48, kind: AddrInUse, message: "Address already in use" })
```
## Motivation
The error was never very specific, and it became even more
confusing when combined with the non-terminating error from the monitoring service:
```
Ready!
Diagnostics monitoring server setup error for 0.0.0.0:4104: error creating server listener: Address already in use (os error 48)
Exited with error: tonic::transport::Error(Transport, Os { code: 48, kind: AddrInUse, message: "Address already in use" })
```
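The friendlier error can be sketched as a mapping from the OS-level `AddrInUse` error kind to an explicit message (illustrative only; the function and message text here are assumptions, not TypeDB's actual code):

```rust
use std::io;

// Map a low-level bind failure to an explicit, user-facing message
// instead of surfacing the raw transport error.
fn bind_error_message(addr: &str, err: &io::Error) -> String {
    match err.kind() {
        io::ErrorKind::AddrInUse => format!(
            "Could not serve on {addr}: address already in use. \
             Is another TypeDB server instance running?"
        ),
        _ => format!("Could not serve on {addr}: {err}"),
    }
}

fn main() {
    let err = io::Error::new(io::ErrorKind::AddrInUse, "Address already in use");
    let msg = bind_error_message("0.0.0.0:1729", &err);
    assert!(msg.contains("0.0.0.0:1729"));
    assert!(msg.contains("address already in use"));
    println!("{msg}");
}
```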
## Implementation
… initialization (typedb#7308) ## Release notes: product changes We add integration with Sentry to enable reporting of critical errors (e.g. `panic!`s). This information will help us see and eliminate unexpected TypeDB Server crashes. Please use the `-diagnostics.reporting.errors` boolean option to disable this feature if it is undesirable (note that disabling it will reduce the efficiency of our maintenance). Additionally, an overflow subtraction bug that sometimes affected diagnostics initialization and led to crashes is fixed. ## Motivation The goal of the Sentry integration is to track unexpected errors, which most likely indicate bugs or other issues introduced by developers. It intentionally skips handled user errors, as they are reported to the diagnostics service as "user-related errors". ## Implementation Based on [the official Sentry Rust setup page](https://docs.sentry.io/platforms/rust/), the simplest way to initialize Sentry correctly, without worrying about threads and "sentry hubs", is to leave the Sentry setup in the `main` function instead of integrating it inside the `diagnostics` package (which was the original intent). From an architectural point of view it is not the cleanest solution, but it is a decent approach given how the Sentry crate works in Rust. More comments are left in the PR diff.
## Release notes: product changes We fix the issue where a join on an indexed relation would skip all additional data within the join variable. ## Motivation ## Implementation The `IndexedRelationExecutor` checks whether the input row contains a value for the relation instance (regardless of mode). That value is then used as a filter for the produced iterator. During a cartesian product, instead of the input row, the previous intersection row (i.e., the output row) was passed into the executor during (re)opening. That meant that the only accepted relation index tuples would have to come from the same relation as the first one encountered.
## Motivation When we, e.g., look up an attribute of specific type of a bound entity, then the cost should be proportional to the average number of attributes _of that type_ of the entity (_not_ average number of all attributes of the entity). ## Implementation Fixed the cost computation logic.
## Release notes: product changes Implements a release pipeline for Windows which uses cargo instead of bazel to build TypeDB. ## Motivation Long sandbox paths from bazel cause the build to fail; thus we implement a release pipeline without bazel. ## Implementation 1. Use choco to install dependencies. 2. `cargo build` builds `typedb_server_bin.exe`. 3. `git apply` a patch which imports `typedb_server_bin.exe` as a file and adapts the packaging target accordingly. 4. Use the regular bazel deploy targets to deploy.
…7310) ## Release notes: product changes Extends query planning to consider functions. Non-recursive functions add up planning cost estimates of every triggered function body. Recursive function planning currently just sets the recursive call cost to 1. ## Motivation Allow query planning to have better cost estimates for function calls. ## Implementation * Functions are planned together when being compiled. * Fixes various bugs in disjunction planning. * Also includes some type-seeding fixes * Fixes a bug where the function manager always used the wrong sequence number to restore from.
## Release notes: product changes We merge development into master for 3.0.2 release.
…latforms (typedb#7316) ## Release notes: product changes Makes the directory structure of the windows distribution consistent with other platforms. ## Motivation Consistency with other platforms. ## Implementation Moves typedb_server_bin.exe into the server folder & updates the launcher batch file accordingly.
…db#7318) ## Release notes: product changes We fix TypeDB's crash reporting crashing when CA certificates are not found on the host. An additional Sentry (our crash-reports endpoint) warning will be reported, but it no longer affects the server's availability. ## Motivation ## Implementation If reporting is not successful for either endpoint, a warning/error (based on the cause) will be reported to Sentry, and reporting for that endpoint will stop. For simplicity, the job will still be activated every hour, but it will quit after a boolean flag check. The next real attempt to report anything will happen only after a restart. The logic described above works for the two endpoints separately. This way, if only one endpoint fails (which should not be the case now), the other one continues working as expected, reporting data every hour.
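The per-endpoint shutoff can be sketched as follows (a minimal illustration with assumed names — `EndpointReporter` is not TypeDB's actual type, and the real job is scheduled hourly):

```rust
// Each endpoint keeps its own flag; once reporting fails, subsequent
// cycles return immediately until the server restarts.
struct EndpointReporter {
    enabled: bool,
    reports_sent: u32,
}

impl EndpointReporter {
    fn run_cycle(&mut self, send: impl Fn() -> Result<(), String>) {
        if !self.enabled {
            return; // quit early: reporting previously failed for this endpoint
        }
        match send() {
            Ok(()) => self.reports_sent += 1,
            Err(_) => self.enabled = false, // warn once, then stop until restart
        }
    }
}

fn main() {
    let mut reporter = EndpointReporter { enabled: true, reports_sent: 0 };
    reporter.run_cycle(|| Err("CA certificates not found".to_string()));
    reporter.run_cycle(|| Ok(())); // skipped: flag is off
    assert_eq!(reporter.reports_sent, 0);
    assert!(!reporter.enabled);
}
```

A second `EndpointReporter` for the other endpoint fails or succeeds independently, which is the separation the PR describes.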
## Release notes: product changes Replaces usages of the todo macro with either errors which will be returned, or custom macros if the line is unreachable. This reduces server crashes due to panics when execution reaches the todo. ## Motivation Avoid server crashes due to unimplemented code-paths. ## Implementation Replaces usages of the todo macro (which would cause server crashes when hit) with either errors which will be returned, or custom macros if the line is unreachable. The custom macros (defined in `error.rs`) use an enum `UnimplementedFeature` which helps track unimplemented code by feature. This should be useful in the future when we're getting around to implementing them since we can just delete the variant and see what breaks.
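A minimal sketch of the pattern (the `QueryError` wrapper and the specific enum variants here are illustrative; only the `UnimplementedFeature` enum name comes from the PR):

```rust
// Replacing `todo!()` panics with returned errors: the server can surface
// an unimplemented feature as a user-facing error instead of crashing.
#[derive(Debug, Clone, Copy, PartialEq)]
enum UnimplementedFeature {
    Structs,       // hypothetical variant for illustration
    OptionalReads, // hypothetical variant for illustration
}

#[derive(Debug, PartialEq)]
enum QueryError {
    Unimplemented(UnimplementedFeature),
}

// Instead of `todo!()`, which would abort execution when hit:
fn execute_struct_query() -> Result<(), QueryError> {
    Err(QueryError::Unimplemented(UnimplementedFeature::Structs))
}

fn main() {
    // The caller handles the error; no panic, no server crash.
    assert_eq!(
        execute_struct_query(),
        Err(QueryError::Unimplemented(UnimplementedFeature::Structs))
    );
    println!("handled unimplemented feature without panicking");
}
```

Deleting a variant later makes every use site a compile error, which is the "see what breaks" workflow mentioned above.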
## Release notes: product changes Enables tests for various read & write behaviour. ## Motivation ## Implementation * Introduces test files for each disabled feature, runs them, and either - fixes bugs that fall out; or - ignores cases for bigger known bugs / features that aren't supported yet. * Enables these BDD tests in CI
## Release notes: product changes Add 'dec' suffix to display of decimal ## Motivation ## Implementation Also bumps TypeDB Behaviour, which makes the dec suffix consistently used. The change in fmt::display for decimal is required for tests to pass.
…ated config crashes by user errors. (typedb#7325)
## Release notes: product changes
Update the `--version` flag to return the correct version of the server when requested. Prevent the server from crashing when an incorrect encryption configuration is supplied: it now stops gracefully and returns a defined error instead.
## Motivation
## Implementation
Behavior of the `--version` flag:
```
bazel-bin/typedb_server_bin --version
server 3.0.3
cargo run --package typedb_server_bin --bin typedb_server_bin -- --version
server 3.0.3
```
Behavior when insufficient encryption flags are passed:
```
Ready!
Exited with error: [SRO8] TLS certificate path must be specified when encryption is enabled.
Ready!
Exited with error: [SRO10] Could not read TLS certificate from 'none'.
Cause:
Os { code: 2, kind: NotFound, message: "No such file or directory" }
Ready!
Exited with error: [SRO13] Failed to configure TLS.
Cause:
tonic::transport::Error(Transport, PrivateKeyParseError)
```
The last one is a little strange, but it is how tonic errors are converted to strings, so at least it says something about parsing.
## Release notes: product changes Bump version & prepare release notes for 3.0.4
## Release notes: product changes TypeDB now reports extended error stack traces for all error types in the compiler and the intermediate representation builder, improving debuggability and ease-of-use. ## Motivation Improving UX for users of TypeDB. ## Implementation We convert the following errors into `TypeDBError` implementations: - `ExpressionCompilationError` - `ExpressionExecutionError` - `WritecompilationError` - `AnnotationError` - `LiteralParseError` - `ExpressionRepresentationError` - `FunctionReadError` This PR does not yet convert all the errors in `//concept`, `//encoding`, or `//storage`
## Release notes: product changes
Fix the behavior of `relates` specialization, featuring:
* Unblocked double specialization:
```
define
relation family-relation relates member @abstract;
relation parentship sub family-relation, relates parent as member, relates child as member; # Good!
```
* Fixed validations for multi-layered specializations:
```
define
relation family-relation relates member @abstract, relates pet @abstract;
relation parentship sub family-relation, relates parent as member;
relation fathership sub parentship, relates father as member; # Bad!
relation fathership sub parentship, relates fathers-dog as pet; # Good!
```
* Better definition resolution and error messaging.
* Change the inner terminology for generated `relates` for specialization: "root" and "non specializing" are replaced by "explicit", and "specializing" is replaced by "implicit".
## Motivation
We fix typedb#7322 and connected issues.
## Implementation
Two steps:
**1.** The definable resolution was not correctly implemented for capabilities, specifically for specialized relates. The new logic is the following:
* While working with the Concept API, we **almost** always want to get all capabilities, and the specialized capabilities should be returned with the `@abstract` annotation. Thus, we trust the regular Concept API methods everywhere.
* However, (re/un/)definitions are different. They need to find the actually defined capabilities, not the ones generated implicitly. Thus, a new set of methods is needed for these `non_specialised` searches (we decided to be explicit with the naming). Moreover, they need to be straightforward, searching for the definition a user would think about.
With this simplified search (the previous logic was quite complicated, at least in its reasoning, and thus error-prone), the first part of the problem has been solved. Now, define and redefine always check that they have a **non-specialized** relates declared for them.
Undefine uses a "transitive" search, looking for the closest non-specialized relates when a role type is mentioned.
**2.** The validations for specializations were not quite correct: they could still allow some invalid usages of the Concept API. Now, we check 2 conditions in multiple steps:
* The specialized `relates` must be possessed by my supertype (the previous check was similar, but applied to a role type in general, so it didn't quite work).
* The specialized `relates` must not itself be specializing (as before).
P.S. Sadly, the current state of the Concept BDDs, even with a huge number of tests, is not enough, for multiple reasons (the first is simply the additional logic in definitions that can be buggy, plus the need to produce more and more Concept APIs). I have some thoughts which could help us bring the old idea of auto-conversion to queries into reality (probably only for schema queries: that will already be enough), but it will take at least a week to refactor and implement. So we'll see... In a year... Or more...
…#7334) ## Release notes: product changes We fix the logic in the lowering of query plans to executables that determines the direction of indexed relation instructions. ## Motivation The previous logic was likely wrong. ## Implementation Trivial.
## Release notes: product changes
Introduce role-player deduplication for role players specified together in a single links constraint. E.g., `$r links (my-role: $p, my-role: $q);` will not use the same edge twice to satisfy the two sub-constraints.
Writing them as separate links constraints, `$r links (my-role: $p); $r links (my-role: $q);`, will not de-duplicate.
## Motivation
Brings us closer to the intended semantics and improves stability.
## Implementation
* Introduces a `RolePlayerDeduplication { links1: Links, links2: Links }` constraint in the IR.
* During translation, a constraint `$rel links ($r1: $p1, $r2: $p2, ...)` introduces one `RolePlayerDeduplication` constraint for each pair of introduced `Links` constraints (i.e. `RolePlayerDeduplication(Links::new($rel, $ri, $pi), Links::new($rel, $rj, $pj))` for i < j).
* Also fixes a bug where the planner copied over local variables from a negation into the outer scope.
* Reintroduces negation BDD.
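The pairwise generation during translation can be sketched as follows (an illustrative helper, not TypeDB's IR code):

```rust
// For the role-player links produced from a single `links` constraint,
// emit one deduplication constraint per unordered pair (i < j).
fn dedup_constraints(links: &[&str]) -> Vec<(String, String)> {
    let mut constraints = Vec::new();
    for i in 0..links.len() {
        for j in (i + 1)..links.len() {
            constraints.push((links[i].to_string(), links[j].to_string()));
        }
    }
    constraints
}

fn main() {
    // `$rel links ($r1: $p1, $r2: $p2, $r3: $p3)` introduces 3 Links
    // constraints, hence C(3, 2) = 3 deduplication constraints.
    let links = ["Links($rel,$r1,$p1)", "Links($rel,$r2,$p2)", "Links($rel,$r3,$p3)"];
    let constraints = dedup_constraints(&links);
    assert_eq!(constraints.len(), 3);
}
```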
…b#7335)
## Release notes: product changes
TypeDB now shows the source of an error in the context of the original query, where possible. In general, we aim to show a detailed error message in the context of the original query whenever the error arises in the compilation phase of the query. Example of the improved error format:
```
[QEX2] Failed to execute define query.
Near 4:29
-----
define
attribute name value string;
--> entity person owns name @range(0..10);
                                ^
-----
Caused by:
[DEX25] Defining annotation failed for type 'person'.
Caused by:
[COW4] Concept write failed due to a schema validation error.
Caused by:
[SVL34] Invalid arguments for range annotation '@range(0..10)' for value type 'Some(String)'.
```
## Motivation
TypeDB 3.x's error messages show a "stack trace" of causation that led to a particular failure mode. However, these errors almost always originate somewhere in the user's original query. TypeDB now includes a snippet of the original query, pinpointing (on a best-effort basis) the place in the query that triggered the failure. This improves self-help and debuggability dramatically.
## Implementation
* We add `source_query` and `source_span` fields as special entries in `TypeDBError` macro invocations. Error stacks that contain both a source query and a source span will, on formatting, produce an excerpt of the query based on the source span.
* We add source spans throughout the `//ir` layer, which will be used to produce errors if they are encountered.
* In general, we do a best-effort placement of the Span, based on the most accurate possible origin in the TypeQL query.
### Tradeoffs
We introduce some architectural awkwardness: all packages that use `TypeDBError` macros must depend on `TypeQL`, in case they decide to include a `Span` query pointer. This includes low-level system packages!
We choose to accept this instead of building an alternative representation of `Span` that isn't tied to TypeQL to avoid major complexity: converting a `Span`, which is what TypeQL currently offers, to a line, column `usize` pair would require access to the query string. The 'correct' solution may be to change TypeQL to pre-compute (line, col) positions as well as/instead of Spans, that TypeDB can use throughout.
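The excerpt rendering described above can be sketched as follows (a minimal illustration; the function name, offsets, and formatting are assumptions, not TypeDB's implementation):

```rust
// Render the query line identified by a 1-based (line, column) span,
// with a caret underneath, similar to the `--> ... ^` excerpt format.
fn excerpt(query: &str, line: usize, col: usize) -> String {
    let mut out = String::new();
    for (idx, text) in query.lines().enumerate() {
        if idx + 1 == line {
            out.push_str("--> ");
            out.push_str(text);
            out.push('\n');
            // 4 spaces account for the "--> " prefix; col is 1-based.
            out.push_str(&" ".repeat(4 + col - 1));
            out.push_str("^\n");
        }
    }
    out
}

fn main() {
    let query = "define\nentity person owns name @range(0..10);";
    let rendered = excerpt(query, 2, 25); // caret under `@range`
    assert!(rendered.starts_with("--> entity person"));
    assert!(rendered.ends_with("^\n"));
    println!("{rendered}");
}
```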
## Release notes: product changes Remove operation-time validation of cardinalities. Now, both schema and data modifications lead to cardinality revalidation only on commits. This fix solves typedb#7317. ## Motivation Previously, cardinalities were checked at operation time for schema modifications, while data modifications only verified them at commit time. This inconsistency led to a less smooth user experience of expected validations and possible actions, and blocked a couple of complicated and rare schema-modification cases. The main issue discovered was a combination of the `define` query logic and the need to revalidate every Concept API call affecting cardinalities. When we declared a new `owns name @card(X..Y)`, two cardinality validations happened at operation time: a validation of the default cardinality for `owns name` and a validation of the `@card(X..Y)` cardinality. However, the restrictive default cardinality (which is `@card(0..1)` for owns) would block this schema mutation if a subtype instance of this type already had multiple instances of `name`s (or subnames). The issue described above could be resolved by skipping the default validation if a cardinality is defined. However, this would require us to consider multiple changes: * Changing the architecture of `define`, allowing it to look at the annotations while declaring capabilities, and considering queries like `define person owns name; person owns name @card(...); person owns name; person owns name;` -- quite a complication. * Passing the cardinality into the `set_owns` API call, making it less granular. * Or introducing another signal and, thus, state to the Concept API to call for cardinality validations. Overall, combining this with the asymmetry of schema/data validations and the possible confusion for users, we decided to disable the operation-time validations for cardinalities, which is both easier and more natural for the end user.
## Implementation Introduce a new parameter, "is operation time validated", for constraints, and add the respective filtering for constraint operation-time validations in the Type Manager. Extend the list of features considered by the Thing Manager's commit validations when constructing the list of affected instances for cardinality checks. Now, we track all object subtyping changes, interface subtyping changes, trait changes, and annotation changes. See code comments for more information.
## Release notes: product changes Adds a flag to type-seeder to indicate the stage is a write stage. This ensures variables constrained by `isa`, or labelled roles are seeded with the exact type, and not (transitive) subtypes. This fixes a bug where one could not insert a role-player for a role with a subtype. ## Motivation Fixes typedb#7333 ## Implementation Adds a flag to type-seeder to indicate the stage is a write stage. This ensures variables constrained by `isa`, or labelled roles are seeded with the exact type, and not (transitive) subtypes. Updates the TypeSeeder `UnaryConstraint` for Isa and RoleName
…edb#7339) ## Release notes: product changes Include deleted concepts in structural equality for delete stage. This avoids a bug where the query cache picks the wrong cached query and runs it - allowing the execution of a delete stage with a totally different set of deleted concepts. ## Motivation Fixes wrong deleted behaviour due to query-caching; Fixes typedb#7321 ## Implementation Include deleted concepts in structural equality for delete stage
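The fix can be illustrated with a toy cache key (the types and fields here are assumed, not TypeDB's actual structures): once deleted concepts participate in equality and hashing, two delete stages with the same match pattern but different deletions no longer collide in the cache.

```rust
use std::collections::HashMap;

// If `deleted_concepts` were omitted from Hash/Eq, stages A and B below
// would be "structurally equal" and share one (wrong) cached executable.
#[derive(Hash, PartialEq, Eq, Clone)]
struct DeleteStageKey {
    pattern: String,
    deleted_concepts: Vec<String>, // previously not part of the key
}

fn main() {
    let mut cache: HashMap<DeleteStageKey, &str> = HashMap::new();
    let a = DeleteStageKey {
        pattern: "match $p isa person;".into(),
        deleted_concepts: vec!["$p".into()],
    };
    let b = DeleteStageKey {
        deleted_concepts: vec!["has name of $p".into()],
        ..a.clone()
    };
    cache.insert(a.clone(), "plan-A");
    // With deleted concepts in the key, stage B no longer hits plan-A.
    assert!(cache.get(&b).is_none());
    assert_eq!(cache.get(&a), Some(&"plan-A"));
}
```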
## Release notes: product changes We improve the interaction of three subsystems: 1. Planner logic for selecting the pattern traversal direction. 2. Planner logic for selecting sort (a.k.a. join) variables. 3. Lowering logic for both, which in some cases overwrote choices made by the planner, leading to errors. ## Motivation Often these pieces were out of sync, leading to incorrect query executables. ## Implementation 1. Cost computations now take sort-variable information into account. 2. Lowering no longer overwrites sort variables.
## Product change and motivation Bump version & prepare release notes for 3.5.5
## Product change and motivation Tests the query structure returned by the analyze endpoint (and used by Studio's graph visualizer). ## Implementation * Implements the steps & functor encoding used in typedb/typedb-behaviour#381 * Small fixes all around.
## Product change and motivation Cucumber codegen leads to significantly bloated rlibs. We remove unused step variant implementations (Given, When, Then) as low hanging fruit.
## Product change and motivation Add analyze to GRPC ## Implementation Adds the analyze RPC endpoint, and associated data structures. Notably adds queueing for analyze queries in the HTTP API `query_queue`
## Product change and motivation
We implement `try {}` block handling in all write stages, viz. `insert`,
`delete`, `put`, and `update`. Only top-level `try` blocks are currently
allowed, with no nesting.
Try blocks in write clauses **only execute when all variables are set**.
Deleting an optionally present ownership:
```
match $p isa person, has name $name; try { $p has email $email; };
delete try { has $email of $p; };
```
`Delete`-ing an optionally found relation:
```
match $p isa person, has name $name; try { $f isa friendship, links ($p); };
delete try { $f; };
```
Optionally `insert`-ing:
```
match
friendship ($p, $q);
$p isa person; try { $p has age $age; };
insert try { $q has $age; };
```
In `reduce` operations, unset variables (set to `None`) are treated as if they are not present at all; for example, `count` will only count set variables:
```
match $x isa person; try { $x has age $y; };
reduce $red_var? = count($y);
```
`put` can also use `try`:
```
match
$p isa person, has ref $ref; try { $p has age $age; };
put $q isa also-person, has $ref; try { $q has $age; };
```
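The `reduce` semantics above can be sketched in Rust (illustrative only; TypeDB's executor represents unset variables internally, not as `Option` slices):

```rust
// Unset optional variables arrive as `None` and are skipped by `count`,
// so only rows where the variable was actually set contribute.
fn count_set(values: &[Option<i64>]) -> usize {
    values.iter().filter(|v| v.is_some()).count()
}

fn main() {
    // Three people matched; only two had an `age` attribute.
    let ages = [Some(30), None, Some(41)];
    assert_eq!(count_set(&ages), 2);
}
```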
## Implementation
We also update pipeline validation during translation, since TypeQL is
now more permissive.
## Product change and motivation Update CircleCI mac executors to `m4pro.medium` and xcode version to `16.4.0` in view of upcoming deprecations.
## Product change and motivation Align the HTTP analyze response with GRPC. ## Implementation
## Product change and motivation All HTTP messages will now silently ignore unused fields. This avoids breaking compatibility when an optional field is added to a client request payload: the server will simply ignore the field. This means that the addition of any field which must not be ignored must explicitly increment the API version. GRPC messages will have an extension field going forward. Newer drivers (>3.5.0) with older servers (<3.5.x) may face "forward compatibility" issues, where a method in the driver does not exist on the server and returns an error. Newly added options may also be ignored by the older server.
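The tolerant-decoding policy can be sketched as follows (a hand-rolled illustration over key-value pairs; the server actually decodes JSON payloads, and the field names here are assumptions):

```rust
// Only known fields are read; unknown fields are silently dropped, so a
// newer client can send extra optional fields without breaking an older server.
#[derive(Debug, Default, PartialEq)]
struct QueryRequest {
    query: String,
    // A field added in a later API version would simply be ignored below.
}

fn decode(fields: &[(&str, &str)]) -> QueryRequest {
    let mut request = QueryRequest::default();
    for (key, value) in fields {
        match *key {
            "query" => request.query = value.to_string(),
            _ => {} // silently ignore unknown fields
        }
    }
    request
}

fn main() {
    let request = decode(&[("query", "match $x isa person;"), ("new_option", "true")]);
    assert_eq!(request.query, "match $x isa person;");
}
```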
## Product change and motivation We ensure that panics are written to the configured `tracing` log file, by intercepting the panic event and routing it via the logger. This code is borrowed from https://github.com/LukeMathWalker/tracing-panic ! ## Implementation We decided not to use the published crate for such a small function to minimize dependencies and future security validation work.
## Product change and motivation The expression executor was not copying over provenance from the input row. This change fixes that.
…tructure (typedb#7631) ## Product change and motivation Allows a named role to be encoded as a fully specified label when encoding the pipeline structure. This is needed to handle `match $r relates relation:role;`. ## Implementation If it is not a named role, it falls through to encoding a variable or label.
## Product change and motivation We update the release notes and bump the version to 3.7.0-rc0. ## Implementation
## Product change and motivation We improve the error message returned when type-seeding fails before the iterative pruning step by including the constraint name in the message. We also make type-inference propagate labels across 'isa' constraints first, since these are likely to be the most informative. This makes for more intuitive error messages when the 'isa' constraint is the unsatisfiable one.
## Product change and motivation We implement the following functions for `Decimal`: - `abs`, - `round`, - `ceil`, - `floor`. This completes the value type coverage for implemented intrinsic functions. ## Implementation
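A toy fixed-point sketch of these operations (illustrative only: TypeDB's `Decimal` internals, the scale used here, and the half-away-from-zero rounding mode are all assumptions):

```rust
// Values are stored as integer multiples of 10^-4 (SCALE = 10_000),
// so -2.5 is represented as -25_000.
const SCALE: i128 = 10_000;

fn floor_dec(v: i128) -> i128 {
    v.div_euclid(SCALE) * SCALE // round toward negative infinity
}

fn ceil_dec(v: i128) -> i128 {
    -floor_dec(-v) // round toward positive infinity
}

fn abs_dec(v: i128) -> i128 {
    v.abs()
}

fn round_dec(v: i128) -> i128 {
    // round half away from zero (an assumed rounding mode)
    if v >= 0 {
        (v + SCALE / 2).div_euclid(SCALE) * SCALE
    } else {
        -round_dec(-v)
    }
}

fn main() {
    let v = -2 * SCALE - 5_000; // -2.5
    assert_eq!(floor_dec(v), -3 * SCALE);
    assert_eq!(ceil_dec(v), -2 * SCALE);
    assert_eq!(abs_dec(v), 2 * SCALE + 5_000);
    assert_eq!(round_dec(v), -3 * SCALE); // half away from zero
}
```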
## Product change and motivation We only include the pipeline structure in concept row responses if the include_query_structure flag in query options is set.
## Product change and motivation Bump version, dependencies, cargo sync, prepare release notes
…#7645) ## Product change and motivation We improve the TypeDB Docker setup by using `/var/lib/typedb/data` for the architecture-agnostic data directory for TypeDB. This path is hardcoded into the built-in docker command for starting the command. This means, we now simplify the docker external volume mount to be: `docker volume create typedb-data` and `docker create --name typedb -v typedb-data:/var/lib/typedb/data -p 1729:1729 -p 8000:8000 typedb/typedb:latest` Which works for either ARM or x86 builds. ## Implementation Use a different storage volume path for TypeDB Docker, and configure docker images that we build to use `/var/lib/typedb/data` as the storage directory.
## Product change and motivation Update release notes for 3.7.1
## Product change and motivation
Implement min and max for expressions, usable for numerical types (`integer`, `double`, and `decimal`). They work for exactly 2 arguments of identical type:
```
match
let $x = min(10, 12);
let $y = max(10, 12);
```
## Implementation
Add new op codes and compilation for min and max in expressions, and new behaviour tests via a git dependency. Depends on typedb/typedb-behaviour#394
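The two-argument semantics can be sketched as follows (illustrative Rust over `i64`; TypeDB implements these as expression op codes over its numeric value types):

```rust
// Two-argument min/max: both arguments share one numeric type, and the
// smaller/larger value is returned.
fn min2(a: i64, b: i64) -> i64 {
    if a <= b { a } else { b }
}

fn max2(a: i64, b: i64) -> i64 {
    if a >= b { a } else { b }
}

fn main() {
    // Mirrors: match let $x = min(10, 12); let $y = max(10, 12);
    assert_eq!(min2(10, 12), 10);
    assert_eq!(max2(10, 12), 12);
}
```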
## Product change and motivation Add `--remote_download_toplevel` to the bazel-rc for build jobs, which improves CI build times using the remote cache significantly (10-20% baseline) by not downloading intermediate rule outputs. We also allow parallelization of multiple checkstyle jobs.
## Product change and motivation Add a requirement for an `initial_delay` of diagnostics reports which, if not met, forces the reporter to skip the first report cycle. This allows silencing CI jobs run by users of TypeDB who do not use the `diagnostics.reporting.metrics` flag, without significant harm to the data (the data will be added to the next report unless the server stops early). Additionally, clean up the diagnostics logic by removing outdated TypeDB 2.x code not used in TypeDB 3.x. ## Implementation * Update `reporter.rs`: add `REPORT_INTERVAL` to the calculated initial delay if it is too small. * Remove the `is_owned` flag of the database metrics: all databases are "owned" by each replica of Cluster. In 3.x, the primary replica is selected for the whole TypeDB infrastructure, not per database. Thus, this flag is not needed.
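The initial-delay rule can be sketched as follows (the minimum-delay threshold here is an assumed value, not the actual one; only `REPORT_INTERVAL` being added when the delay is too small comes from the PR):

```rust
use std::time::Duration;

const REPORT_INTERVAL: Duration = Duration::from_secs(3600);
// Assumed threshold for illustration; the real minimum is not stated here.
const MIN_INITIAL_DELAY: Duration = Duration::from_secs(300);

// If the computed initial delay is below the minimum, push the first report
// one full interval later, effectively skipping the first report cycle.
fn initial_delay(computed: Duration) -> Duration {
    if computed < MIN_INITIAL_DELAY {
        computed + REPORT_INTERVAL
    } else {
        computed
    }
}

fn main() {
    // A short-lived CI job never reaches its (delayed) first report.
    assert_eq!(initial_delay(Duration::from_secs(10)), Duration::from_secs(3610));
    // A long-running server reports on its normal schedule.
    assert_eq!(initial_delay(Duration::from_secs(600)), Duration::from_secs(600));
}
```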
This commit adds a practical 'Hello World' quickstart guide at the end of the README, demonstrating basic TypeDB usage including: - Starting the server and console - Creating a database - Defining a simple schema - Inserting and querying data Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Cursor Bugbot has reviewed your changes and found 4 potential issues.
```
# bazel run @typedb_dependencies//tool/bazelinstall:remote_cache_setup.sh
# bazel build --jobs=8 //:assemble-typedb-all --compilation_mode=opt
# bazel test //tests/assembly:test_assembly --test_output=errors
```
Assembly tests skipped in deployment CI workflows
Medium Severity
The test-assembly-unix command only echoes 'skipped' instead of running actual assembly tests, but it's invoked in multiple deployment workflows (test-deploy-snapshot-linux-x86_64, test-deploy-snapshot-linux-arm64, etc.). This means snapshot deployments proceed without verifying the assembly builds correctly. The TODO comment explicitly acknowledges this: "It's still commented out!!! Implement or remove".
```
steps:
  - checkout
  - run: .circleci\windows\prepare.bat
  - run: REM "Restore .circleci\windows\test_assembly.bat"
```
Windows test step is effectively a no-op comment
Low Severity
The step run: REM "Restore .circleci\windows\test_assembly.bat" does nothing because REM is a Windows batch comment. This appears to be a placeholder comment disguised as a CI step within the test-deploy-snapshot-windows-x86_64 job, meaning Windows deployments also proceed without assembly testing.
```
# Package Files #
*.jar
# Package
*.jarg
```
Gitignore typo causes jar files not to be ignored
Low Severity
The pattern *.jarg appears to be a typo for *.jar. The old .gitignore had *.jar to ignore Java Archive files, but this was changed to *.jarg which is not a valid file extension. This means .jar files will no longer be ignored and could be accidentally committed to the repository.
```
},
ports = ["1729", "8000"],
tars = [":assemble-server-linux-x86_64-targz"],
visibility = ["//test:__subpackages__"],
```
Docker image visibility references non-existent test directory
Low Severity
The assemble-docker-x86_64 and assemble-docker-arm64 targets use visibility = ["//test:__subpackages__"] but the project uses a tests directory (plural), not test. All other targets in the file consistently use //tests/assembly:__subpackages__. This inconsistency means the docker image targets may not be accessible from the test subpackages as intended.
Change & Motivation
This PR adds a 'Hello World' section at the bottom of the TypeDB README to help new users quickly get started with TypeDB. The section demonstrates the essential steps: starting the server and console, creating a database, defining a simple schema, and inserting and querying data.
This addition improves the onboarding experience by providing a practical example right in the main repository README.
Implementation
Related to Linear issue INT-1
Note
Modernizes build and release infrastructure and adds quickstart docs.
- Adds Bazel build configuration (`.bazelrc`, `.bazelversion`, `WORKSPACE`, `BUILD`) to build the Rust server and assemble cross-platform distributions (zip/tar.gz)
- Adds CI scripts (`.circleci/windows/*`) and a patch for Windows packaging
- Adds automation config (`.factory/automation.yml`) for build/tests (unit/integration/behaviour)
- Updates `.gitignore`; removes `.travis.yml`
- Updates `README.md` with a "Hello World" quickstart section

Written by Cursor Bugbot for commit dc610b2. This will update automatically on new commits.