GUBUS TRANSFORM
TRANSFORM performs field-level data transformations using 63+ atomic operations. Modifies, calculates, formats, and manipulates object fields.
GUBUS TRANSFORM Flow (Field Operations)
Overview
When to use:
- Copy or calculate field values
- Format strings and concatenate fields
- Parse and generate dates
- Array operations and field cleanup
Key characteristics:
- 63+ operations in 9 categories
- Conditional execution with 30+ if_condition types
- Multi-source field support
- Context and side object access
How It Works
[For each object in localObjs]
→ Check if_condition (if specified)
→ Extract from_fields values
→ Execute command operation
→ Write result to destination_field
Operations are atomic and can be chained together in sequential transform properties.
The order of placed rows matters! Rules execute in sequence, so make sure any field needed by a later step is created by an earlier one.
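The per-object flow above can be sketched in Python. This is a hypothetical mini-engine for illustration only (the rule and condition representations are assumptions, not the real GUBUS internals):

```python
# Minimal sketch of the TRANSFORM flow. Each rule is checked against its
# if_condition, values are extracted from from_fields, the command runs,
# and the result is written to destination_field.

def run_transforms(local_objs, rules, commands):
    for obj in local_objs:                      # [For each object in localObjs]
        for rule in rules:                      # rules execute in placed order
            cond = rule.get("if_condition")
            if cond and not cond(obj):          # check if_condition (if specified)
                continue
            sources = ([obj.get(f) for f in rule["from_fields"].split(",")]
                       if rule.get("from_fields") else [])
            op = commands[rule["command"]]      # execute command operation
            obj[rule["destination_field"]] = op(sources)  # write result
    return local_objs

# Order matters: "total" must exist before the second rule copies it.
commands = {"PLUS": lambda s: s[0] + s[1], "COPY": lambda s: s[0]}
rules = [
    {"command": "PLUS", "from_fields": "subtotal,tax", "destination_field": "total"},
    {"command": "COPY", "from_fields": "total", "destination_field": "grand_total"},
]
objs = run_transforms([{"subtotal": 100, "tax": 8}], rules, commands)
```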
Rule Properties
| PROPERTY | DESCRIPTION |
|---|---|
| reference | Required. Links transform rules. Example: "calculateTotals". The same reference may be shared by multiple rules |
| command | Required. Operation to perform. Example: "COPY", "PLUS", "FORMAT" |
| from_fields | Source field(s); comma-separated for multiple. Example: "price,quantity" |
| destination_field | Required. Where to write the result. Example: "total" |
| pattern | Relevant for FORMAT, OVERWRITE, ROUND operations |
| if_condition | Conditional execution. See conditional-execution.md |
| if_field | Field for condition check (dynamic check) |
| if_value | Value for condition comparison (static check) |
| source_type | Source of data. "side_obj" for side objects; omit for local objects |
| side_ref_if_not_local | Reference to side objects when source_type is "side_obj". Example: "configData" |
| message | Message appended to mainMessage or localMessage after transform execution |
| message_args | Arguments for message formatting. Comma-separated field names inserted into the message |
| log_false_transformation | Boolean. Log a warning when the transformation doesn't execute (condition false) |
| break_if_false_transform | Boolean. Abort execution and falsify the schema if the transformation fails |
Operation Categories
1. Basic Operations
COPY - Copy field value
{"command": "COPY", "from_fields": "name", "destination_field": "display_name"}
**Input:** { name: "Product A", price: 100 }
**Output:** { name: "Product A", price: 100, display_name: "Product A"}
OVERWRITE - Set static value
Whether or not the destination field exists, it is overwritten with the specified pattern value.
{"command": "OVERWRITE", "destination_field": "status", "pattern": "completed"}
**Input:** `{order_id: "O123", status: "pending"}`
**Output:** `{order_id: "O123", status: "completed"}`
**Input:** `{order_id: "O123"}` (no status field)
**Output:** `{order_id: "O123", status: "completed"}`
FORMAT - Template string composition
The modern way to compose strings using placeholders. Not implemented yet.
{"command": "FORMAT", "from_fields": "first_name,last_name", "destination_field": "full_name",
"pattern": "{first_name} {last_name}"}
**Input:** `{first_name: "John", last_name: "Doe"}`
**Output:** `{first_name: "John", last_name: "Doe", full_name: "John Doe"}`
FORMAT (legacy) - Positional template string composition. Quirky, but that's how it works for now: each `[elem]` placeholder is replaced by the next source field in order.
{"command": "FORMAT", "from_fields": "product,price", "destination_field": "display",
"pattern": "[elem] - [elem]"}
**Input:** `{product: "Widget", price: 25.50}`
**Output:** `{product: "Widget", price: 25.50, display: "Widget - 25.50"}`
2. Math Operations
PLUS - Add two fields and write the result to the destination field. Source fields are left unchanged.
{"command": "PLUS", "from_fields": "subtotal,tax", "destination_field": "total"}
**Input:** `{subtotal: 100, tax: 8}`
**Output:** `{subtotal: 100, tax: 8, total: 108}`
Incremental PLUS note: with only one source field configured, PLUS adds it to the existing destination field in place:
{"command": "PLUS", "from_fields": "tax", "destination_field": "total"}
**Input:** `{total: 100, tax: 8}`
**Output:** `{total: 108, tax: 8}`
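The two PLUS modes can be summarized in a short sketch. The semantics below are inferred from the examples in this section, not taken from the engine source:

```python
# Assumed PLUS semantics: two source fields -> write their sum to the
# destination; a single source field -> add it to the destination in place.
def plus(obj, from_fields, destination_field):
    fields = from_fields.split(",")
    if len(fields) >= 2:
        obj[destination_field] = obj[fields[0]] + obj[fields[1]]
    else:  # incremental mode
        obj[destination_field] = obj.get(destination_field, 0) + obj[fields[0]]
    return obj

two_field = plus({"subtotal": 100, "tax": 8}, "subtotal,tax", "total")
incremental = plus({"total": 100, "tax": 8}, "tax", "total")
```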
MINUS - Subtract (a - b)
{"command": "MINUS", "from_fields": "price,discount", "destination_field": "final_price"}
**Input:** `{price: 100, discount: 15}`
**Output:** `{price: 100, discount: 15, final_price: 85}`
Decremental MINUS note: with only one source field configured, MINUS subtracts it from the existing destination field in place:
{"command": "MINUS", "from_fields": "discount", "destination_field": "total"}
**Input:** `{total: 100, discount: 15}`
**Output:** `{total: 85, discount: 15}`
MULTIPLY - Multiply two or more fields
{"command": "MULTIPLY", "from_fields": "price,quantity", "destination_field": "subtotal"}
**Input:** `{price: 25.50, quantity: 3}`
**Output:** `{price: 25.50, quantity: 3, subtotal: 76.5}`
MULTIPLY_ROUND - Multiply and round (avoids two-step setup)
{"command": "MULTIPLY_ROUND", "from_fields": "price,quantity",
"destination_field": "subtotal", "pattern": "2"}
**Input:** `{price: 25.567, quantity: 3}`
**Output:** `{price: 25.567, quantity: 3, subtotal: 76.70}`
DIVIDE - Divide (a / b)
{"command": "DIVIDE", "from_fields": "total,count", "destination_field": "average"}
**Input:** `{total: 100, count: 3}`
**Output:** `{total: 100, count: 3, average: 33.333333}`
DIVIDE_ROUND - Divide and round (avoids two-step setup)
{"command": "DIVIDE_ROUND", "from_fields": "total,count",
"destination_field": "average", "pattern": "2"}
**Input:** `{total: 100, count: 3}`
**Output:** `{total: 100, count: 3, average: 33.33}`
ROUND / MATH_ROUND - Round number
{"command": "ROUND", "from_fields": "calc_total", "destination_field": "total", "pattern": "2"}
**Input:** `{calc_total: 45.6789}`
**Output:** `{calc_total: 45.6789, total: 45.68}`
MODULUS - Remainder (a % b)
{"command": "MODULUS", "from_fields": "quantity,pack_size", "destination_field": "remainder"}
**Input:** `{quantity: 17, pack_size: 5}`
**Output:** `{quantity: 17, pack_size: 5, remainder: 2}`
PARSE_FLOAT - Convert string to number
Recognizes either . or , as the decimal separator.
{"command": "PARSE_FLOAT", "from_fields": "price_str", "destination_field": "price"}
**Input:** `{price_str: "123.45"}`
**Output:** `{price_str: "123.45", price: 123.45}`
TO_MINUS - Make a number negative. Comes in handy surprisingly often…
{"command": "TO_MINUS", "from_fields": "amount", "destination_field": "refund"}
**Input:** `{amount: 100}`
**Output:** `{amount: 100, refund: -100}`
MULTISUM - Sum multiple fields
{"command": "MULTISUM", "from_fields": "subtotal,tax,shipping", "destination_field": "total"}
**Input:** `{subtotal: 100, tax: 8, shipping: 5}`
**Output:** `{subtotal: 100, tax: 8, shipping: 5, total: 113}`
MARGIN_PROC - Calculate profit margin percentage
{"command": "MARGIN_PROC", "from_fields": "cost,price", "destination_field": "margin_pct"}
**Input:** `{cost: 60, price: 100}`
**Output:** `{cost: 60, price: 100, margin_pct: 40}`
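The example above implies the margin formula. The formula below is an assumption reconstructed from that single example (cost 60, price 100 → 40), not confirmed against the engine:

```python
# Assumed MARGIN_PROC formula: margin % = (price - cost) / price * 100.
def margin_proc(cost, price):
    return (price - cost) / price * 100

margin_pct = margin_proc(60, 100)
```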
MONEY_FORMAT - Format number as currency with spaces
{"command": "MONEY_FORMAT", "from_fields": "amount",
"destination_field": "amount_formatted", "pattern": "UAH"}
**Input:** `{amount: 1250.50}`
**Output:** `{amount: 1250.50, amount_formatted: "1 250.50 UAH"}`
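A minimal sketch of the assumed MONEY_FORMAT behavior, consistent with the example above: thousands groups separated by spaces, two decimal places, and the currency code from the pattern appended:

```python
# Assumed MONEY_FORMAT behavior: space-grouped thousands, 2 decimals,
# currency code suffix taken from the rule's pattern.
def money_format(amount, currency):
    return f"{amount:,.2f}".replace(",", " ") + f" {currency}"

formatted = money_format(1250.50, "UAH")
```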
3. String Operations
TO_UPPER - Convert to uppercase
{"command": "TO_UPPER", "from_fields": "name", "destination_field": "name_upper"}
**Input:** `{name: "Product A"}`
**Output:** `{name: "Product A", name_upper: "PRODUCT A"}`
TO_LOWER - Convert to lowercase
{"command": "TO_LOWER", "from_fields": "status", "destination_field": "status_lower"}
**Input:** `{status: "PENDING"}`
**Output:** `{status: "PENDING", status_lower: "pending"}`
SUBSTRING - Extract substring (start position, length)
{"command": "SUBSTRING", "from_fields": "order_id", "destination_field": "year", "pattern": "0,4"}
**Input:** `{order_id: "2025-A-001"}`
**Output:** `{order_id: "2025-A-001", year: "2025"}`
4. Array Operations
ARR_SUM - Sum array values
{"command": "ARR_SUM", "from_fields": "line_totals", "destination_field": "order_total"}
**Input:** `{line_totals: [100, 50, 75]}`
**Output:** `{line_totals: [100, 50, 75], order_total: 225}`
JOIN_ARRAY - Convert array to string with separator
{"command": "JOIN_ARRAY", "from_fields": "tags", "destination_field": "tags_str", "pattern": ", "}
**Input:** `{tags: ["urgent", "review", "priority"]}`
**Output:** `{tags: ["urgent", "review", "priority"], tags_str: "urgent, review, priority"}`
CONCAT_NEW_LINE - Join array elements with newline
{"command": "CONCAT_NEW_LINE", "from_fields": "lines", "destination_field": "description"}
**Input:** `{lines: ["Line 1", "Line 2", "Line 3"]}`
**Output:** `{lines: ["Line 1", "Line 2", "Line 3"], description: "Line 1\nLine 2\nLine 3"}`
FORMAT_ARRAY_NEW_LINE - Format array with pattern and join with newlines
{"command": "FORMAT_ARRAY_NEW_LINE", "from_fields": "items",
"destination_field": "formatted_list", "pattern": "- {elem}"}
**Input:** `{items: ["Apple", "Banana", "Orange"]}`
**Output:** `{items: ["Apple", "Banana", "Orange"], formatted_list: "- Apple\n- Banana\n- Orange"}`
SPLIT - Split string into multiple fields
{"command": "SPLIT", "from_fields": "full_name",
"destination_field": "first_name,last_name", "pattern": " "}
**Input:** `{full_name: "John Doe"}`
**Output:** `{full_name: "John Doe", first_name: "John", last_name: "Doe"}`
Another example - splitting address:
{"command": "SPLIT", "from_fields": "address",
"destination_field": "street,city,zip", "pattern": ","}
**Input:** `{address: "Main St, New York, 10001"}`
**Output:** `{address: "Main St, New York, 10001", street: "Main St",
city: "New York", zip: "10001"}`
Note: destination_field must contain comma-separated field names. Each split piece is assigned to corresponding destination field by index.
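The by-index assignment can be sketched as follows. Note the whitespace trimming is an assumption inferred from the address example above (where " New York" comes out as "New York"):

```python
# Assumed SPLIT semantics: split the source string on the pattern, then
# assign each trimmed piece to the matching destination field by index.
def split_field(obj, from_field, destination_fields, pattern):
    pieces = obj[from_field].split(pattern)
    for name, piece in zip(destination_fields.split(","), pieces):
        obj[name] = piece.strip()
    return obj

parsed = split_field({"address": "Main St, New York, 10001"},
                     "address", "street,city,zip", ",")
```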
GATHER_VALUES_FROM_ALL_LOCAL_OBJS - Collect field values from all localObjs into context variable
If no pattern is assigned, values are collected into an array.
{"command": "GATHER_VALUES_FROM_ALL_LOCAL_OBJS", "from_fields": "customer_id",
"destination_field": "context:all_customer_ids"}
**Input (multiple objects):** `[{customer_id: "C001"},
{customer_id: "C002"}, {customer_id: "C003"}]`
**Output:** Context variable `all_customer_ids` = `["C001", "C002", "C003"]`
Note: Executes ONLY on first object (idx=0). destination_field must use context: prefix. Result is stored in context variable, not in object field.
GATHER_VALUES_FROM_ALL_LOCAL_OBJS with aggregation (pattern: SUM_INTS)
{"command": "GATHER_VALUES_FROM_ALL_LOCAL_OBJS", "from_fields": "amount",
"destination_field": "context:total", "pattern": "SUM_INTS"}
**Input (multiple objects):** `[{amount: 100}, {amount: 50}, {amount: 75}]`
**Output:** Context variable `total` = `225`
_Available patterns: SUM_INTS, SUM_FLOATS, CONCAT, CONCAT_NEW_LINE_
SQUEEZE_FROM_ALL_SIDE_OBJS - Extract field values from side objects into array
{"command": "SQUEEZE_FROM_ALL_SIDE_OBJS", "from_fields": "product_name",
"destination_field": "product_names", "source_type": "side_obj",
"side_ref_if_not_local": "products"}
**Input (with side objects):** `{order_id: "O123"}`
**Side objects (products):** `[{product_name: "Laptop"}, {product_name: "Mouse"}]`
**Output:** `{order_id: "O123", product_names: ["Laptop", "Mouse"]}`
GATHER_VALUES_FROM_ALL_LOCAL_OBJS and SQUEEZE_FROM_ALL_SIDE_OBJS are NOT duplicates - they serve different purposes.
Key differences:
- Data Source (Most Important!)
- GATHER_VALUES_FROM_ALL_LOCAL_OBJS: Works on localObjs (current objects being processed in the schema)
- SQUEEZE_FROM_ALL_SIDE_OBJS: Works on sideObjs (external data loaded from other schemas/sources via side_ref_if_not_local)
- Destination Storage
- GATHER: Stores in context variable (context:variable_name) - accessible across transforms
- SQUEEZE: Stores in object field (regular field on each object)
- Execution Pattern
- GATHER: Executes ONLY ONCE on first object (idx=0), skips all others
- SQUEEZE: Executes on EVERY object in localObjs
- Configuration Requirements
- GATHER: No additional reference needed (works on current localObjs)
- SQUEEZE: Requires side_ref_if_not_local to specify which side objects to extract from
- Aggregation
- GATHER: Supports optional patterns (SUM_INTS, SUM_FLOATS, CONCAT, CONCAT_NEW_LINE)
- SQUEEZE: No aggregation patterns (just collects into array)
Practical Example:
Scenario: Process orders and collect product details
// Step 1: GATHER customer IDs from current orders (localObjs)
{"command": "GATHER_VALUES_FROM_ALL_LOCAL_OBJS", "from_fields": "customer_id",
"destination_field": "context:all_customers"}
// Result: Context variable all_customers = ["C001", "C002", "C003"]
// Runs ONCE, stores in context variable
// Step 2: SQUEEZE product names from side objects (loaded product catalog)
{"command": "SQUEEZE_FROM_ALL_SIDE_OBJS", "from_fields": "product_name",
"destination_field": "available_products", "side_ref_if_not_local": "productCatalog"}
// Result: Each order gets available_products = ["Laptop", "Mouse", "Keyboard"]
// Runs on EVERY order, stores in object field
Summary: GATHER collects from current objects into a context variable once; SQUEEZE extracts from external side objects into each object's field repeatedly.
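The contrast can be made concrete with a small sketch. Both functions below are illustrative assumptions built from the notes in this section, not the real implementations:

```python
# GATHER: runs once (conceptually on idx=0), stores into the context dict.
def gather(local_objs, field, context, var_name, pattern=None):
    values = [o[field] for o in local_objs]
    if pattern == "SUM_INTS":
        context[var_name] = sum(int(v) for v in values)
    else:  # no pattern -> collect into an array
        context[var_name] = values
    return context

# SQUEEZE: runs for every local object; each gets the array from side objects.
def squeeze(local_objs, side_objs, field, destination_field):
    collected = [s[field] for s in side_objs]
    for o in local_objs:
        o[destination_field] = collected
    return local_objs

ctx = gather([{"amount": 100}, {"amount": 50}, {"amount": 75}],
             "amount", {}, "total", "SUM_INTS")
orders = squeeze([{"order_id": "O123"}],
                 [{"product_name": "Laptop"}, {"product_name": "Mouse"}],
                 "product_name", "product_names")
```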
5. Date Operations
GENERATE_DATE_TODAY - Generate current date
{"command": "GENERATE_DATE_TODAY", "destination_field": "created", "pattern": "yyyy-MM-dd"}
**Input:** `{order_id: "O123"}`
**Output:** `{order_id: "O123", created: "2025-10-11"}`
GENERATE_FUTURE_DATE - Calculate future date with offset
{"command": "GENERATE_FUTURE_DATE", "destination_field": "due_date",
"pattern": "yyyy-MM-dd", "step_if_date": "days", "value_if_date": "7"}
**Input:** `{order_id: "O123"}`
**Output:** `{order_id: "O123", due_date: "2025-10-18"}` _(7 days from current date)_
GENERATE_PAST_DATE - Calculate past date with offset
{"command": "GENERATE_PAST_DATE", "destination_field": "week_ago",
"pattern": "yyyy-MM-dd", "step_if_date": "weeks", "value_if_date": "1"}
**Input:** `{report_id: "R001"}`
**Output:** `{report_id: "R001", week_ago: "2025-10-04"}` _(1 week ago from current date)_
PARSE_UNIX_TIME_IN_SECONDS_TO_STRING - Convert Unix timestamp to formatted date
{"command": "PARSE_UNIX_TIME_IN_SECONDS_TO_STRING", "from_fields": "timestamp",
"destination_field": "created_date", "pattern": "yyyy-MM-dd HH:mm"}
**Input:** `{timestamp: 1728648000}`
**Output:** `{timestamp: 1728648000, created_date: "2024-10-11 12:00"}`
GENERATE_UNIX_TIME_IN_SECONDS - Generate current Unix timestamp
{"command": "GENERATE_UNIX_TIME_IN_SECONDS", "destination_field": "timestamp"}
**Input:** `{event_id: "E001"}`
**Output:** `{event_id: "E001", timestamp: 1728648000}`
TRANSFORM_DATE_FORMATS - Convert date from one format to another
{"command": "TRANSFORM_DATE_FORMATS", "from_fields": "date_us",
"destination_field": "date_iso", "pattern": "MM/dd/yyyy[:]yyyy-MM-dd"}
**Input:** `{date_us: "01/15/2025"}`
**Output:** `{date_us: "01/15/2025", date_iso: "2025-01-15"}`
Important: use [:] (not a bare :) as the separator between input and output formats. This is essential when the date formats themselves contain colons (time formats).
✅ Practical Example with Time Formats:
{"command": "TRANSFORM_DATE_FORMATS", "from_fields": "datetime_us",
"destination_field": "datetime_iso", "pattern": "MM/dd/yyyy HH:mm:ss[:]yyyy-MM-dd HH:mm:ss"}
**Input:** `{datetime_us: "01/15/2025 14:30:45"}`
**Output:** `{datetime_us: "01/15/2025 14:30:45", datetime_iso: "2025-01-15 14:30:45"}`
Why [:] matters:
- With a bare :, the pattern MM/dd/yyyy HH:mm:ss:yyyy-MM-dd HH:mm:ss would split at the FIRST colon, breaking inside HH:mm:ss ❌
- With [:], the pattern MM/dd/yyyy HH:mm:ss[:]yyyy-MM-dd HH:mm:ss splits only at the delimiter ✅
The [:] delimiter prevents ambiguity when date format patterns themselves contain colons.
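A two-line sketch shows why splitting on the literal [:] token is unambiguous (this demonstrates the separator logic only, not the engine's full date parser):

```python
# Splitting on the literal "[:]" token leaves the colons inside the
# time formats untouched.
pattern = "MM/dd/yyyy HH:mm:ss[:]yyyy-MM-dd HH:mm:ss"
input_fmt, output_fmt = pattern.split("[:]")
```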
GENERATE_PL_MONTH - Generate Polish month name
{"command": "GENERATE_PL_MONTH", "from_fields": "month_number", "destination_field": "month_pl"}
**Input:** `{month_number: 10}`
**Output:** `{month_number: 10, month_pl: "październik"}`
Date Generation Time Units (step_if_date)
For GENERATE_FUTURE_DATE and GENERATE_PAST_DATE commands, the step_if_date property defines the time unit:
| step_if_date | Description | Example Usage |
|---|---|---|
| seconds | Add/subtract seconds | Short-term scheduling |
| minutes | Add/subtract minutes | Appointment times |
| hours | Add/subtract hours | Same-day deadlines |
| days | Add/subtract days | Most common use case |
| weeks | Add/subtract weeks | Weekly schedules |
| months | Add/subtract months | Monthly billing |
| years | Add/subtract years | Annual events |
Examples with different time units:
// Add 30 minutes to current time
{"command": "GENERATE_FUTURE_DATE", "destination_field": "meeting_time",
"pattern": "yyyy-MM-dd HH:mm", "step_if_date": "minutes", "value_if_date": "30"}
// Input: {event: "Meeting"} (current time: 2025-10-11 14:00)
// Output: {event: "Meeting", meeting_time: "2025-10-11 14:30"}
// Add 3 hours
{"command": "GENERATE_FUTURE_DATE", "destination_field": "deadline",
"pattern": "yyyy-MM-dd HH:mm:ss", "step_if_date": "hours", "value_if_date": "3"}
// Input: {task: "Review"} (current time: 2025-10-11 09:00:00)
// Output: {task: "Review", deadline: "2025-10-11 12:00:00"}
// Subtract 45 seconds (PAST)
{"command": "GENERATE_PAST_DATE", "destination_field": "recent_time",
"pattern": "HH:mm:ss", "step_if_date": "seconds", "value_if_date": "45"}
// Input: {log: "Entry"} (current time: 14:30:00)
// Output: {log: "Entry", recent_time: "14:29:15"}
// Add 2 weeks
{"command": "GENERATE_FUTURE_DATE", "destination_field": "follow_up",
"pattern": "yyyy-MM-dd", "step_if_date": "weeks", "value_if_date": "2"}
// Input: {ticket: "Support-123"}
// Output: {ticket: "Support-123", follow_up: "2025-10-25"}
// Subtract 3 months (quarterly report)
{"command": "GENERATE_PAST_DATE", "destination_field": "period_start",
"pattern": "yyyy-MM-dd", "step_if_date": "months", "value_if_date": "3"}
// Input: {report: "Q4"}
// Output: {report: "Q4", period_start: "2025-07-11"}
// Add 1 year (expiration date)
{"command": "GENERATE_FUTURE_DATE", "destination_field": "expires",
"pattern": "yyyy-MM-dd", "step_if_date": "years", "value_if_date": "1"}
// Input: {license: "L-001"}
// Output: {license: "L-001", expires: "2026-10-11"}
Important Notes:
- `value_if_date` must be a string containing the number value
- Both `step_if_date` and `value_if_date` are required for GENERATE_FUTURE_DATE and GENERATE_PAST_DATE
- Use an appropriate `pattern` format to include time components when working with hours/minutes/seconds
- Common date-time patterns: `yyyy-MM-dd` (date only), `HH:mm` (time only, 24-hour), `yyyy-MM-dd HH:mm` (date with time), `yyyy-MM-dd HH:mm:ss` (date with seconds), `dd.MM.yyyy HH:mm` (European format with time)
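The offset arithmetic can be sketched with the standard library. `timedelta` covers seconds through weeks; months and years need calendar arithmetic and are omitted here. The function name is illustrative, mirroring `step_if_date` / `value_if_date`:

```python
# Sketch of GENERATE_FUTURE_DATE / GENERATE_PAST_DATE offsets.
from datetime import datetime, timedelta

def offset_date(base, step_if_date, value_if_date, future=True):
    # step_if_date maps directly onto a timedelta keyword (seconds, minutes,
    # hours, days, weeks); value_if_date arrives as a string, per the rules.
    delta = timedelta(**{step_if_date: int(value_if_date)})
    return base + delta if future else base - delta

base = datetime(2025, 10, 11, 14, 0, 0)
due = offset_date(base, "days", "7")                      # GENERATE_FUTURE_DATE
meeting = offset_date(base, "minutes", "30")
week_ago = offset_date(base, "weeks", "1", future=False)  # GENERATE_PAST_DATE
```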
6. Context Operations
EXTRACT_FROM_FIRST_SIDE_OBJ - Copy field from first side object to all localObjs
Extremely useful for propagating values between schemas. Commonly used to retrieve user input from the single-line multipanel that started the gubus execution.
{"command": "EXTRACT_FROM_FIRST_SIDE_OBJ", "from_fields": "tax_rate",
"side_ref_if_not_local": "somePreviousSchemaName", "destination_field": "tax_rate"}
**Input (localObjs):** `[{order_id: "O001"}, {order_id: "O002"}]`
**Side objects (somePreviousSchemaName):** `[{tax_rate: 0.2, currency: "USD"}]`
**Output:** `[{order_id: "O001", tax_rate: 0.2}, {order_id: "O002", tax_rate: 0.2}]`
Note: Extracts field from first side object and adds to ALL local objects.
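The broadcast described in the note can be sketched as follows (assumed semantics, inferred from the example above):

```python
# Assumed EXTRACT_FROM_FIRST_SIDE_OBJ behavior: take the field from the
# FIRST side object and copy it onto every local object.
def extract_from_first_side_obj(local_objs, side_objs, field, destination_field):
    value = side_objs[0][field]
    for o in local_objs:
        o[destination_field] = value
    return local_objs

orders = extract_from_first_side_obj(
    [{"order_id": "O001"}, {"order_id": "O002"}],
    [{"tax_rate": 0.2, "currency": "USD"}],
    "tax_rate", "tax_rate")
```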
7. Message Operations
FROM_MAIN_MESSAGE - Copy mainMessage to field
{"command": "FROM_MAIN_MESSAGE", "destination_field": "schema_message"}
**Input:** `{order_id: "O123"}`
**mainMessage context:** `"Processing completed successfully"`
**Output:** `{order_id: "O123", schema_message: "Processing completed successfully"}`
FROM_LOCAL_MESSAGE - Copy localMessage to field
{"command": "FROM_LOCAL_MESSAGE", "destination_field": "local_msg"}
**Input:** `{order_id: "O123"}`
**localMessage context:** `"Validation passed"`
**Output:** `{order_id: "O123", local_msg: "Validation passed"}`
SET_MAIN_MESSAGE - Set mainMessage from field value
{"command": "SET_MAIN_MESSAGE", "from_fields": "notification"}
**Input:** `{notification: "Order processed"}`
**Output:** `{notification: "Order processed"}` _(mainMessage is set to "Order processed")_
JOIN_MAIN_MESSAGE_NEW_LINE - Append to mainMessage with newline
{"command": "JOIN_MAIN_MESSAGE_NEW_LINE", "from_fields": "status_update"}
**Input:** `{status_update: "Step 2 completed"}`
**mainMessage before:** `"Step 1 completed"`
**Output:** `{status_update: "Step 2 completed"}`
_(mainMessage becomes "Step 1 completed\nStep 2 completed")_
JOIN_MAIN_MESSAGE_EMPTY_LINE - Append to mainMessage with double newline
{"command": "JOIN_MAIN_MESSAGE_EMPTY_LINE", "from_fields": "section_message"}
**Input:** `{section_message: "Section B results"}`
**mainMessage before:** `"Section A results"`
**Output:** `{section_message: "Section B results"}`
_(mainMessage becomes "Section A results\n\nSection B results")_
8. Field Management
KILL_FIELDS - Remove specified fields. (Killing is never the best solution.)
{"command": "KILL_FIELDS", "from_fields": "temp_calc,debug_flag"}
**Input:** `{id: "A1", temp_calc: 123, debug_flag: true, total: 100}`
**Output:** `{id: "A1", total: 100}`
KILL_ALL_OTHER_FIELDS - Keep only specified fields, delete all others. (Killing is never the best solution.)
{"command": "KILL_ALL_OTHER_FIELDS", "from_fields": "order_id,customer,total"}
**Input:** `{order_id: "A1", customer: "John", total: 100, temp: "x", debug: true}`
**Output:** `{order_id: "A1", customer: "John", total: 100}`
RENAME_FIELD - Rename field
Not recommended: prefer not to mutate objects; instead create new fields with the COPY or OVERWRITE commands.
{"command": "RENAME_FIELD", "from_fields": "old_name", "destination_field": "new_name"}
**Input:** `{old_name: "Product A", price: 100}`
**Output:** `{new_name: "Product A", price: 100}`
9. Special Operations
IN_RULE_CUSTOM_INCREMENT - Database-stored counter (persists across executions)
A batch-friendly incrementor: when multiple objects are processed in one schema execution, each object receives the next incremented value.
{"command": "IN_RULE_CUSTOM_INCREMENT", "from_fields": "counter_name",
"destination_field": "invoice_number"}
**Input:** `{order_id: "O123", counter_name: "invoice_counter"}`
**Output:** `{order_id: "O123", counter_name: "invoice_counter",
invoice_number: 1001}` _(next execution returns 1002, 1003, etc.)_
Note: Counter value is stored in database and increments with each execution. Used for invoice numbers, order IDs, etc.
STD_GUBUS_INCREMENT - Sequential ID generator (resets each execution)
A batch-friendly incrementor: when multiple objects are processed in one schema execution, each object receives the next incremented value.
{"command": "STD_GUBUS_INCREMENT", "destination_field": "line_number"}
**Input (multiple objects):** `[{item: "A"}, {item: "B"}, {item: "C"}]`
**Output:** `[{item: "A", line_number: 1}, {item: "B", line_number: 2},
{item: "C", line_number: 3}]`
Note: Unlike IN_RULE_CUSTOM_INCREMENT, this counter starts fresh on each execution and increments per object. Used for line numbers, positional IDs, etc.
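The per-execution counter can be sketched in a few lines (assumed semantics, matching the multi-object example above):

```python
# Assumed STD_GUBUS_INCREMENT behavior: an in-memory counter that starts
# fresh each execution and hands every object the next value.
def std_gubus_increment(local_objs, destination_field):
    for i, o in enumerate(local_objs, start=1):
        o[destination_field] = i
    return local_objs

lines = std_gubus_increment([{"item": "A"}, {"item": "B"}, {"item": "C"}],
                            "line_number")
```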
SchemaContext: ignore mistakes, log mistakes, or falsify and break execution
The transform stage is not critical: by default, false transforms happen silently. You MAY enable logging or abort execution for essential operations by setting log_false_transformation=true or break_if_false_transform=true.
Summary
TRANSFORM provides 63+ field operations in 9 categories:
- Basic: COPY, OVERWRITE, FORMAT
- Math: PLUS, MINUS, MULTIPLY, DIVIDE, ROUND
- String: TO_UPPER, TO_LOWER, SUBSTRING
- Array: ARR_SUM, JOIN_ARRAY, SPLIT
- Date: GENERATE_DATE_TODAY, TRANSFORM_DATE_FORMATS
- Context: EXTRACT_FROM_FIRST_SIDE_OBJ
- Message: SET_MAIN_MESSAGE, FROM_MAIN_MESSAGE
- Field Management: KILL_FIELDS, RENAME_FIELD
- Special: IN_RULE_CUSTOM_INCREMENT, STD_GUBUS_INCREMENT