
A beautiful onboarding automation exists:
  1. HRIS webhook triggers when someone joins
  2. Script creates Okta account
  3. Script adds user to appropriate Okta groups
  4. Okta provisions access to integrated apps
Life is good. Until reality sets in:
  • Google Workspace uses different group naming conventions than Okta
  • AWS doesn’t integrate with Okta groups directly (needs SSO + permission sets)
  • GitLab has its own permission model (Guest/Reporter/Developer/Maintainer)
  • Slack channels can’t be managed via Okta
  • On-prem systems don’t talk to Okta at all
Now there are:
  • 8 different scripts, each handling one system
  • 3 different authentication methods
  • 5 different permission models
  • No consistent way to track what happened
  • Manual fallback for systems that can’t be automated
This is cross-system access orchestration hell.

The Problem With Multiple Systems

Every system thinks about access differently:
  • Okta: Groups. Users are in groups. Groups are assigned to apps.
  • Google Workspace: Groups + organizational units. Groups for email distribution, OUs for organizational hierarchy. Sometimes groups control app access; sometimes OUs do.
  • AWS: IAM roles + permission sets + SSO integration. Users don’t directly have permissions. They assume roles, roles have permission sets, and SSO maps Okta groups to permission sets.
  • GitLab: Project-based permissions. Users can be Guest/Reporter/Developer/Maintainer at the project or group level. Permissions cascade down the hierarchy.
  • Slack: Channel-based access. Users are invited to channels, which can be public, private, or shared. Some channels are managed via SSO groups; some are manual.
  • Jira: Project roles. Users have roles within projects. Roles grant permissions and can be granted to groups or individuals.
  • Salesforce: Permission sets + profiles. Profiles are legacy (assigned to users). Permission sets are modern (assigned to users, additive). Permission set groups combine multiple permission sets.
  • GitHub: Organization-based. Users are in organizations, organizations have teams, and teams have permissions on repositories: None/Read/Triage/Write/Maintain/Admin.
Eight different permission models. Mapping between them. Keeping them in sync.

Why This Is Hard

1. No common permission model exists

“Read access” means different things in different systems:
  • GitLab: Reporter role (can clone, can’t push)
  • GitHub: Read permission (can clone, can’t push, can open issues)
  • AWS S3: s3:GetObject permission (can download files)
  • Google Drive: “Viewer” role (can view, can’t edit or share)
  • Jira: “Browse” permission (can view issues in a project)
When someone is a “Senior Engineer” who should have “read access to production systems,” that translates to five different things in five different systems.
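An orchestration layer makes this translation explicit rather than implicit. Here is a minimal sketch, assuming a hypothetical `ABSTRACT_TO_SYSTEM` table; the permission names and grant shapes are illustrative, not any vendor’s actual API:

```python
# Hypothetical translation table: one abstract permission fans out to
# system-specific grants. Every name here is illustrative.
ABSTRACT_TO_SYSTEM = {
    "read:production": {
        "gitlab": {"role": "reporter"},
        "github": {"permission": "read"},
        "aws": {"actions": ["s3:GetObject"]},
        "gdrive": {"role": "viewer"},
        "jira": {"permission": "browse"},
    },
}

def translate(abstract_permission: str, system: str) -> dict:
    """Resolve an abstract permission to one system's native grant."""
    try:
        return ABSTRACT_TO_SYSTEM[abstract_permission][system]
    except KeyError:
        raise ValueError(
            f"No mapping for {abstract_permission!r} in {system!r}")
```

The value of the table is that “read access to production” is defined in exactly one place, instead of being re-derived inside eight different scripts.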

2. Different group structures create mapping complexity

  • Okta groups look like: engineering-team
  • Google Workspace groups look like: engineering@company.com
  • AWS permission sets are named: EngineeringTeamReadOnly
  • GitLab groups are nested: engineering/platform and engineering/frontend
Keeping these in sync is non-trivial. If engineering-team changes in Okta, which Google group, which AWS permission set, and which GitLab groups need to update?
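When naming conventions are mechanical, part of this mapping can be derived; the rest needs an explicit override table. A sketch, assuming conventions like the ones above (the `GOOGLE_OVERRIDES` table and the `company.com` domain are assumptions for illustration):

```python
# Explicit exceptions for names that don't follow the convention
GOOGLE_OVERRIDES = {"engineering-team": "engineering@company.com"}

def okta_to_google(okta_group: str, domain: str = "company.com") -> str:
    """Derive the Google group address, falling back to a convention."""
    return GOOGLE_OVERRIDES.get(okta_group, f"{okta_group}@{domain}")

def okta_to_aws_permission_set(okta_group: str, suffix: str = "ReadOnly") -> str:
    """engineering-team -> EngineeringTeamReadOnly (assumed convention)."""
    camel = "".join(part.capitalize() for part in okta_group.split("-"))
    return f"{camel}{suffix}"
```

In practice the override table grows, which is exactly why the mapping belongs in one reviewed place rather than scattered across scripts.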

3. Nested and cascading permissions differ by system

Some systems have flat permissions. Some have hierarchical. In GitLab, Developer access at the engineering group level means automatic Developer access in all subgroups (engineering/platform, engineering/frontend, etc.). In Okta, membership in engineering-team doesn’t mean automatic membership in engineering-platform-team. Same conceptual hierarchy. Different technical models.

4. Different API patterns require different handling

  • Okta API: RESTful, well-documented, rate-limited to 10,000 requests/minute
  • Google Workspace API: RESTful, multiple quota limits (per user, per project), eventual consistency
  • AWS API: Dozens of services, each with its own API; permissions spread across IAM/SSO/Organizations
  • GitLab API: RESTful, rate-limited to 300 requests/minute per user
  • Slack API: RESTful, rate-limited (Tier 2, Tier 3, Tier 4 methods), requires separate OAuth scopes per action
Each API has quirks. Each has different error handling. Each has different rate limits.
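One consequence: each adapter needs its own rate budget. A minimal fixed-window limiter sketch (real adapters should also honor `Retry-After` and rate-limit reset headers from each API; the per-system budgets below restate the limits listed above):

```python
import time

class RateLimiter:
    """Minimal fixed-window limiter, one instance per downstream system."""

    def __init__(self, max_calls: int, per_seconds: float):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls: list[float] = []  # timestamps of recent calls

    def acquire(self) -> None:
        """Block until a call is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        self.calls = [t for t in self.calls if now - t < self.per_seconds]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window
            time.sleep(self.per_seconds - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Different budgets per system, per the limits above
okta_limiter = RateLimiter(10_000, 60)   # 10,000 requests/minute
gitlab_limiter = RateLimiter(300, 60)    # 300 requests/minute per user
```

A single global limiter would throttle Okta to GitLab’s budget; per-system limiters let each adapter run at its own ceiling.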

5. Order dependencies exist

Provisioning can’t happen in arbitrary order. The Okta account must exist before adding the user to Okta groups. Okta groups must be assigned to AWS SSO before AWS permission sets apply. The GitLab account must exist before adding the user to GitLab groups. Google groups must exist before adding users to them. If the user is created in Google first, then Okta, Google provisioning might fail because the Okta account doesn’t exist yet and Google is configured to sync from Okta. The dependency graph must be modeled and provisioning must happen in the right order.
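This is a textbook topological-sort problem, and Python’s standard library handles it directly. A sketch using `graphlib` (the operation names are illustrative labels for the dependencies described above):

```python
from graphlib import TopologicalSorter

# Each key maps to the operations that must complete before it
deps = {
    "okta:create_account": [],
    "okta:add_groups": ["okta:create_account"],
    "aws:assign_permission_sets": ["okta:add_groups"],
    "gitlab:create_account": ["okta:create_account"],
    "gitlab:add_groups": ["gitlab:create_account"],
    "google:add_groups": ["okta:create_account"],
}

# static_order() yields prerequisites before the operations that need them
order = list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` also raises `CycleError` if the dependency graph is circular, which catches misconfigured policies before anything is provisioned.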

6. Partial failures create messy state

Provisioning access to 8 systems. Six succeed. Two fail (API timeout, rate limit, service outage). Now what? Roll back the six that succeeded? (Not always possible. Some systems don’t support rollback.) Retry the two that failed? (Idempotency isn’t guaranteed. Retrying might create duplicates.) Manual cleanup? (Defeats the purpose of automation.) Leave it broken and hope someone notices? (This is what usually happens.)
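The minimum viable answer is a per-operation ledger: record every outcome so a re-run only touches operations that have not already succeeded. A sketch (the class and operation IDs are illustrative, not a real product API):

```python
from enum import Enum

class OpState(str, Enum):
    SUCCEEDED = "succeeded"
    FAILED = "failed"

class Ledger:
    """Track per-operation outcomes so re-runs skip completed work."""

    def __init__(self):
        self.state: dict[str, OpState] = {}

    def should_run(self, op_id: str) -> bool:
        # Run anything that hasn't succeeded yet (new or previously failed)
        return self.state.get(op_id) != OpState.SUCCEEDED

    def record(self, op_id: str, ok: bool) -> None:
        self.state[op_id] = OpState.SUCCEEDED if ok else OpState.FAILED

    def failed(self) -> list[str]:
        return [op for op, s in self.state.items() if s is OpState.FAILED]
```

This doesn’t make the downstream APIs idempotent, but it keeps the orchestrator from re-executing the six operations that already succeeded while it retries the two that failed.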

7. Drift happens constantly

Access is provisioned via automation. Someone manually adds access directly in a system (Slack admin invites someone to a channel). The automation doesn’t know about manual changes. Result: Drift. The source of truth (the automation) disagrees with reality (what’s actually configured). How is this detected? How is it reconciled?
Cross-system access drift creates compliance gaps. When auditors compare access across systems, inconsistencies indicate control weaknesses. SOC 2 CC6.1 requires that access controls be consistently applied—manual workarounds in individual systems undermine this. Organizations need evidence that access is synchronized across all systems, not just documented in one.

The Naive Approach (And Why It Fails)

Approach 1: Big bash script
#!/bin/bash
# Create user in all systems

# Create Okta account
curl -X POST https://api.okta.com/users ...

# Add to Okta groups
curl -X PUT https://api.okta.com/groups/$GROUP_ID/users/$USER_ID ...

# Create Google Workspace account
curl -X POST https://admin.googleapis.com/users ...

# Add to Google groups
curl -X POST https://admin.googleapis.com/groups/$GROUP/members ...

# Create AWS SSO user
aws identitystore create-user ...

# Add to AWS permission sets
aws sso-admin create-account-assignment ...

# ... 50 more curl commands
Problems: No error handling (if line 37 fails, lines 1-36 have already executed). No dependency ordering (the script might try to add a user to a group before the user exists). No retry logic (transient failures kill the whole process). No rollback (partial execution can’t be undone). No drift detection (the script only runs when triggered and never validates current state).

Approach 2: Separate scripts per system
scripts/
  okta/provision.sh
  google/provision.sh
  aws/provision.sh
  gitlab/provision.sh
  slack/provision.sh
Problems: Who calls them? In what order? How does data pass between scripts? How are success and failure tracked? How is logic duplication avoided (parsing HRIS data, determining what access someone needs)?

Approach 3: Vendor’s integration platform

Use Okta’s integration gallery, or Workato, or Zapier, to connect systems. Problems: Each integration is a black box (you can’t see what it’s doing). Limited customization (pre-built workflows might not match requirements). Cost (per-integration pricing adds up). Custom scripts are still needed for systems without integrations. There is no unified view of what happened across all systems.

What Actually Works: An Orchestration Layer

Companies that have solved this build an orchestration layer—a purpose-built system that:
  1. Models access policies (who should have what)
  2. Translates policies to system-specific configurations (Okta groups, AWS permission sets, GitLab roles)
  3. Provisions in the correct order (handles dependencies)
  4. Tracks execution state (what succeeded, what failed, what’s pending)
  5. Retries failures intelligently (exponential backoff, partial retry)
  6. Detects drift (compares policy to reality)
  7. Provides audit logs (every change is logged with justification)
Here’s what this looks like architecturally:

Layer 1: Policy Definition

policy:
  name: "Senior Engineer Access"
  triggers:
    - department: "Engineering"
      seniority: "Senior"

  provisions:
    # Okta
    - system: "okta"
      action: "add_to_group"
      target: "engineering-team"

    - system: "okta"
      action: "add_to_group"
      target: "engineering-senior"

    # Google Workspace
    - system: "google"
      action: "add_to_group"
      target: "engineering@company.com"

    - system: "google"
      action: "add_to_group"
      target: "senior-engineers@company.com"

    # AWS
    - system: "aws-sso"
      action: "assign_permission_set"
      target: "EngineeringSeniorReadOnly"
      account: "production"

    # GitLab
    - system: "gitlab"
      action: "add_to_group"
      target: "engineering"
      role: "developer"

    # Slack
    - system: "slack"
      action: "invite_to_channel"
      target: "#engineering-general"
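Once parsed (e.g. with PyYAML), a policy like this is plain data, and trigger matching is a few lines. A sketch, assuming the trigger semantics shown above, where a policy applies if all attributes of any one trigger match the user record (the `matches` helper and user fields are illustrative):

```python
# The policy above, reduced to parsed form
policy = {
    "name": "Senior Engineer Access",
    "triggers": [{"department": "Engineering", "seniority": "Senior"}],
    "provisions": [
        {"system": "okta", "action": "add_to_group",
         "target": "engineering-team"},
        {"system": "aws-sso", "action": "assign_permission_set",
         "target": "EngineeringSeniorReadOnly", "account": "production"},
    ],
}

def matches(user: dict, policy: dict) -> bool:
    """A policy applies if any trigger's attributes all match the user."""
    return any(all(user.get(k) == v for k, v in trigger.items())
               for trigger in policy["triggers"])

sarah = {"department": "Engineering", "seniority": "Senior",
         "email": "sarah@company.com"}
```

Keeping triggers declarative means adding a new rule is a policy-file change, not a code change.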

Layer 2: System Adapters

Each system gets an adapter that knows how to interact with that system’s API:
class OktaAdapter:
    def add_to_group(self, user_id, group_id):
        # Handle Okta API specifics:
        # rate limiting, error handling, idempotency
        ...

    def remove_from_group(self, user_id, group_id):
        # Same concerns, inverted
        ...

class GoogleAdapter:
    def add_to_group(self, user_email, group_email):
        # Handle Google API specifics:
        # OAuth2 authentication, quota management, eventual consistency
        ...

class GitLabAdapter:
    def add_to_group(self, user_id, group_id, role):
        # Handle GitLab API specifics:
        # namespace resolution, permission inheritance
        ...

Layer 3: Orchestration Engine

The orchestrator:
  1. Reads the policy (who should have what)
  2. Determines current state (what they actually have)
  3. Calculates delta (what needs to change)
  4. Orders operations (handles dependencies)
  5. Executes actions (calls system adapters)
  6. Tracks state (logs success/failure)
  7. Retries failures (intelligently)
  8. Reports results (what happened, what’s pending)
class Orchestrator:
    def provision_user(self, user, policy):
        # Step 1: Calculate what needs to happen
        changes = self.calculate_changes(user, policy)

        # Step 2: Order operations by dependencies
        ordered_operations = self.order_by_dependencies(changes)

        # Step 3: Execute with state tracking
        for operation in ordered_operations:
            try:
                result = self.execute_with_retry(operation)
                self.log_success(operation, result)
            except Exception as e:
                self.log_failure(operation, e)
                # Decide: continue with remaining operations or abort?

        # Step 4: Return execution summary
        return self.generate_report()
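The `execute_with_retry` step is typically exponential backoff with jitter. A standalone sketch of that pattern (the function shape and parameters are assumptions for illustration, not a specific product’s implementation):

```python
import random
import time

def execute_with_retry(operation, max_attempts: int = 4,
                       base_delay: float = 1.0):
    """Run a zero-arg callable, retrying transient failures with
    exponential backoff plus jitter. Re-raises after the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # 1x, 2x, 4x... base delay, randomized to avoid thundering herds
            delay = base_delay * (2 ** (attempt - 1)) * (0.5 + random.random())
            time.sleep(delay)
```

In a real orchestrator you would retry only errors known to be transient (timeouts, HTTP 429/503) and surface permanent errors (HTTP 403, validation failures) immediately.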

Layer 4: Drift Detection

Continuously compare policy (expected state) to reality (actual state):
def detect_drift():
    for user in all_users:
        expected = calculate_expected_access(user)
        actual = fetch_actual_access(user)

        drift = compare(expected, actual)

        if drift:
            log_drift(user, drift)
            # Optional: auto-remediate or flag for review
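The `compare` step can be as simple as set differences, provided expected and actual access are normalized to the same identifiers first. A sketch, assuming grants are represented as strings like `okta:engineering-team` (an assumed normalization, not a prescribed format):

```python
def compare(expected: set[str], actual: set[str]) -> dict[str, set[str]]:
    """Drift in both directions: grants the policy expects but the system
    lacks, and grants present in the system but absent from the policy."""
    return {
        "missing": expected - actual,      # provision these
        "unexpected": actual - expected,   # review or revoke these
    }
```

The hard part is not the set arithmetic; it is normalizing eight permission models into comparable identifiers, which is exactly what the adapter layer exists to do.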
An orchestration layer with comprehensive logging satisfies SOC 2 CC7.2 (system monitoring) and CC7.3 (change management). Every access change is documented with who, what, when, and why. Drift detection and remediation demonstrate continuous control monitoring rather than point-in-time audits.

Building vs. Buying

Build a custom orchestration layer:
  • Pros: Full customization. No vendor lock-in. Optimized for exact requirements.
  • Cons: 6-12 months of engineering effort. Ongoing maintenance (APIs change, new systems get added). Edge cases to handle (rate limits, partial failures, retries, drift). A UI for policy management and audit logging both need to be built.
Use a purpose-built platform:
  • Pros: Immediate functionality. Pre-built adapters for common systems. Battle-tested error handling. Audit logging out of the box.
  • Cons: Monthly/annual cost. Vendor dependency. May not support exact workflows. Customization is limited to what the vendor allows.
The hybrid approach: Use a platform for common systems (Okta, Google, AWS, Slack, GitLab) and build custom adapters for niche or internal systems. This delivers 80% of the functionality immediately and allows extension for unique needs.

Real-World Example: Onboarding a Sales Engineer

Let’s walk through provisioning Sarah, a new Sales Engineer.

1. Policy says Sarah needs:
  • Okta: sales-team, engineering-team
  • Google: sales@company.com, engineering@company.com
  • Salesforce: Sales Engineer profile
  • GitLab: customer-solutions group (Reporter role)
  • Slack: #sales-team, #engineering-general
  • Jira: Sales projects (Reporter role)
2. Orchestrator determines order:
1. Create Okta account (dependency for everything else)
2. Add to Okta groups (unlocks SSO for other systems)
3. Trigger Okta provisioning to Google/Salesforce (automatic via Okta integrations)
4. Create GitLab account (via SAML SSO, needs Okta)
5. Add to GitLab group (needs GitLab account)
6. Invite to Slack channels (via Slack API, independent)
7. Add to Jira projects (via Jira API, independent)
3. Execution with state tracking:
✓ Okta account created
✓ Added to okta:sales-team
✓ Added to okta:engineering-team
✓ Okta → Google provisioning triggered
✓ Okta → Salesforce provisioning triggered
⚠ GitLab account creation pending (SAML SSO first-login)
  → Will retry after SSO login
✓ Slack invite sent to #sales-team
✓ Slack invite sent to #engineering-general
✗ Jira API timeout
  → Scheduled retry in 5 minutes
4. Retry handling:
[5 minutes later]
✓ Jira projects added (retry succeeded)
✓ GitLab SSO completed (user logged in)
✓ GitLab group membership added

Final state: 100% provisioned
5. Audit log:
2024-11-24 09:00:00 | User: sarah@company.com | Action: Provision | Status: Started
2024-11-24 09:00:03 | Okta account created | User ID: 00u1234567
2024-11-24 09:00:05 | Added to okta:sales-team
2024-11-24 09:00:06 | Added to okta:engineering-team
2024-11-24 09:00:10 | Google Workspace provisioned
2024-11-24 09:00:12 | Salesforce provisioned
2024-11-24 09:00:15 | Slack invites sent
2024-11-24 09:00:20 | Jira API timeout (retrying)
2024-11-24 09:05:22 | Jira provisioned (retry 1)
2024-11-24 09:15:30 | GitLab SSO completed
2024-11-24 09:15:35 | GitLab group membership added
2024-11-24 09:15:35 | Provisioning complete
Sarah has full access within 15 minutes. Every step is logged. Failures were retried automatically.
This audit log demonstrates control effectiveness for SOC 2 CC6.2 (user provisioning) and CC6.3 (role-based access). Auditors can trace every access grant from policy trigger through system-level implementation. The retry handling shows resilience controls, and the complete timeline provides evidence of timely provisioning.

The Bottom Line

Cross-system orchestration is hard because every system has a different permission model, APIs have different quirks and rate limits, operations have dependencies, partial failures require sophisticated retry logic, and drift happens when people make manual changes. The solution isn’t better integration APIs (those help, but don’t solve the problem). The solution is an orchestration layer that models policies abstractly, translates to system-specific configurations, handles dependencies, retries, and drift, and provides auditability. Organizations building this should budget 6-12 months. Organizations buying should ensure the platform supports their systems and allows customization for edge cases. Either way, don’t underestimate this problem. It’s harder than it looks.
Next up: The Policy-First Approach—building access management from first principles instead of retrofitting policy onto existing systems.
Want to see cross-system orchestration in action? Check out Provisionr’s integration architecture →