Accountability & Provenance
Humans are accountable for the code they commit. This remains true regardless of how the code was generated. "The AI did it" is not a defense.
Accountability by role
Individual contributor
Responsible for:
- Quality of code they commit (regardless of origin)
- Appropriate use of AI tools per policy
- Validating AI output before committing
- Understanding code they submit
Code reviewer
Responsible for:
- Reviewing to established standards
- Catching issues regardless of origin
- Raising concerns about quality or patterns
Team lead
Responsible for:
- Team practices around AI use
- Ensuring appropriate training
- Addressing patterns of issues
Engineering leadership
Responsible for:
- Organizational AI policy
- Tool decisions and procurement
- Risk acceptance at org level
When things go wrong
Production incident from AI code
- Treat like any incident: resolution first
- Post-mortem includes AI involvement as context
- Process improvements may involve AI practices
Don't: Blame the AI, create special "AI incident" categories, or exempt individuals from accountability.
Security vulnerability from AI code
- Standard security response
- Document AI involvement for learning
- Review: would our process have caught this?
Accountability flows to the developer who committed the code and the reviewers who approved it, NOT to the AI tool.
Code provenance
Where does AI-generated code come from? Models train on vast amounts of public code under various licenses. Output is statistically influenced by training data but typically isn't direct copying. Legal uncertainty remains: courts haven't fully resolved how copyright applies to AI output.
What we know
- Training legality: Ongoing lawsuits are testing fair use; no resolution yet
- Output ownership: Person/org prompting is treated as author practically, but not legally settled
- Verbatim reproduction: If AI outputs exact copies, original copyright likely applies
Risk management
Low-risk scenarios:
- Boilerplate code anyone would write the same way
- Internal tools with no external distribution
- Code you heavily modify after generation
Higher-risk scenarios:
- Distributing generated code in products
- Open-source contributions with copyleft licenses
- Unique or distinctive algorithms
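To make the low/high-risk distinction above concrete, a triage helper could classify a use of generated code from a few yes/no facts. This is a minimal sketch for illustration only: the factors and their weighting are assumptions, not legal guidance.

```python
# Sketch: rough provenance-risk triage for generated code.
# Factors mirror the scenario lists above; thresholds are illustrative
# assumptions, not legal advice.

def provenance_risk(distributed: bool, copyleft_context: bool,
                    distinctive_algorithm: bool, heavily_modified: bool) -> str:
    """Return 'higher' or 'lower' based on the scenario factors above."""
    if copyleft_context or distinctive_algorithm:
        return "higher"          # copyleft contributions, unique algorithms
    if distributed and not heavily_modified:
        return "higher"          # shipping generated code mostly as-is
    return "lower"               # internal use or heavily modified code

# Internal tool, heavily rewritten after generation:
print(provenance_risk(distributed=False, copyleft_context=False,
                      distinctive_algorithm=False, heavily_modified=True))
# -> lower
```

In practice a real assessment involves legal review; the point of the sketch is only that the scenario lists can be applied as explicit, reviewable criteria rather than ad-hoc judgment.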
Tracking AI involvement
What to track
- Which files/commits involved AI assistance
- Which tool was used
- Human review performed
How to track
- Git commit conventions: Tags in commit messages
- Code review annotations: Note AI involvement in review
- Tooling: Some tools log AI interactions
Why track
- Future legal compliance may require it
- Incident response if issues arise
- Regulatory compliance in some industries
Edge cases
Automated AI changes (CI/CD, bots): The person who configured the automation owns the output. Don't automate consequential changes without human approval.
Multi-person AI sessions: The committer takes responsibility, and should review and understand the code before committing.
AI-assisted review: The human reviewer is still accountable; AI findings must be human-validated.
Policy checklist
- Scope: What activities are covered
- Roles: Who has what accountability
- Requirements: What must happen before commit/merge
- Documentation: What must be recorded
- Exceptions: How to handle special cases
- Enforcement: What happens when policy is violated
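A hedged sketch of one way to use the checklist above: keep a draft policy as structured data so missing sections are machine-checkable. The section names mirror the bullets; the schema itself is illustrative, not a standard.

```python
# Sketch: represent the policy checklist as data so gaps in a draft
# policy are detectable. Section names mirror the checklist above;
# this is an illustrative schema, not a standard.

REQUIRED_SECTIONS = ["scope", "roles", "requirements",
                     "documentation", "exceptions", "enforcement"]

def missing_sections(policy: dict) -> list[str]:
    """Return checklist sections that are absent or empty in a draft policy."""
    return [s for s in REQUIRED_SECTIONS if not policy.get(s)]

draft = {
    "scope": "AI assistance for code, tests, and docs",
    "roles": {"developer": "owns committed code",
              "reviewer": "owns approvals"},
    "requirements": ["human review before merge"],
}
print(missing_sections(draft))
# -> ['documentation', 'exceptions', 'enforcement']
```

The same structure could back a review step that blocks publishing a policy until every checklist section is filled in.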
Resources
Essential
- Your job is to deliver code you have proven to work
- Accountability for AI-generated code