How an AI Tool Pushed Client Strategy Notes to GitHub [Analyst Commentary]

On March 28, 2026 at around 9pm UTC, CybelAngel detected a public GitHub repository containing the full contents of an Obsidian knowledge vault belonging to a senior AI advisory consultant at a French tech consultancy. The repository was created and last updated that same day.

The vault contained 880 markdown files.  The exposure was flagged across dozens of CybelAngel clients; six were issued formal reports due to sensitive data identified within the repository. It was confirmed secure approximately 48 hours after first appearing online, thanks to CybelAngel’s remediation efforts. 

What happened?

The repository functioned as a working business intelligence archive for the consultant’s AI advisory practice. Its contents included:

  • Internal meeting notes with named enterprise clients
  • Business development and partnership records
  • Candidate notes and operational observations
  • Named internal contacts across client organisations
  • Strategic discussion notes from executive-level meetings

One file stood out immediately: meeting notes dated January 28, 2026, from a session between the consultancy and the Group Chief Data & Analytics Officer of one affected organisation. The notes covered agentic commerce strategy, a shift from SEO to AI, completed conversational projects, the company’s 2027 vision, barriers to data openness, 2026 investment priorities in data and infrastructure, and named internal contacts.

This was not incidental or low-sensitivity data. It was the kind of strategic detail that organisations protect at board level.

How did a private knowledge vault end up on GitHub?

This is the question our analysts prioritised, because the answer is not a typical misconfiguration story.

The repository contained a single file that tells the whole story: CLAUDE.md.

This is a configuration file generated automatically by Claude Code, Anthropic’s agentic AI coding assistant. When Claude Code is pointed at a project directory, it produces this file to document the structure and help orient itself across sessions. The CLAUDE.md in this repository stated explicitly that the vault was not a git repository, and that files sync via iCloud. Yet here it was, published wholesale to a public GitHub repository.

Three additional details confirm the picture:

  • No .gitignore file was present, meaning no one planned this as a managed repository
  • No Obsidian git plugin was installed — there was no native mechanism for the vault to sync to GitHub
  • The README.md contained a single line — the name of the vault — typical of a quick, unplanned push rather than a deliberate project

Our assessment is that Claude Code was used to work on this vault — likely to analyse, organise, or search across its content — and that the CLAUDE.md was generated during that session. The most probable scenario is that either Claude Code executed a git init and git push as part of fulfilling a user instruction, or the user did so manually after working with Claude Code. This cannot be confirmed with certainty. What is clear is that the push was done rapidly and without the configuration safeguards — .gitignore, access controls, repository visibility settings, that a deliberate publishing decision would have involved.

The CLAUDE.md file’s own text acknowledging “this is NOT a git repository” makes this one of the more striking self-contradictions our analysts have encountered in an exposure of this kind.

It is worth noting that this incident landed against a backdrop of wider questions about Claude Code’s production behaviour. On March 31, 2026 — just days after this exposure,Anthropic disclosed that version 2.1.88 of its Claude Code npm package had accidentally shipped 512,000 lines of internal source code due to a packaging error. Two separate incidents; one shared thread: agentic AI tools operating without adequate guardrails create exposure vectors that traditional security controls are not built to catch.

Why this matters: Claude Code and the agentic risk layer

Agentic AI tools, tools that don’t just answer questions but take actions, introduce a fundamentally different risk profile than conventional software.

Claude Code and tools like it can read files, write files, execute commands, and interact with version control systems. When a user points one of these tools at a directory containing sensitive data, the tool may take actions the user did not explicitly anticipate.

In this case, the gap was not negligence in any traditional sense. The consultant was using a sophisticated productivity tool as intended. The problem was the absence of any governance framework for what that tool was permitted to do with access to client data.

GitGuardian’s 2026 State of Secrets Sprawl report found that AI-assisted commits leak sensitive data at more than double the baseline rate, and that the fastest-growing category of leaked secrets involves AI service credentials specifically. The tooling has accelerated. The governance has not kept pace.

The broader pattern: third-party AI risk is your risk

This exposure is not an isolated event. It sits within a well-established pattern of client data exposure through third-party and consultant environments — a pattern that is now accelerating as AI tools become embedded in professional workflows.

What this case adds is the AI layer. The consultant was not a bad actor. He was not careless in any conventional sense. He was using tools that are becoming standard practice across consulting, legal, finance, and technology services. The organisations whose data was in that vault had no visibility into those tools, no policies governing their use, and no monitoring of that third-party environment.

The standard third-party risk assessment does not cover this surface. It needs to.

Wrapping up

While the vault was secured within 48 hours, and now the data is no longer publicly accessible, meeting notes from executive-level conversations about strategy, investment priorities, and named internal contacts were available on the public internet, indexed and searchable, for two days.

We cannot know what was read, copied, or archived before the repository came down. That uncertainty is the operational reality of any exposure that occurs before detection.

The lesson here is not that AI tools are dangerous. It is that any tool with file system access and the ability to take autonomous actions is a potential exposure vector — and that the perimeter of your organisation now includes every environment where your data lives, including the laptop of every consultant who attended your last executive briefing.

Beyond detection, CybelAngel helps organisations understand what their external perimeter actually looks like,  including the environments of the third parties who handle their data. If a consultant, vendor, or partner exposes your information, we find it. If your own infrastructure is drifting into public view, we flag it before anyone else does.

About the author