If you are the person using the LLM tool, a prompt injection attack in a database row that you are allowed to view could trick your LLM tool into taking actions that you don't want it to take, including leaking other data you are allowed to see via writing to other tables or using other MCP tools.
If you are the person using the LLM tool, a prompt injection attack in a database row that you are allowed to view could trick your LLM tool into taking actions that you don't want it to take, including leaking other data you are allowed to see via writing to other tables or using other MCP tools.