
AI systems could think in ways beyond human understanding
AI researchers are tracing the “chains of thought” (CoT) that models produce while solving complex tasks, hoping to spot safety risks before they cause harm. However, these reasoning steps aren’t always visible or faithful to what the model is actually doing: a model may obscure harmful decisions or reason in ways humans can’t follow. Experts therefore stress the need for continuous monitoring and better transparency methods to keep AI systems aligned and safe.