While adversarial attacks can effectively deceive deep neural networks (DNNs), their real-world applicability is often limited by complex and conspicuous patterns that reveal the attack intent to human observers. To overcome this limitation, we propose UYE, a novel camouflage framework designed to simultaneously mislead DNNs and evade human perception. UYE incorporates two key components: an attention refiner, which leverages a pre-trained vision encoder to optimize adversarial patterns for robust attacks across diverse environments, and a perception evaluator, trained on CAMOCritic—a human preference dataset curated using tailored prompts from human-aligned large multimodal models—to ensure natural and unobtrusive camouflage generation. Extensive experiments demonstrate that UYE outperforms state-of-the-art methods in balancing human stealth against model deception while remaining effective in real-world scenarios.