Critically Evaluate AI Output: Always treat AI-generated analysis, suggestions, and automated tags as computational support, not as unquestionable evidence. The AI is powerful but not infallible; it can misinterpret syntax, overlook context, or misclassify elements.
Verify Results with Your Expertise: Cross-check all automated outputs against your own linguistic judgment before drawing conclusions or publishing findings. The researcher remains the final arbiter of accuracy and meaning.
Provide Clear Auto-Tagging Instructions: When using the Auto-Tag tool, write specific commands that clearly state what to find (the error), where to look (Error Text or Corrected Text), and which tag to apply. Vague instructions can lead to unreliable results.
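For example, a well-formed command might read (the error type and tag name here are illustrative, not prescribed by the tool): “Find every omitted definite article in the Error Text and apply the tag ART-OMISSION to each occurrence.” A single sentence that names the error, the field to search, and the tag to apply leaves little room for misinterpretation.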
Review the AI’s Plan: Before executing an auto-tagging command, always review the “Auto-Tagging Plan” generated by the AI. This step is a methodological safeguard: it lets you confirm that the AI’s interpretation of your command matches your intent before any changes are applied.
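Whatever form the plan takes, it should restate three things you can verify against your command: what will be searched for, where the search will run, and which tag will be applied. For instance (an illustrative check, not the tool’s exact wording), if you asked for an error to be tagged in the Error Text but the plan targets the Corrected Text, revise your command before executing it.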
Select the Right AI Model: Choose the appropriate AI model for your task in the AI Workspace. Use the Flash model for speed on simple checks and quick queries, and switch to the Pro model for tasks requiring deep linguistic reasoning and nuanced interpretation.
Manage API Usage Efficiently:
- After selecting a text pair, wait for the NLP analysis to finish loading before navigating to another pair. Switching too quickly cancels in-flight requests and wastes API calls.
- Use the manual refresh button for the NLP analysis deliberately and only when you have a clear reason to, as each click sends a new request to the AI engine.