Auto LQA (TMS)

Built on established processes for assessing the quality of translated content, Auto LQA uses AI to help spot linguistic issues.

Auto LQA can be run on selected jobs and generates a job-level quality score. Annotations are presented in the LQA tab of the editors, or in scorecards that can be downloaded and shared. Data can be visualized in a separate Auto LQA tab in Phrase Analytics.

The quality score and pass/fail status are generated by default and can be disabled. Disabling them is useful when the goal is to spotlight potential issues before post-editing (e.g. after running pre-translation), when the score is needed outside TMS, or in any other use case where obtaining the score isn't the primary motivation.
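
As context for how such a score behaves, here is a minimal sketch of an MQM-style scoring model, in which severity-weighted error penalties are normalized against the evaluated word count. The penalty weights, the per-100-words normalization, and the pass threshold are illustrative assumptions, not Phrase's published formula.

```python
# Minimal sketch of an MQM-style LQA quality score.
# Assumption: the penalty weights, normalization window, and pass
# threshold below are illustrative, not Phrase's published formula.

SEVERITY_PENALTIES = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}

def quality_score(issues: list[dict], word_count: int) -> float:
    """Return a 0-100 score: 100 minus penalties normalized per 100 words."""
    total_penalty = sum(SEVERITY_PENALTIES[i["severity"]] for i in issues)
    return max(100 - (total_penalty / word_count) * 100, 0.0)

def passes(score: float, threshold: float = 95.0) -> bool:
    """Assumed pass/fail cut-off; the real threshold is configurable."""
    return score >= threshold

issues = [
    {"category": "accuracy", "severity": "major"},
    {"category": "fluency", "severity": "minor"},
]
score = quality_score(issues, word_count=250)
print(f"score={score:.1f} pass={passes(score)}")  # score=97.6 pass=True
```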

Enable Auto LQA

To enable Auto LQA, follow these steps:

  1. From the Settings page, scroll down to the Quality section and select Auto LQA.

    The Auto LQA page opens.

  2. Toggle Auto LQA on.

  3. If the quality score is not required, uncheck Generate quality score (checked by default).

  4. If required, set rates for segment exclusion.

    These are typically high-quality matches (e.g. 100%+ TM matches or QPS scores of 100) that were already reviewed and don't require further quality checks; see the sketch after these steps.

  5. Click Save.

    Auto LQA is enabled and human LQA is disabled in any selected workflow step. The Run Auto LQA option is now available on the Jobs table for Project Managers and Linguists assigned to the jobs.

    Note

    Linguists must accept the job to run Auto LQA.
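
To make the exclusion rates from step 4 concrete, the sketch below filters segments before the quality check runs. The segment fields (tm_match, qps_score) and the default thresholds are assumptions for illustration; in Phrase TMS the rates are configured on the Auto LQA settings page.

```python
# Illustrative segment-exclusion filter, mirroring step 4 above.
# Field names (tm_match, qps_score) and thresholds are assumptions;
# in Phrase TMS the exclusion rates are set in the Auto LQA settings UI.

def should_run_auto_lqa(segment: dict,
                        tm_exclude_at: int = 100,
                        qps_exclude_at: int = 100) -> bool:
    """Skip segments whose TM match or QPS score meets the exclusion rate,
    on the assumption they were already reviewed."""
    if segment.get("tm_match", 0) >= tm_exclude_at:
        return False
    if segment.get("qps_score", 0) >= qps_exclude_at:
        return False
    return True

segments = [
    {"id": 1, "tm_match": 101, "qps_score": 0},   # 101% TM match -> excluded
    {"id": 2, "tm_match": 85, "qps_score": 100},  # QPS 100 -> excluded
    {"id": 3, "tm_match": 75, "qps_score": 60},   # checked by Auto LQA
]
to_check = [s for s in segments if should_run_auto_lqa(s)]
print([s["id"] for s in to_check])  # [3]
```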

Auto LQA can also be enabled as a default for individual projects or project templates in the Workflow settings.
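
For teams that manage many projects, settings like this are often worth scripting. The sketch below shows the general shape of such a call with the Python requests library; the endpoint path is illustrative and the "autoLqa" payload uses made-up field names, not documented Phrase TMS API fields, so consult the official API reference before relying on any of this.

```python
import requests

# Hypothetical sketch: toggling Auto LQA on a project over REST.
# The "autoLqa" payload below uses made-up field names for illustration;
# consult the official Phrase TMS API reference for the real contract.

BASE_URL = "https://cloud.memsource.com/web"  # Phrase TMS API host
TOKEN = "..."        # session token from the authentication endpoint
PROJECT_UID = "..."  # UID of the target project

payload = {
    "autoLqa": {                        # hypothetical field names
        "enabled": True,
        "generateQualityScore": False,
        "excludeTmMatchesAtOrAbove": 100,
    }
}
resp = requests.patch(
    f"{BASE_URL}/api2/v1/projects/{PROJECT_UID}",  # illustrative path
    headers={"Authorization": f"ApiToken {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Settings updated:", resp.status_code)
```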

Edit Auto LQA Assessments (CAT web editor)

PMs and Linguists assigned to the job can review and modify Auto LQA assessments in the LQA tab of the CAT web editor. Click Edit LQA from the ellipsis menu of a specific assessment to enter validation mode.

Note

If the Auto LQA assessment exists only for one job in a set of joined jobs, the editing option will not be available.

In the Auto LQA validation mode, users can delete all annotations by clicking Discard LQA or make the following changes to individual issues:

  • Delete the issue if Auto LQA flagged a false positive.

  • Edit the issue:

    • Change the Error category (e.g. if Auto LQA misclassified the issue).

    • Adjust the Severity.

    • Edit the description of the issue (e.g. if Auto LQA's explanation or suggested fix is inaccurate).

  • Add a new issue or batch-create annotations for parent/child LQA issues (e.g. if Auto LQA missed errors or the issue category is not supported, such as terminology).

Users can only edit or remove AI-generated issues, AI issues they have already edited, or issues they have added.

When editing is complete, click Finish LQA in the LQA pane to recalculate and update the quality score across the editor, project, and Phrase Analytics. The status (PASS/FAIL) is also re-evaluated.
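
As a mental model for the validation rules above, the sketch below represents annotations with an origin, gates edits the way the editor does, and recalculates the result after a false positive is deleted. Field names are assumptions, and quality_score()/passes() refer to the scoring sketch earlier in this article.

```python
from dataclasses import dataclass

# Illustrative model of the validation rules above. Field names are
# assumptions; quality_score() and passes() come from the scoring
# sketch earlier in this article.

@dataclass
class LqaIssue:
    category: str         # e.g. "accuracy", "fluency"
    severity: str         # e.g. "minor", "major"
    description: str
    origin: str           # "ai" for AI-generated, "human" for manually added
    created_by: str = ""  # username, for human-added issues

def can_modify(issue: LqaIssue, user: str) -> bool:
    """AI-generated issues (including those the user already edited) and the
    user's own issues are editable; another reviewer's issues are not."""
    return issue.origin == "ai" or issue.created_by == user

issues = [
    LqaIssue("accuracy", "major", "Mistranslated product name", "ai"),
    LqaIssue("fluency", "minor", "Flagged in error", "ai"),
]

# Delete a false positive, then recalculate as Finish LQA would.
false_positive = issues[1]
if can_modify(false_positive, user="reviewer1"):
    issues.remove(false_positive)

score = quality_score([{"severity": i.severity} for i in issues], word_count=250)
print(f"score={score:.1f} pass={passes(score)}")  # score=98.0 pass=True
```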
