Built on existing processes for assessing the quality of translated content, Auto LQA uses AI to spot linguistic issues.
Auto LQA can be run on selected jobs and generates a job-level quality score. Annotations are presented in the LQA tab in the editors or in scorecards that can be downloaded and shared. Data can be visualized in Phrase Analytics in a separate Auto LQA tab.
The quality score and pass/fail status are generated by default and can be disabled. The score is useful when trying to spotlight potential issues before post-editing (e.g. after running pre-translation), when a score is required outside TMS, or in any other use case where the score itself is the main output needed.
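Phrase does not document the exact scoring formula here. As an illustration only, an MQM-style quality score is typically derived from severity-weighted penalty points per word; the weights and pass threshold below are assumptions, not Phrase's published values.

```python
# Illustrative MQM-style scoring; weights and threshold are assumptions,
# not Phrase's published formula.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 95.0

def quality_score(severities, word_count):
    """Return a 0-100 score from severity-weighted penalties per word."""
    penalty = sum(SEVERITY_WEIGHTS[s] for s in severities)
    return max(0.0, 100.0 - (penalty / word_count) * 100)

severities = ["minor", "minor", "major"]   # issues flagged in a job
score = quality_score(severities, word_count=500)
print(round(score, 2), "PASS" if score >= PASS_THRESHOLD else "FAIL")
```

With two minor and one major issue over 500 words, the penalty density is low, so the job scores well above the assumed pass threshold.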
Note
Due to continuous improvements, the user interface may not be exactly the same as presented in the video.
Enable Auto LQA
To enable Auto LQA, follow these steps:
- From the Settings page, scroll down to the Auto LQA section and select it. The Auto LQA page opens.
- Toggle Auto LQA on.
- Uncheck the quality score option if not required (it is on by default).
- If required, set rates for segment exclusion. These are typically high-quality matches (e.g. 100+ TM matches or 100+ QPS scores) that were already reviewed and don't require further quality checks.
- Click Save.
Auto LQA is enabled and human LQA is disabled in any selected workflow step. The option to run Auto LQA is now available on the Jobs table for Project Managers and Linguists assigned to the jobs.
Note
Linguists must accept the job to run Auto LQA.
Auto LQA can also be enabled as a default for individual projects or project templates in the settings.
Click-through tutorial on setting up and using Auto LQA.
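To make the segment exclusion step concrete, here is a minimal sketch of filtering segments against exclusion thresholds before running Auto LQA. The record fields and threshold values are hypothetical illustrations, not the Phrase TMS data model.

```python
# Hypothetical segment records; field names are illustrative only.
segments = [
    {"id": 1, "tm_match": 101, "qps": None},   # 101% in-context TM match
    {"id": 2, "tm_match": 75,  "qps": 88},
    {"id": 3, "tm_match": None, "qps": 100},
]

# Assumed exclusion thresholds, as configured in the Auto LQA settings.
TM_EXCLUDE_AT = 100
QPS_EXCLUDE_AT = 100

def needs_lqa(seg):
    """Exclude already-reviewed high-quality matches from Auto LQA."""
    if (seg["tm_match"] or 0) >= TM_EXCLUDE_AT:
        return False
    if (seg["qps"] or 0) >= QPS_EXCLUDE_AT:
        return False
    return True

to_check = [s["id"] for s in segments if needs_lqa(s)]
print(to_check)  # only the segment below both thresholds is checked
```

Segments at or above either threshold are skipped, which keeps Auto LQA focused on content that has not already passed review.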
Edit Auto LQA Assessments (CAT web editor)
PMs and Linguists assigned to the job can review and modify the Auto LQA assessments in the LQA tab of the CAT web editor. Click Edit LQA from the ellipses menu of a specific assessment to enter the validation mode.
Note
If the Auto LQA assessment exists only for one job in a set of joined jobs, the editing option will not be available.
In the Auto LQA validation mode, users can delete all annotations by clicking Discard LQA or make the following changes to individual issues:
- Delete the issue if Auto LQA flagged a false positive.
- Edit the issue:
  - Change the Error category (e.g. if Auto LQA misclassified the issue).
  - Adjust the Severity.
  - Edit the description of the issue (e.g. if Auto LQA's explanation or suggested fix is inaccurate).
- Add a new issue or batch-create annotations for parent/child LQA issues (e.g. if Auto LQA missed errors or the issue category is not supported, such as terminology).
Users can only edit or remove AI-generated issues, AI issues they have already edited, or issues they have added.
When editing is complete, click Finish LQA in the LQA pane to recalculate and update the quality score across the editor, project, and Phrase Analytics. The status (PASS/FAIL) is also re-evaluated.
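The effect of validation on the recalculated score can be sketched as follows. Deleting a false positive or downgrading a severity reduces the penalty total, which can flip a FAIL to a PASS. The severity weights and pass threshold are assumptions, not Phrase's published values.

```python
# Sketch of re-scoring after Auto LQA validation; weights and threshold
# are assumptions, not Phrase's published values.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 95.0

def score(issues, word_count):
    penalty = sum(SEVERITY_WEIGHTS[i["severity"]] for i in issues)
    return max(0.0, 100.0 - (penalty / word_count) * 100)

issues = [
    {"id": 1, "severity": "critical"},   # flagged as a false positive
    {"id": 2, "severity": "major"},      # actually only a minor issue
]
words = 300

# Validation pass: delete the false positive, downgrade the other issue.
issues = [i for i in issues if i["id"] != 1]
issues[0]["severity"] = "minor"

final = score(issues, words)
print(round(final, 2), "PASS" if final >= PASS_THRESHOLD else "FAIL")
```

After validation, only one minor issue remains over 300 words, so the recalculated score clears the assumed pass threshold.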