AI Ethics
Amace Literary Systems AB (“Amace Literary Systems”) develops and operates Emily, a system for literary quality assessment grounded in a defined framework of editorial principles. As the operator of an AI-supported assessment infrastructure that processes original manuscripts and produces structured assessments used in editorial decision-making, we treat responsible development and deployment as foundational, both to the legitimacy of our product and to the trust placed in us by publishers, literary agents, and authors.
This statement sets out the principles that guide how we develop, operate, and govern Emily. It reflects our obligations under applicable law, our commitments to the stakeholders whose work we assess, and the editorial standards on which the system is built.
Scope and applicability
This statement applies to the development and operation of Emily across its two services, Emily Publisher and Emily Author, and to all components of the underlying assessment framework, including the curated reference corpus of professionally evaluated manuscripts and the orchestration architecture through which assessments are produced.
The principles set out here are binding on Amace Literary Systems’ internal operations and are applied consistently across our engineering, editorial, and commercial processes. They are maintained in dialogue with professional editors and advisors.
Emily is not used to make legally binding or consequential decisions about natural persons. It produces assessments that support human editorial judgement; the publisher, literary agent, or author retains all decision-making authority.
Core principles
Editorial judgement as foundation
Emily is designed to make editorial judgement more structured, reproducible, and traceable, not to replace it. The framework that governs every assessment is built on decades of professional editorial practice and is calibrated in continuous collaboration with experienced professional readers. Emily never overrides the judgement of the editor, agent, or author it serves.
Transparency of methodology
Every assessment produced by Emily is traceable to a defined set of principles within our assessment framework. The framework itself is documented, and the logic behind any given assessment can be made visible to the user. We treat this transparency as a prerequisite for legitimate use in editorial contexts, not as an optional feature.
Authorial and publisher rights
Manuscripts submitted to Emily remain the intellectual property of their authors and rightsholders. We do not use submitted manuscripts to train or retrain generative models. The reference corpus that informs the framework is built exclusively on texts for which we hold explicit permissions, and on professional editorial assessments produced by readers under contract. Copyright is treated as a central design principle, not a downstream compliance matter.
Privacy and confidentiality
Manuscripts processed by Emily are unpublished, often commercially sensitive, and subject to strict expectations of confidentiality. We operate on principles of data minimisation, purpose limitation, and secure access control. No manuscript is accessible to other users, to our editorial partners, or to any party other than those explicitly authorised by the rightsholder. Retention is governed by explicit policies and the user’s right to deletion.
Fairness and literary representation
Literary quality cannot be separated from the traditions, languages, and forms in which it is expressed. We are conscious of the risk that any assessment framework may inadvertently privilege dominant literary traditions over others, and we treat the composition of our reference corpus — across genres, formats, maturity stages, and linguistic registers — as an ongoing responsibility. The framework is calibrated rather than prescriptive: it measures how well a work does what it sets out to do, rather than imposing a single stylistic ideal.
Human oversight and accountability
Emily operates within a defined architecture of human oversight. Professional editors calibrate the framework; an internal governance layer verifies that system outputs remain within defined parameters; and commercial users are given the tools to interpret, weight, and act on system outputs in line with their editorial priorities. Amace Literary Systems is accountable for the assessment framework; editorial decisions remain with the publisher, agent, or author.
Epistemological integrity
We draw a deliberate distinction between filtering and assessment. Emily operates in both capacities but never conflates them. Assessments are produced against the assessment framework and the reference corpus; preferences and filters are handled separately and are never presented as quality judgements. The corpus exposes only structured metadata to the generative layer, never raw manuscripts or editorial statements, ensuring that every assessment is anchored in the framework rather than in the language model’s own inclinations.
Regulatory alignment
Emily operates within the regulatory framework established by the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024 and is being implemented in phases. Relevant obligations include:
The prohibitions under Article 5, applicable from 2 February 2025. Emily is not used for any practice listed as prohibited, including social scoring, exploitative profiling, or unauthorised biometric processing.
Transparency obligations and the general-purpose AI regime under Articles 51–55, applicable from 2 August 2025. Where Emily incorporates third-party general-purpose AI models, we maintain appropriate contractual assurances, documentation, and copyright safeguards in line with the AI Office’s guidance and the GPAI Code of Practice.
Obligations for high-risk AI systems under Article 6 and Annex III, applicable from 2 August 2026. Emily is not classified as a high-risk system under these provisions: it does not determine access to essential services, employment, education, or fundamental rights. We keep this classification under active review as the system evolves and as the Commission issues further guidance.
AI literacy obligations under Article 4, applicable from 2 February 2025. Staff involved in the development and operation of Emily maintain an appropriate level of AI literacy for their role.
In addition to the AI Act, Emily is developed and operated in accordance with the General Data Protection Regulation (Regulation (EU) 2016/679), Swedish data protection law, and applicable copyright law, including Directive (EU) 2019/790 on copyright in the Digital Single Market.
Operational governance
The principles set out here are embedded in the day-to-day operation of Emily. Specifically:
Technical documentation is maintained for every layer of the system, including the assessment framework, the reference corpus, and the orchestration architecture.
A defined governance layer applies programmatic verification to system outputs to ensure that Emily remains within the bounds of its defined assessment behaviour.
Calibration logs record every material change to the framework, including rationale, editorial input, and the version under which the change takes effect.
Incidents, anomalies, and material changes in system behaviour are recorded and reviewed, with escalation to Amace Literary Systems’ leadership and, where appropriate, to our scientific and editorial advisors.
Continuous review
AI governance is a shifting landscape, both legally and technically. We recognise that the principles set out here will need to evolve in parallel with the regulatory framework, with advances in AI capabilities, and with the continued development of editorial norms in a market that is itself changing rapidly.
We commit to reviewing this statement at least annually, and whenever a material change in law, capability, or system architecture makes such review necessary. Where new obligations or best practices emerge, we will adapt our internal processes and update this statement accordingly.
Last updated: April 2026