literica[ai]
AI research workspace · v0.9 beta

Your research
library, made
intelligent[ai].

Upload your papers once and get a personal research assistant that has read every one of them. Every answer is grounded in your sources — every claim, traceable to a page.

$0 to start · Reads formulas, tables & figures · Library stays private
Global Chat
📚 entire library · 351 papers

Q Summarize the disagreements between these five papers.

Disagreement clusters around two interpretations of the MACE-reduction effect: Suzuki[4] and Hernández[7] argue a dose-dependent biological mechanism; Patel[18] and Lin[22] attribute the gap to cohort confounding (age, baseline BMI, prior MI). Lincoff[11] is consistent with both readings — its stratified subgroup analysis is the place to look next.
/Trusted by researchers from
Stanford · MIT · ETH Zürich · Cambridge · Max Planck · UC Berkeley
/02 · COMPREHENSION

Reads what other
AIs skip.

Formulas. Tables. Figures. Every layer of a paper — parsed, understood, explained.

READING · 1 page · 412 tokens · 1 equation · 1 table · 1 figure
EQUATION 4.2 · UNDERSTOOD

Policy-gradient objective

Maximizes expected reward by adjusting policy parameters θ in the direction of higher-advantage actions. Variance-reduced via the advantage function Aπ in place of raw returns.

type: objective · family: policy gradient · cf. REINFORCE, A2C
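For readers who want the formula behind the card: the description above matches the standard policy-gradient objective with an advantage baseline. The notation below is the conventional textbook form, not an extract from any parsed paper:

```latex
% Policy-gradient objective with an advantage baseline:
% push parameters theta toward actions with positive advantage.
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[
      \nabla_\theta \log \pi_\theta(a_t \mid s_t)\,
      A^{\pi}(s_t, a_t)
    \right]
```

Replacing raw returns with the advantage \(A^{\pi}\) leaves the gradient unbiased while reducing its variance.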
TABLE 2 · PARSED

Best: Ours-L  87.4% top-1 @ 124M params

Accuracy improves with model size, with diminishing returns past 56M params. 7× FLOPs cost from S→L for +3.2 pts top-1.

rows: 4 · cols: 5 · best row: Ours-L · vs. baseline: +9.3 pt
FIGURE 3 · CAPTIONED

Attention concentrates on tokens 5–6 early, disperses by layer 4

Heatmap shows sharp peaks in layers 1–2 for content tokens, with the receptive field broadening by layer 4. Consistent with the finding in §4.2.

peak: L2 · tok 5–6 · spread: increasing w/ depth
/03 · LITERATURE REVIEW

A draft review,
written from your papers.

Pick a folder. Literica[ai] reads every paper, finds the through-lines, and writes a structured review — with every claim cited to a page.

← Reviews · Attention mechanisms in modern transformer architectures · ✓ Completed · 3 sources · ~312 words · ↓ Export

Attention mechanisms in modern transformer architectures

Introduction

Attention reshaped how sequence models capture long-range dependencies [3]. The field consolidated around self-attention after [1] showed recurrence could be discarded; pretraining work [2] then proved attention layers transfer across NLP tasks.

Themes

Three themes emerge. Architectural minimalism [1] prioritizes parallelism. Bidirectional context [2] reframes pretraining as denoising. Soft alignment [3] anticipated both, reframing translation as attention-weighted lookup.

Methods

All three works rely on dot-product or additive attention [1, 3], scale to large parallel training, and report ablations on heads and alignment depth. Evaluation centers on translation BLEU [1, 3] and downstream NLP benchmarks [2].

Gaps

Two gaps stand out. First, the corpus is silent on the compute–quality trade-off: none of the works report wall-clock or energy budgets. Second, the analysis of failure modes (attention sinks, head collapse) has not yet entered this slice of the literature.

Conclusion

The arc runs from soft alignment as an addition to RNNs [3], to attention as the only mechanism [1], to attention as substrate for transferable pretraining [2]. Future reviews should incorporate efficiency- and analysis-oriented work.

/04 · CITATION NETWORK

See how your sources connect.

Visualize how papers in your library reference each other. Seminal works pull to the center; clusters self-organize by topic; outliers float free. Hover any node to see what it cites and what cites it.

Seminal
Cluster member
Outlier
Attention Is All You Need · BERT · GPT-2 · Scaling Laws · T5

[Every answer is grounded in your library. Every claim is traceable to a specific page. Every workflow — from initial reading to final review — happens in one place.]

— THE LITERICA[AI] PROMISE
/05 · WHY IT MATTERS

Sits in the gap nothing else fills.

GENERIC AI CHAT
Knows the internet. Not your library.

×Hallucinates plausible citations
×No grounding in your sources
×Can't open the PDF you read last Tuesday

REFERENCE MANAGERS
Store files. Don't read them.

×Metadata-only search
×No synthesis, no Q&A
×Citation networks are static

SEARCH TOOLS
Find papers. Don't synthesize them.

×Per-paper, not per-library
×Returns lists, not answers
×Doesn't draft your review

LITERICA[AI]
Grounded answers from your corpus.

Every claim links to a page
Synthesizes across hundreds of papers
Drafts reviews, maps citations
Your library stays private
/06 · BUILT FOR

Anyone who reads a lot.

[01]

Graduate students

Lit reviews, thesis chapters, qualifying exams. Start writing on day one instead of day ninety.

[02]

Academic researchers

Keep up with a fast-moving field. Synthesize across hundreds of sources, in plain language.

[03]

R&D and competitive intel

Technical and patent literature is a daily input. Make every PDF interrogable.

[04]

Journalists, analysts, consultants

Defensible, citation-backed answers from a corpus you curated yourself.

/07 · PRICING

Start free. Pay when
your library scales.

[free]

Starter

Try the workspace.

$0 / forever
  • 50 papers in your library
  • 100 chat messages per month
  • 1 literature review per month
  • Citation network & comprehension
  • Solo workspace
Get started
MOST POPULAR
[pro]

Pro

For serious readers.

$19 / month, per user
  • Unlimited papers & folders
  • Unlimited chat & literature reviews
  • Frontier-tier reasoning models
  • Zotero & Mendeley sync
  • Export reviews to Word, LaTeX, BibTeX
  • Priority processing
Start Pro →
[team]

Team

For research groups.

$49 / month, per user
  • Everything in Pro
  • Shared folders & collaborative chat
  • Notes, comments & pinned answers
  • Admin controls & usage analytics
  • SSO (Google, Microsoft, SAML)
  • Dedicated support channel
Talk to sales
Cancel anytime, keep your library · Billed securely by Stripe · Education discount available · SOC 2 in progress
/08 · GET STARTED

Start a library. Ask it a question. See for yourself.

Free to start. No credit card. Upgrade when your library grows.

50 papers on the free plan · Unlimited on Pro at $19/mo · Teams & institutions available