---
license: cc-by-3.0
tags:
- agent
- workflow
- multimodal
- spreadsheet
- pdf
- image
- code
- finance
- accounting
modalities:
- text
- spreadsheet
- pdf
- image
- code

configs:
  - config_name: Finch_Dataset_All
    data_files:
    - split: test
      path:
        - finch_workflows_test.jsonl
---

<img src="figs/finch_workflow.jpeg" width="1000" />

# Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows

This repository contains the dataset for **Finch**, an enterprise-grade benchmark for evaluating an agent’s ability to work like a skilled finance & accounting expert (work IQ) on real-world professional workflows.

* **Paper**: https://arxiv.org/abs/2512.13168

---

## Dataset Description

Finch focuses on **messy and long-horizon finance & accounting workflows** that span:

> data entry/import, structuring/formatting, web search, cross-sheet/file retrieval, calculation, financial modeling, validation, translation, visualization, and reporting.

The workflows are derived from **real-world enterprise workspaces** (primarily Enron, as well as corporations in the EUSES Corpus, investment and securities companies, World Bank, Canadian/British government agencies, and more), including:

- Enterprise **email threads** where collaborators naturally describe, discuss, and track workflows
- Large and messy **spreadsheets** with multimodal artifacts including text, tables, formulas, charts, pivots, images, etc.
- Interlinked **PDFs and documents** that provide additional business context  

We adopt a three-step workflow labeling process:

1. **Inducing workflow types and instances** from real collaborative context in **enterprise email threads** ([Enron Corpus](https://en.wikipedia.org/wiki/Enron_Corpus): 500,000 emails from 150 executives and employees).  
2. **Deriving concrete workflow instances** by analyzing changes across **spreadsheet versions** (15,000 versioned spreadsheets from Enron and [EUSES](https://dl.acm.org/doi/10.1145/1082983.1083242)) and designing workflows based on high-quality artifacts from investment and securities companies, World Bank, Canadian/British government agencies, WideSearch, Dabstep, and more.  
3. **Conducting meticulous expert annotation** of task instructions, input files, and reference outputs, involving hundreds of hours of expert work.

<img src="figs/annotation.png" width="1000" />

This process yields **172 enterprise-grade workflows** (primarily multi-task composite workflows) involving 1,710 spreadsheets and 27 million cells, capturing the intrinsically **messy, long-horizon, knowledge-intensive, and collaborative nature** of real-world finance & accounting work. This release provides full annotations for the first 72 workflows; the remaining 100 will be released in a subsequent update.

<img src="figs/distribution_chart.png" width="1000" />

We conduct both human and automated evaluations of frontier AI systems, including GPT-5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max. GPT-5.1 Pro spends 16 minutes per workflow on average and achieves the highest pass rate at 38%, while Claude Sonnet 4.5 passes just 25.0%, revealing a substantial performance gap on real-world enterprise scenarios.

<img src="figs/exp_results.jpeg" width="1000" />

---

## Examples

Example 1: Review the Inv & WC Value Adj summary tab and add the missing cross‑sheet data references to the other worksheets so the roll‑up pulls the correct figures. Return the updated file with those links in place.

<img src="figs/example_1.jpeg" width="1000" />

Example 2: Add a new worksheet named "Scenario3" to the same workbook, mirroring the structure, row/column layout, monthly detail table, and chart area of "Scenario1". For Scenario3, update the hedging assumptions to a balanced allocation: 10-Yr 25%, 5-Yr 20%, 1-Yr 15%, May-Sep 20%, Q3 15%. Keep the note "Maximum Monthly Average Short Position to Cover (July Peak) = 30,508 MW" unchanged; only the new sheet should be added, and formulas may be used within it.

<img src="figs/example_2.jpeg" width="1000" />

Example 3: Transcribe the content from the image into the Excel file.

<img src="figs/example_3.jpeg" width="1000" />

Example 4: Per the red parameters and the Method 1/Method 2 guidance noted in H8 and H9, complete the formulas in columns T and U. The formulas in both columns start from 1, representing the initial signal to hold Index 1; within them, 1 represents the signal to hold Index 1, -1 the signal to hold Index 2, and 0 the signal to make no change. Then complete column I. The method selection in B11 should drive the model so that all cells and charts refresh consistently when switching between methods.

<img src="figs/example_4.jpeg" width="1000" />

---


## 📁 Dataset Structure

The corpus is released in **JSONL** format.  
Each line corresponds to one **workflow-centric example**:

```json
{
  "id": "<workflow identifier>",
  "instruction_en": "<English task instruction for a finance & accounting workflow>",
  "source_files": ["<input file name>", "..."],
  "source_files_urls": ["<input file download URL>", "..."],
  "reference_outputs": {
    "files": ["<reference output file name>"],
    "text": "<textual reference output>"
  },
  "reference_file_urls": ["<reference output file download URL>"],
  "task_type": "<task category (e.g., reporting, modeling)>",
  "business_type": "<business domain (e.g., budgeting, trading)>",
  "task_constraints": "<task constraints (e.g., perform modifications rather than generation from scratch)>"
}
```

To preserve the integrity of the benchmark, Finch annotations should not be included in model training corpora.

---

## 📣 Feedback & Issues

If you find any issues with the dataset or have suggestions, please open a discussion in the **Community** tab — we value your feedback!

## 📣 Citation
```bibtex
@article{dong2025finch,
  title={Finch: Benchmarking Finance \& Accounting across Spreadsheet-Centric Enterprise Workflows},
  author={Dong, Haoyu and Zhang, Pengkun and Gao, Yan and Dong, Xuanyu and Cheng, Yilin and Lu, Mingzhe and Yakefu, Adina and Zheng, Shuxin},
  journal={arXiv preprint arXiv:2512.13168},
  year={2025}
}
```

**📧 Contact:** [email protected]