Dataset preview (first rows):

| text_id (int64, 0–1.03k) | chunk_id (int64) | pixel_values (16 lists) | grapheme_token_ids (3–19 ids) | llama_token_ids (3–15 ids) | text (2–30 chars) |
|---|---|---|---|---|---|
| 0 | 0 | [[255, 255, 255, …]] (truncated) | [1, 19, 0, 2] | [1, 3500, 273, 2] | papan |
| 1 | 0 | [[255, 255, 255, …]] (truncated) | [1, 19, 116, 2] | [1, 10203, 29875, 2] | pagi |
| 2 | 0 | [[255, 255, 255, …]] (truncated) | [1, 110, 110, 22, 2] | [1, 367, 16863, 2] | bebek |
| 3 | 0 | [[255, 255, 255, …]] (truncated) | [1, 0, 50, 41, 2] | [1, 286, 5893, 29875, 2] | menteri |
| 4 | 0 | [[255, 255, 255, …]] (truncated) | [1, 156, 282, 2] | [1, 289, 574, 29895, 574, 2] | bangkang |
| 5 | 0 | [[255, 255, 255, …]] (truncated) | [1, 194, 194, 2] | [1, 2594, 1646, 2] | barbar |
| 6 | 0 | [[255, 255, 255, …]] (truncated) | [1, 6, 0, 2] | [1, 472, 1450, 2] | ataw |
| 7 | 0 | [[255, 255, 255, …]] (truncated) | [1, 103, 135, 2] | [1, 282, 686, 29884, 2] | pungu |
| 8 | 0 | [[255, 255, 255, …]] (truncated) | [1, 66, 208, 2] | [1, 21062, 272, 2] | kotor |
| 9 | 0 | [[255, 255, 255, …]] (truncated) | [1, 13, 0, 2] | [1, 15296, 1794, 2] | sabai |
# Lampung PixelGPT Dataset

This dataset contains preprocessed Lampung text data for training PixelGPT models.
## Dataset Statistics

- Language: Lampung (`lampung`)
- Total samples: 1,029
- Train samples: 945
- Test samples: 84
## Tokenizers

- Grapheme tokenizer: `izzako/sunda-llama-tokenizer`
- LLaMA tokenizer: `ernie-research/DualGPT`
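In the preview rows, every token-id list begins with `1` and ends with `2`, which in LLaMA-style tokenizers are the BOS and EOS ids. A minimal helper for stripping that framing before analysis might look like this (a sketch; the ids `1`/`2` are assumed from the preview, not confirmed by the tokenizer configs):

```python
# Assumed special-token ids, inferred from the preview rows
# (every list is framed as [1, ..., 2], the LLaMA convention).
BOS_ID, EOS_ID = 1, 2

def strip_special_tokens(token_ids):
    """Return token ids without the leading BOS and trailing EOS."""
    ids = list(token_ids)
    if ids and ids[0] == BOS_ID:
        ids = ids[1:]
    if ids and ids[-1] == EOS_ID:
        ids = ids[:-1]
    return ids

# Example: the llama_token_ids for "papan" in the preview
print(strip_special_tokens([1, 3500, 273, 2]))  # [3500, 273]
```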
## Features

- `text_id`: Document identifier
- `chunk_id`: Chunk identifier within the document
- `pixel_values`: Rendered pixel representation of the aksara text
- `grapheme_token_ids`: Token IDs from the grapheme-based tokenizer (with EOS token)
- `llama_token_ids`: Token IDs from the LLaMA tokenizer (with EOS token)
- `text`: Original text chunk
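As a concrete illustration of the schema, a sample is a plain dict with these fields. The token ids and text below are taken from the first preview row; the `pixel_values` are shortened placeholders, not real dataset content:

```python
# Illustrative sample matching the schema above. Token ids and text
# come from the preview; pixel_values is a shortened stand-in.
sample = {
    "text_id": 0,
    "chunk_id": 0,
    "pixel_values": [[255] * 256 for _ in range(16)],  # 16 rendered rows
    "grapheme_token_ids": [1, 19, 0, 2],               # BOS ... EOS
    "llama_token_ids": [1, 3500, 273, 2],
    "text": "papan",
}

# Basic sanity checks on the structure
assert isinstance(sample["text_id"], int)
assert all(0 <= v <= 255 for row in sample["pixel_values"] for v in row)
assert sample["grapheme_token_ids"][-1] == 2  # EOS-terminated
```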
## Renderer Configuration

- Renderer path: `../../renderers/lampung_renderer`
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("izzako/lampung-pixelgpt")

# Access the train and test splits
train_data = dataset["train"]
test_data = dataset["test"]

# Example: get the first sample
sample = train_data[0]
print(sample["text"])
```
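PixelGPT consumes the rendered pixels rather than the raw text, so a typical preprocessing step is to turn `pixel_values` into a normalized float array. The sketch below assumes `pixel_values` is a nested list of 0–255 grayscale values, as the preview suggests:

```python
import numpy as np

def to_pixel_tensor(pixel_values):
    """Convert a sample's nested pixel list to a float32 array in [0, 1]."""
    arr = np.asarray(pixel_values, dtype=np.float32)
    return arr / 255.0

# Stand-in for sample["pixel_values"]: 16 rows of white pixels
fake_pixels = [[255] * 8 for _ in range(16)]
tensor = to_pixel_tensor(fake_pixels)
print(tensor.shape)  # (16, 8)
print(tensor.max())  # 1.0
```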
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{lampung_pixelgpt,
  title     = {Lampung PixelGPT Dataset},
  author    = {Musa Izzanardi Wijanarko},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/izzako/lampung-pixelgpt}
}
```