APTO Releases High-Accuracy Japanese Reasoning Data for LLM Fine-Tuning, Free of Charge


This dataset can help improve reasoning ability in Japanese and reduce redundant inference.

This allows for faster inference even under limited token counts and memory usage.

Dataset Details

Each data entry includes a question that requires reasoning and its corresponding answer, with the thought process described within 'think' XML tags.
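As a minimal sketch, an entry of this shape could be parsed as follows; the field names (question, answer, tags) are assumptions for illustration, not APTO's published schema (see Figure 1 below for the actual format):

    import re

    # Hypothetical entry; field names are assumptions, not APTO's published schema.
    entry = {
        "question": "時速60kmで90分走ると、何km進みますか?",
        "answer": "<think>90分は1.5時間。60 km/h × 1.5 h = 90 km。</think>90kmです。",
        "tags": ["Mathematics"],
    }

    # Separate the reasoning trace (inside the think tags) from the final answer.
    match = re.search(r"<think>(.*?)</think>\s*(.*)", entry["answer"], re.DOTALL)
    if match:
        thought, final = match.group(1).strip(), match.group(2).strip()
        print("thought:", thought)
        print("final answer:", final)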


This dataset consists of high-quality data generated by our proprietary technology and manually reviewed for accuracy.

Validation using models such as Qwen3 has confirmed that training with this dataset improves reasoning ability in Japanese and enables more efficient inference.

Additionally, testing with the Japanese MT-Bench showed performance improvements, particularly in categories such as reasoning, math, and coding.


Figure 1: An example of the JSON format used in the free public dataset.

Tag Information

Each question-and-answer conversation is labeled with tag information indicating the subject matter and genre of the conversation.

The following labels are used:

People, Human Relations, Social Studies, Business, Economics, Politics, Law, Technology, Religion, Astronomy, Meteorology, Fashion, Programming, Manufacturing, Daily Life, Mathematics, Health, Medicine, Education, Biology, Japanese, Physics, Chemistry, Geography, Science, History, Linguistics, Literature, Performing Arts, Art, Music, Transport, Food, Recipes, Leisure, Games, Sports, and Industry.
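As a minimal sketch, filtering entries by these tags might look like the following, again assuming a per-record "tags" field (an assumption; the public schema may differ):

    # Minimal tag-filtering sketch; the "tags" field name is an assumption.
    records = [
        {"question": "Q1", "answer": "A1", "tags": ["Mathematics"]},
        {"question": "Q2", "answer": "A2", "tags": ["Programming", "Technology"]},
        {"question": "Q3", "answer": "A3", "tags": ["History"]},
    ]

    def filter_by_tags(items, wanted):
        """Return items whose tag list overlaps the wanted tags."""
        wanted = set(wanted)
        return [it for it in items if wanted & set(it.get("tags", []))]

    print(filter_by_tags(records, ["Mathematics", "Programming"]))  # -> Q1 and Q2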

Performance Evaluation Results of the Data

With the Qwen3 model, the thought process enclosed in 'think' tags often became lengthy depending on the task, particularly in multi-turn conversations.

In fact, for math and reasoning tasks in the Japanese MT-Bench, there were many cases where the model engaged in extremely long trial-and-error thinking and failed to reach a conclusion.

In environments with limited token availability, tests showed that avoiding reasoning sometimes yielded higher scores.

However, by fine-tuning with our reasoning dataset, the model was able to reason in Japanese while also suppressing redundant inference, resulting in faster inference even under token-count and memory-usage constraints.

Figure 2 shows the evaluation results from the Japanese MT-Bench under a restricted maximum token output setting. *¹

( *¹ All results were generated using 4-bit quantization, with a maximum output of 4,096 tokens.)
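For reference, an evaluation setup along the lines of *¹ could be reproduced with Hugging Face transformers and bitsandbytes roughly as sketched below; the exact Qwen3 size and generation settings APTO used are not stated, so the model ID and parameters here are assumptions:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Sketch of the described setup: 4-bit quantization, 4,096-token output cap.
    # The model ID is illustrative; APTO does not state which Qwen3 size was used.
    model_id = "Qwen/Qwen3-8B"
    quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=quant, device_map="auto"
    )

    messages = [{"role": "user", "content": "時速60kmで90分走ると、何km進みますか?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=4096)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))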


Figure 2: Japanese MT-Bench under a restricted maximum token output setting.

The 'Baseline (Qwen3)' refers to the score of the standard Qwen3 model with its optional reasoning mode enabled.

'+FineTuning' indicates the score after fine-tuning using 100 samples from the included dataset, combined with synthetically generated data created under the same conditions.

In the Japanese MT-Bench, there are 10 questions for each of the 8 categories shown under 'Category.'

The answers were automatically evaluated using OpenAI's GPT-4.1 model API, with scores given on a 10-point scale. The table shows the average of these scores. *² *³

(*² Additionally, during evaluation by GPT-4.1, a Chain-of-Thought (CoT) process prompting the model to explain its reasoning was added for validation.)
(*³ Since output variability occurs during generation, the scores represent the average of four repeated runs of the same benchmark test.)

The 'Total' score represents the average of the scores across all eight categories.
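As a worked sketch of that aggregation (10 questions per category, 4 repeated runs, with 'Total' averaging the 8 category means):

    import statistics

    # Sketch of the aggregation described above. scores maps each category to a
    # list of 4 runs, each run holding 10 per-question judge scores (1-10 scale).
    def category_score(runs):
        return statistics.mean(statistics.mean(run) for run in runs)

    def total_score(scores):
        return statistics.mean(category_score(runs) for runs in scores.values())

    example = {"math": [[7, 8, 6, 9, 7, 8, 7, 6, 8, 9]] * 4,
               "coding": [[6, 7, 7, 8, 6, 7, 8, 7, 6, 7]] * 4}
    print(total_score(example))  # mean of the per-category means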

As noted above, improvements were observed across the board, including in categories that involve reasoning.

This suggests that the model is now able to generate appropriate responses even with a limited number of tokens, effectively enhancing its performance in Japanese.

This dataset is also publicly available on Hugging Face at the following link:

https://huggingface.co/datasets/APTOinc/japanese-reasoning-dataset-sample
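A minimal sketch for loading the sample with Hugging Face datasets and fine-tuning with TRL follows; the split name, model size, and trainer options are assumptions that may need adjusting to the dataset's actual schema and your TRL version:

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Load the public sample (the split name is an assumption).
    dataset = load_dataset("APTOinc/japanese-reasoning-dataset-sample", split="train")

    # Supervised fine-tuning sketch; the model and settings are illustrative,
    # not APTO's reported setup.
    trainer = SFTTrainer(
        model="Qwen/Qwen3-8B",
        train_dataset=dataset,
        args=SFTConfig(output_dir="qwen3-ja-reasoning"),
    )
    trainer.train()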

For our existing clients, it will also be shared soon through our email newsletter. We hope it helps accelerate your AI development and enhance accuracy. Feel free to make full use of it!

About APTO, Inc.

APTO provides AI development support services focused on data, the most critical factor influencing accuracy in AI development.

Our offerings include:

  • harBest, a data collection and annotation platform utilizing crowd workers
  • harBest Dataset, which accelerates the preparation of data, a common bottleneck in early development stages
  • harBest Expert, which enhances data quality using the knowledge of field experts

By supporting AI development projects that face data-related challenges, we have earned the trust of many enterprise clients both in Japan and abroad.

We provide support for AI data, model development, GPU resources, and a variety of other needs. If you're facing challenges in AI development, please feel free to reach out to us.

CONTACT: Katina Nguyen, [email protected]
