Benchmarking pre-trained text embedding models in aligning built asset information

Research output: Contribution to journal › Journal Article › peer-review

1 Citation (Scopus)

Abstract

Accurate mapping of built asset information to various data classification systems and taxonomies is crucial for effective asset management, whether for compliance at project handover or in ad-hoc data integration scenarios. Due to the complex nature of built asset data, which predominantly comprises technical text elements, this process remains largely manual and reliant on domain expert input. Recent breakthroughs in contextual text representation learning (text embedding), particularly through pre-trained large language models, offer promising approaches that can help automate the cross-mapping of built asset data. However, no comprehensive evaluation has yet been conducted to assess these models’ ability to effectively represent the complex semantics specific to built asset technical terminology. This study presents a comparative benchmark of state-of-the-art text embedding models to evaluate their effectiveness in aligning built asset information with domain-specific technical concepts. Our proposed datasets are derived from two renowned built asset data classification dictionaries. The results of our benchmarking across six proposed datasets, covering clustering, retrieval, and reranking tasks, show performance variations among models, deviating from the common trend of larger models achieving higher scores. Our results underscore the importance of domain-specific evaluations and of future research into domain adaptation techniques, with instruction tuning as a promising direction. The benchmarking resources are published as an open-source library, which will be maintained and extended to support future evaluations in this field.
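To make the retrieval task concrete: aligning an asset description with a classification dictionary amounts to ranking candidate concept labels by the similarity of their embeddings to the query embedding. The sketch below illustrates this with toy 3-dimensional vectors standing in for real model outputs; the labels and vectors are purely hypothetical and not taken from the paper's datasets.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_candidates(query_vec, candidates):
    """Rank candidate concept labels by similarity to the query embedding,
    best match first (the core of an embedding-based retrieval task)."""
    return sorted(candidates.items(),
                  key=lambda kv: cosine(query_vec, kv[1]),
                  reverse=True)

# Toy embeddings standing in for pre-trained model outputs (illustrative only)
query = [0.9, 0.1, 0.0]  # e.g. embedding of "centrifugal pump" description
concepts = {
    "Pump":  [0.8, 0.2, 0.1],
    "Valve": [0.1, 0.9, 0.0],
    "Duct":  [0.0, 0.2, 0.9],
}

ranking = rank_candidates(query, concepts)
print([label for label, _ in ranking])  # → ['Pump', 'Valve', 'Duct']
```

In practice the vectors would come from a pre-trained embedding model, and benchmark metrics (e.g. recall@k or reranking quality) would be computed over many such queries rather than a single example.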

Original language: English
Article number: 23866
Journal: Scientific Reports
Volume: 15
Issue number: 1
DOIs
Publication status: Published - Dec 2025

Keywords

  • Benchmark
  • Dataset
  • Domain-specific evaluation
  • Information alignment
  • Large language models (LLMs)
  • Pre-trained language model
  • Representation
  • Text embedding
