Integrating multiple (sub-)systems is essential for creating advanced Information Systems (ISs). Difficulties mainly arise when integrating dynamic environments across the IS lifecycle, e.g., services that do not yet exist at design time. A traditional approach is a registry that provides the API documentation of the systems’ endpoints. Large Language Models (LLMs) have been shown to be capable of automatically creating system integrations (e.g., as service composition) based on this documentation, but they require concise input due to input token limitations, especially regarding comprehensive API descriptions. Currently, it is unknown how best to preprocess these API descriptions. In this work, we (i) analyze the usage of Retrieval Augmented Generation (RAG) for endpoint discovery and the chunking, i.e., preprocessing, of state-of-practice OpenAPIs to reduce the input token length while preserving the most relevant information. To further reduce the input token length for the composition prompt and improve endpoint retrieval, we propose (ii) a Discovery Agent that only receives a summary of the most relevant endpoints and retrieves specification details on demand. We evaluate RAG for endpoint discovery using the RestBench benchmark, first for the different chunking possibilities and parameters, measuring endpoint retrieval recall, precision, and F1 score. Then, we assess the Discovery Agent using the same test set. With our prototype, we demonstrate how to successfully employ RAG for endpoint discovery to reduce the token count. While our approach achieves high recall, precision, and F1 values, further research is necessary to retrieve all requisite endpoints. Our experiments show that, for preprocessing, LLM-based and format-specific approaches outperform naïve chunking methods. Relying on an agent further enhances these results, as the agent splits the task into multiple fine-grained subtasks, improving the overall RAG performance in terms of token count, precision, and F1 score.
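The following is a minimal illustrative sketch (not the artifact's implementation, which ships in code.zip) of the core idea: chunking an OpenAPI document into one chunk per endpoint and retrieving the top-k endpoints for a natural-language task via embedding similarity. The sentence-transformers dependency, the BAAI/bge-small-en-v1.5 model id, and the example endpoints are assumptions for illustration only.

import numpy as np
from sentence_transformers import SentenceTransformer

def chunk_openapi(spec: dict) -> list[dict]:
    """Split an OpenAPI spec into one chunk per endpoint (HTTP method + path)."""
    chunks = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            chunks.append({
                "endpoint": f"{method.upper()} {path}",
                "text": f"{method.upper()} {path}: {op.get('summary', '')} {op.get('description', '')}",
            })
    return chunks

def retrieve(query: str, chunks: list[dict], model, top_k: int = 5) -> list[str]:
    """Embed all endpoint chunks and the query, return the top-k endpoints by cosine similarity."""
    texts = [c["text"] for c in chunks]
    emb = model.encode(texts, normalize_embeddings=True)
    q = model.encode([query], normalize_embeddings=True)
    scores = (emb @ q.T).ravel()
    ranked = np.argsort(-scores)[:top_k]
    return [chunks[i]["endpoint"] for i in ranked]

if __name__ == "__main__":
    # Toy spec with two endpoints; the experiments use the full RestBench OpenAPI descriptions.
    spec = {"paths": {
        "/movie/{movie_id}/credits": {"get": {"summary": "Get the cast and crew for a movie."}},
        "/search/movie": {"get": {"summary": "Search for movies by title."}},
    }}
    model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # assumed embedding model id
    print(retrieve("Who acted in the movie Inception?", chunk_openapi(spec), model, top_k=2))

The Discovery Agent described above follows the same retrieval step but passes only endpoint summaries to the composition prompt, fetching full specification details on demand.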
Content:
code.zip: Python source code to perform the experiments.
  evaluate.py: Script to execute the experiments (uncomment lines to select the embedding model).
  socrag/*: Source code for the RAG.
  benchmark/*: RestBench specification.
results.zip: Results of the RAG experiments (in the folder /results/data/ inside the zip file; metric computation sketched below).
  Experiment results for the RAG: results_{embedding_model}_{top-k}.json.
  Experiment results for the Discovery Agent: results_{embedding_model}_{agent}_{refinement}_{llm}.json.
  FAISS store (intermediate data required for exact reproduction of the results; one folder per embedding model): bge_small, nvidia, and oai.
  Intermediate data of the LLM-based refinement methods, required for exact reproduction of the results: *_parser.json.
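As a pointer for consuming the results files, the sketch below shows how retrieved endpoints can be scored against RestBench ground-truth endpoints to obtain the recall, precision, and F1 values reported above. The JSON field names ("gold_endpoints", "retrieved_endpoints") and the example record are hypothetical; the authoritative layout is the one written by evaluate.py in code.zip.

import json

def endpoint_metrics(gold: set[str], retrieved: set[str]) -> dict[str, float]:
    """Recall, precision, and F1 over the sets of ground-truth and retrieved endpoints."""
    hits = len(gold & retrieved)
    recall = hits / len(gold) if gold else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"recall": recall, "precision": precision, "f1": f1}

if __name__ == "__main__":
    # Hypothetical per-query record; the real records live in /results/data/ inside results.zip.
    record = {
        "gold_endpoints": ["GET /search/movie", "GET /movie/{movie_id}/credits"],
        "retrieved_endpoints": ["GET /search/movie", "GET /movie/{movie_id}", "GET /movie/{movie_id}/credits"],
    }
    print(json.dumps(endpoint_metrics(set(record["gold_endpoints"]),
                                      set(record["retrieved_endpoints"])), indent=2))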