
PyKEEN Benchmarking Experiment Model Files

From MaRDI portal



DOI: 10.5281/zenodo.7019181
Zenodo: 7019181
MaRDI QID: Q6723858

Dataset published in the Zenodo repository.

Author name not available.

Publication date: 24 August 2022



Model Weights

This repository provides the weights of the models from the benchmarking study conducted in "Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework", which have been upgraded to be compatible with PyKEEN 1.9. The weights are organized as zip files named by the dataset-interaction function configuration. For each of these combinations, we chose the best model according to validation Hits@10 to include in this repository.

For each model, we provide three files:

configuration.json contains the (pipeline) configuration used to train the model. It can be loaded as

```python
import json
import pathlib

configuration = json.loads(pathlib.Path("configuration.json").read_text())
```

Since the configuration is intended for the pipeline, we need some custom code to re-create the model without re-training it.

```python
from pykeen.datasets import get_dataset
from pykeen.models import ERModel, model_resolver

configuration = configuration["pipeline"]

# load the triples factory
dataset = get_dataset(
    dataset=configuration["dataset"],
    dataset_kwargs=configuration.get("dataset_kwargs", None),
)
model: ERModel = model_resolver.make(
    configuration["model"],
    configuration["model_kwargs"],
    triples_factory=dataset.training,
)
```

Note that this only creates the model instance; it does not load the weights yet.

state_dict.pt contains the weights, stored via torch.save. They can be loaded via

```python
import torch

state_dict = torch.load("state_dict.pt")
```

We can load these weights into the model by using Module.load_state_dict:

```python
model.load_state_dict(state_dict, strict=False)
```

Note that we set strict=False, since the exported weights do not contain the regularizers' state, while the re-instantiated models may have regularizers.

results.json contains the results obtained by the original runs. It can be read by

```python
import json
import pathlib

results = json.loads(pathlib.Path("results.json").read_text())
```

Note that some of the recently added metrics are not available in those results.
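Putting the pieces together, the three files can be combined into a single loading script. The sketch below is illustrative rather than part of the repository: the archive name fb15k237-transe.zip and the extraction directory are hypothetical placeholders, and it assumes PyKEEN 1.9 with its standard get_dataset, model_resolver, and score_hrt APIs.

```python
import json
import pathlib
import zipfile

import torch
from pykeen.datasets import get_dataset
from pykeen.models import ERModel, model_resolver

# Hypothetical archive name; substitute the dataset-interaction-function
# combination actually downloaded from this repository.
archive = pathlib.Path("fb15k237-transe.zip")
target = archive.with_suffix("")  # extraction directory, e.g. ./fb15k237-transe/
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

# 1. Read the pipeline configuration exported with the model.
configuration = json.loads((target / "configuration.json").read_text())["pipeline"]

# 2. Re-create the dataset and an untrained model instance of matching shape.
dataset = get_dataset(
    dataset=configuration["dataset"],
    dataset_kwargs=configuration.get("dataset_kwargs", None),
)
model: ERModel = model_resolver.make(
    configuration["model"],
    configuration["model_kwargs"],
    triples_factory=dataset.training,
)

# 3. Load the exported weights; strict=False tolerates the missing regularizer state.
state_dict = torch.load(target / "state_dict.pt")
model.load_state_dict(state_dict, strict=False)
model.eval()

# 4. Sanity check: score the first training triple with the restored model.
hrt_batch = dataset.training.mapped_triples[:1]
print(model.score_hrt(hrt_batch))

# 5. The original evaluation results ship alongside the weights.
results = json.loads((target / "results.json").read_text())
print(sorted(results))
```

The sanity check only demonstrates that the restored weights produce scores; to reproduce the reported numbers, the model would have to be evaluated on the dataset's test split as in the original study.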






This page was built for dataset: PyKEEN Benchmarking Experiment Model Files