Welcome to globalemu’s documentation!

globalemu: Robust and Fast Global 21-cm Signal Emulation

Introduction

globalemu:

Robust Global 21-cm Signal Emulation

Author:

Harry Thomas Jones Bevins

Version:

1.8.0

Homepage:

https://github.com/htjb/globalemu

Documentation:

https://globalemu.readthedocs.io/


Installation

The software can be pip installed from the PyPI repository via,

pip install globalemu

or alternatively it can be installed from the git repository via,

git clone https://github.com/htjb/globalemu.git # or the equivalent using ssh keys
cd globalemu
python setup.py install --user

Emulating the Global 21-cm Signal

globalemu is a fast and robust approach for emulating the Global or sky-averaged 21-cm signal and the associated neutral fraction history. In the MNRAS paper cited below we show that it is approximately 10² times faster and twice as accurate as the previous state of the art, 21cmGEM. The code is also flexible enough to be retrained on detailed simulations containing the most up to date physics. We release two trained networks, one for the Global signal and one for the neutral fraction history, details of which can be found in the MNRAS paper below.

You can download trained networks with the following code after pip installing or installing via the github repository:

from globalemu.downloads import download

download().model() # Redshift-Temperature Network
download(xHI=True).model() # Redshift-Neutral Fraction Network

which will produce two directories in your working directory, ‘T_release/’ and ‘xHI_release/’. Each contains the respective network model and the related pre- and post-processing files. You can then go on to evaluate each network for a set of parameters by running:

from globalemu.eval import evaluate

# [fstar, vc, fx, tau, alpha, nu_min, R_mfp]
params = [1e-3, 46.5, 1e-2, 0.0775, 1.25, 1.5, 30]

predictor = evaluate(base_dir='T_release/') # Redshift-Temperature Network
signal, z = predictor(params)

# note the parameter order is different for the neutral fraction emulator
# [fstar, vc, fx, nu_min, tau, alpha, R_mfp]
params = [1e-3, 46.5, 1e-2, 1.5, 0.0775, 1.25, 30]

predictor = evaluate(base_dir='xHI_release/') # Redshift-Neutral Fraction Network
signal, z = predictor(params)

The code can also be used to train a network on your own Global 21-cm signal or neutral fraction simulations using the built-in globalemu pre-processing techniques. There is some flexibility in the required astrophysical input parameters and the pre-processing steps, and more details about training your own network can be found in the documentation. A minimal sketch is given below.
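As a hedged sketch only (the directory names, redshift grid and network size below are placeholder assumptions; your simulations would need to be saved as ‘train_data.txt’/‘train_labels.txt’ and ‘test_data.txt’/‘test_labels.txt’ in the data directory):

import numpy as np
from globalemu.preprocess import process
from globalemu.network import nn

z = np.arange(5, 50.1, 0.1)   # redshift grid of your simulations (placeholder)
process('full', z, base_dir='my_model/', data_location='my_data/')
nn(batch_size=451, epochs=500, base_dir='my_model/', layer_sizes=[16, 16])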

globalemu GUI

globalemu also features a GUI that can be invoked from the command line and used to explore how the structure of the Global 21-cm signal varies with the values of the astrophysical inputs. The GUI needs a configuration file to run and this can be generated using a built in globalemu function. GUI configuration files can be generated for any trained model. For example, if we wanted to generate a configuration file for the released Global signal emulator we would run,

from globalemu.gui_config import config

paramnames = [r'$\log(f_*)$', r'$\log(V_c)$', r'$\log(f_X)$',
              r'$\tau$', r'$\alpha$', r'$\nu_\mathrm{min}$',
              r'$R_\mathrm{mfp}$']

config('T_release/', paramnames, 'data/')

where the directory ‘data/’ contains the training and testing data (in this case that corresponding to 21cmGEM).

The GUI can then be invoked from the terminal via,

globalemu /path/to/base_dir/T_release/etc/

An image of the GUI is shown below.

[Image: the globalemu graphical user interface]

The GUI can also be used to investigate the physics of the neutral fraction history by generating a configuration file for the released trained model, as sketched below. There is no need to specify that the configuration file is for a neutral fraction emulator.
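For example (a sketch assuming the released neutral fraction model has been downloaded to ‘xHI_release/’ and that ‘data/’ again contains the corresponding training and testing data; note the different parameter order used by the neutral fraction emulator):

from globalemu.gui_config import config

paramnames = [r'$\log(f_*)$', r'$\log(V_c)$', r'$\log(f_X)$',
              r'$\nu_\mathrm{min}$', r'$\tau$', r'$\alpha$',
              r'$R_\mathrm{mfp}$']

config('xHI_release/', paramnames, 'data/')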

Configuration files for the released models are provided.

Documentation

The documentation is available at: https://globalemu.readthedocs.io/

It can be compiled locally after downloading the repo and installing the relevant packages (see below) via,

cd docs
sphinx-build source html-build

You can find a tutorial notebook here.

T_release/ and xHI_release/

The currently released global signal trained model, T_release/ is trained on the same training data set as 21cmGEM which is available here. The data used to train the neutral fraction history network, xHI_release/ is not publicly available but comes from the same large scale simulations used to model the global signal.

For both models the input parameters and ranges are given below.

Parameter  Description                          T_release/    xHI_release/   Min        Max
                                                Input Order   Input Order
f*         Star Formation Efficiency            1             1              0.0001     0.5
Vc         Minimum Virial Circular Velocity     2             2              4.2 km/s   100 km/s
fx         X-ray Efficiency                     3             3              0          1000
tau        CMB Optical Depth                    4             5              0.04       0.17
alpha      Power of X-ray SED Slope             5             6              1.0        1.5
nu_min     Low Energy Cut Off of X-ray SED      6             4              0.1 keV    3 keV
R_mfp      Mean Free Path of Ionizing Photons   7             7              10.0 Mpc   50.0 Mpc
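As a hedged illustration, the ranges above can be used to sanity check a parameter set before evaluating the emulator (the snippet below is hypothetical, with parameters in the T_release/ input order):

ranges = [('f*', 1e-4, 0.5), ('Vc', 4.2, 100), ('fx', 0, 1000),
          ('tau', 0.04, 0.17), ('alpha', 1.0, 1.5),
          ('nu_min', 0.1, 3), ('R_mfp', 10.0, 50.0)]

params = [1e-3, 46.5, 1e-2, 0.0775, 1.25, 1.5, 30]
for value, (name, low, high) in zip(params, ranges):
    assert low <= value <= high, f'{name}={value} outside trained range'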

Licence and Citation

The software is free to use under the MIT open source license. If you use the software for academic purposes then we request that you cite the globalemu paper below.

MNRAS pre-print (referred to in the documentation as the globalemu paper),

Bevins, H. T. J., W. J. Handley, A. Fialkov, E. de Lera Acedo and K. Javid. “GLOBALEMU: A novel and robust approach for emulating the sky-averaged 21-cm signal from the cosmic dawn and epoch of reionisation.” (2021). arXiv:2104.04336

Below is the bibtex,

@article{Bevins2021,
      author = {{Bevins}, H.~T.~J. and {Handley}, W.~J. and {Fialkov}, A. and {de Lera Acedo}, E. and {Javid}, K.},
        title = "{GLOBALEMU: a novel and robust approach for emulating the sky-averaged 21-cm signal from the cosmic dawn and epoch of reionization}",
      journal = {\mnras},
        year = 2021,
        month = dec,
      volume = {508},
      number = {2},
        pages = {2923-2936},
          doi = {10.1093/mnras/stab2737},
archivePrefix = {arXiv},
      eprint = {2104.04336},
primaryClass = {astro-ph.CO},
      adsurl = {https://ui.adsabs.harvard.edu/abs/2021MNRAS.508.2923B},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Requirements

To run the code you will need the following additional packages:

When installing via pip or from source via setup.py the above packages will be installed if absent.

To compile the documentation locally you will need:

To run the test suite you will need:

Contributing

Contributions to globalemu are very much welcome and can be made via,

  • Opening an issue to report a bug/propose a new feature.

  • Making a pull request. Please consider opening an issue first to discuss any proposals and ensure the PR will be accepted.

globalemu Tutorial

This tutorial will show you the basics of training and evaluating an instance of globalemu. If you are just interested in evaluating the released models then take a look at the second part towards the bottom of the page. If you intend to work with neutral fraction histories then the framework for training and evaluating models is identical; you just need to pass the kwarg xHI=True to the pre-processing function, process(), and the model building function, nn(), discussed below.
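For example, a hedged sketch of the neutral fraction workflow (reusing the data_dir, base_dir, num and z variables defined in the training section below) would be:

from globalemu.preprocess import process
from globalemu.network import nn

process(num, z, base_dir=base_dir, data_location=data_dir, xHI=True)
nn(batch_size=451, epochs=10, base_dir=base_dir, xHI=True)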

The tutorial can also be found as a Jupyter notebook here.

Training an instance of globalemu

This tutorial will show you how to train a globalemu model on simulations of the Global 21-cm signal.

The first thing we need to do is download some 21-cm signal models to train our network on. For this we will use the 21cmGEM models and the following code.

import requests
import os
import numpy as np

data_dir = 'downloaded_data/'
if not os.path.exists(data_dir):
  os.mkdir(data_dir)

files = ['Par_test_21cmGEM.txt', 'Par_train_21cmGEM.txt', 'T21_test_21cmGEM.txt', 'T21_train_21cmGEM.txt']
saves = ['test_data.txt', 'train_data.txt', 'test_labels.txt', 'train_labels.txt']

for i in range(len(files)):
  url = 'https://zenodo.org/record/4541500/files/' + files[i]
  with open(data_dir + saves[i], 'wb') as f:
      f.write(requests.get(url).content)

In order for globalemu to work, the training data needs to be saved in data_dir in the files ‘train_data.txt’ and ‘train_labels.txt’, which are the inputs and outputs of the network respectively.
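As a quick optional sanity check (the shapes in the comment are what we expect for the 21cmGEM set: one row per model, 7 astrophysical parameters and 451 redshift samples):

train_data = np.loadtxt(data_dir + 'train_data.txt')
train_labels = np.loadtxt(data_dir + 'train_labels.txt')
print(train_data.shape, train_labels.shape)  # expect (N, 7) and (N, 451)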

Once the files have been downloaded we can go ahead and perform the preprocessing necessary for globalemu to effectively train a model. We do this with the process() function found in globalemu.preprocess.

from globalemu.preprocess import process

base_dir = 'results/'
z = np.linspace(5, 50, 451)
num = 1000

process(num, z, base_dir=base_dir, data_location=data_dir)

Since this tutorial is only meant to demonstrate how to train a model with the globalemu code, we are only going to pre-process and train on 1000 models out of a possible ~24000. We do this by setting num=1000 above; if we wanted to train on all the models we would set num='full'.

Importantly the pre-processing function takes the data in data_dir and saves a .csv file in the base_dir containing the preprocessed inputs for the neural network. It also saves some files used for normalisation in the base_dir so that when evaluating the network the inputs and outputs can be properly dealt with.

By default the network subtracts an astrophysics free baseline from the models and resamples the signals at a higher rate in regions of high variation across the training data. Both of these pre-processing techniques are detailed in the globalemu MNRAS preprint. Users can prevent this happening by passing the kwargs AFB=False and resampling=False to process() if required, as shown below.
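For instance (a sketch only, with the other arguments as in the call above):

process(num, z, base_dir=base_dir, data_location=data_dir,
        AFB=False, resampling=False)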

Once pre-processing has been performed we can train our network with the nn() function in globalemu.network.

from globalemu.network import nn

nn(batch_size=451, epochs=10, base_dir=base_dir, layer_sizes=[8])

nn() has a number of keyword arguments that can be passed if required. All are documented and all have default values. However, you will likely need to change things like base_dir, which tells the code where the pre-processed data is, and layer_sizes, which determines the network architecture. epochs sets the number of training epochs and the default will often be insufficient to fully train the network.

The code saves the model and loss history every ten epochs in case your computer crashes or the program is interrupted for some unforeseen reason. If this happens or you reach the max number of epochs and need to continue training you can do the following and the code will resume from the last save.

nn(batch_size=451, epochs=10, base_dir=base_dir, layer_sizes=[8], resume=True)

You have now successfully trained an instance of globalemu.

Evaluating an instance of globalemu

We can go ahead and evaluate the model using the testing data that we downloaded earlier.

test_data = np.loadtxt(data_dir + 'test_data.txt')
test_labels = np.loadtxt(data_dir + 'test_labels.txt')

With the data loaded we will look at how the model performs when predicting the first signal in the data set. We do this with the evaluate() class in globalemu.eval which takes in a set of parameters and returns a signal. The class must first, however, be initialised with a set of kwargs. We supply a base_dir which contains the pre-processed data, normalisation factors and trained model. You can also pass a redshift range with the z kwarg; however, if this isn’t supplied then the function will return the signal at the original redshifts that were used for training.

from globalemu.eval import evaluate

input_params = test_data[0, :]
true_signal = test_labels[0, :]

predictor = evaluate(base_dir=base_dir)
signal, z = predictor(input_params)

import matplotlib.pyplot as plt

plt.plot(z, true_signal, label='True Signal')
plt.plot(z, signal, label='Emulation')
plt.legend()
plt.ylabel(r'$\delta T$ [mK]')
plt.xlabel(r'$z$')
See the notebook (https://mybinder.org/v2/gh/htjb/globalemu/master?filepath=notebooks%2F) for the plot.

The emulation is pretty poor for several reasons; we didn’t run the training for long enough (only 20 epochs), the network size is small and we used very little of the available training data.

We can have a look at the same signal emulated with the released model on github. This was trained with a much more appropriately sized network, the full training data and a few hundred epochs. The results are therefore more similar to the true signal.

predictor = evaluate(base_dir='../T_release/')
signal, z = predictor(input_params)

plt.plot(z, true_signal, label='True Signal')
plt.plot(z, signal, label='Emulation')
plt.legend()
plt.ylabel(r'$\delta T$ [mK]')
plt.xlabel(r'$z$')
See the notebook (https://mybinder.org/v2/gh/htjb/globalemu/master?filepath=notebooks%2F) for the plot.

In addition to evaluating one model at a time a user can also evaluate a set of parameters using the emulator.

input_params = test_data[:5, :]
true_signal = test_labels[:5, :]

signal, z = predictor(input_params)

for i in range(len(true_signal)):
    if i==0:
        plt.plot(z, true_signal[i, :], c='k', ls='--', label='True Signal')
        plt.plot(z, signal[i, :], c='r', label='Emulation')
    else:
        plt.plot(z, true_signal[i, :], c='k')
        plt.plot(z, signal[i, :], c='r')
plt.legend()
plt.ylabel(r'$\delta T$ [mK]')
plt.xlabel(r'$z$')
See the notebook (https://mybinder.org/v2/gh/htjb/globalemu/master?filepath=notebooks%2F) for the plot.

Further Evaluation

The function globalemu.plotter.signal_plot() can also be used to assess the quality of emulation. This function is designed to plot the mean, 95th percentile and worst emulations, based on a given loss function, of a set of signals given their corresponding parameters.

from globalemu.eval import evaluate
from globalemu.plotter import signal_plot

predictor = evaluate(base_dir='../T_release/')

parameters = np.loadtxt('downloaded_data/test_data.txt')
labels = np.loadtxt('downloaded_data/test_labels.txt')

plotter = signal_plot(parameters, labels, 'rmse', predictor, '../T_release/',
    loss_label='RMSE = {:.4f} [mK]')

This particular example uses the 'rmse' loss function that is built into the emulator, but an alternative function can be provided by the user (see documentation for details). The graph that is produced gets saved in the provided base_dir, in this case '../T_release/', and looks like the figure below.

See the notebook (https://mybinder.org/v2/gh/htjb/globalemu/master?filepath=notebooks%2F) for the plot.

Downloading Trained Models

The released trained models can be directly downloaded from the github pages or a built in helper function can be used to download the models. The function can be called like so

from globalemu.downloads import download

download().model() # Redshift-Temperature Network
download(xHI=True).model() # Redshift-Neutral Fraction Network

which will download the released models into the present working directory as the directories T_release/ and xHI_release/.

globalemu Functions

globalemu.preprocess.process()

process() is used to preprocess the data in the provided directory using the techniques outlined in the globalemu paper. For process() to work it requires the testing and training data to be saved in the data_location directory in a specific manner. The “labels” or temperatures (network outputs) should be saved as “test_labels.txt”/”train_labels.txt” and the “data” or astrophysical parameters (network inputs excluding redshift) as “test_data.txt”/”train_data.txt”.
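For reference, an illustrative data_location directory laid out as described would look like:

data/
    train_data.txt     # astrophysical parameters (network inputs, excluding redshift)
    train_labels.txt   # global signals or temperatures (network outputs)
    test_data.txt
    test_labels.txt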

class globalemu.preprocess.process(num, z, **kwargs)[source]

Parameters:

num: int
The number of models that will be used to train globalemu. If you wish to use the full training data set then set num = 'full'.
z: np.array
The redshift range that corresponds to the models in the saved “test_labels.txt” and “train_labels.txt” e.g. for the 21cmGEM data this would be np.arange(5, 50.1, 0.1).

kwargs:

base_dir: string / default: ‘model_dir/’
The base_dir is where the preprocessed data and later the trained models will be placed. This should be thought of as the working directory as it will be needed when training a model and making evaluations of trained models.
data_location: string / default: ‘data/’
As discussed above, the data_location is where the data to be processed is found. It must be accurately provided for the code to work and must end in a ‘/’.
xHI: Bool / default: False
If True then globalemu will act as if it is training a neutral fraction history emulator.
AFB: Bool / default: None
If True then globalemu will calculate an astrophysics free baseline and subtract this from the training data signals. The AFB is specific to the global 21-cm signal and as globalemu is set up to emulate the global signal by default this parameter is set to True. If xHI is True then AFB is set to False by default.
std_division: Bool / default: None
If True then globalemu will divide the training data by the standard deviation across the training data. This is recommended when building an emulator to emulate the global signal and is set to True by default. If xHI is True then std_division is set to False by default.
resampling: Bool / default: None
Controls whether or not the signals will be resampled with higher sampling at regions of large variation in the training data set or not. Set to True by default as this is advised for training both neutral fraction and global signal emulators.
logs: list / default: [0, 1, 2]
The indices corresponding to the astrophysical parameters in “train_data.txt” that need to be logged. The default assumes that the first three columns in “train_data.txt” are \({f_*}\) (star formation efficiency), \({V_c}\) (minimum virial circular velocity) and \({f_x}\) (X-ray efficiency).
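As an illustration of the logs kwarg (the arguments here are placeholders), if only the first and third columns of “train_data.txt” needed to be logged you might call:

process(1000, z, base_dir='results/', data_location='downloaded_data/',
        logs=[0, 2])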

globalemu.network.nn()

nn() is used to train an instance of globalemu on the preprocessed data in base_dir. All of the parameters for nn() are kwargs and a number of them can be left at their default values however you will need to set the base_dir and possibly epochs and xHI (see below and the tutorial for details).

class globalemu.network.nn(**kwargs)[source]

kwargs:

batch_size: int / default: 100
The batch size used by tensorflow when performing training. It corresponds to the number of samples propagated before the network’s weights are updated. Keep the value ~100 as this will help with memory management and training speed.
epochs: int / default: 10
The number of epochs to train the network on. An epoch corresponds to training on x batches, where x is sufficiently large for every sample to have influenced an update of the network’s weights.
activation: string / default: ‘tanh’
The type of activation function used in the neural network’s hidden layers. The activation function affects the way that the network learns and updates its weights. The default is a commonly used activation for regression neural networks.
lr: float / default: 0.001
The learning rate acts as a “step size” in the optimization and its value can affect the quality of the emulation. Typical values fall in the range 0.001-0.1.
dropout: float / default: 0
The dropout for the neural network training. globalemu is designed so that you shouldn’t need dropout to prevent overfitting but we leave it as an option.
input_shape: int / default: 8
The number of input parameters (astrophysical parameters plus redshift) for the neural network. The default accounts for 7 astrophysical parameters and a single redshift input.
output_shape: int / default: 1
The number of outputs (temperature) from the neural network. This shouldn’t need changing.
layer_sizes: list / default: [input_shape, input_shape]
The number of hidden layers and the number of nodes in each layer. For example layer_sizes=[8, 8] will create two hidden layers both with 8 nodes (this is the default).
base_dir: string / default: ‘model_dir/’
This should be the same as the base_dir used when preprocessing. It contains the data that the network will work with and is the directory in which the trained model will be saved.
early_stop: Bool / default: False
If early_stop is set to True then the network will stop learning if the loss has not changed within the last twenty epochs.
xHI: Bool / default: False
If True then globalemu will act as if it is training a neutral fraction history emulator.
output_activation: string / default: ‘linear’
Determines the output activation function for the network. Modifying this is useful if the emulator output is required to be positive or negative etc. If xHI is True then the output activation is set to ‘relu’ else the function is ‘linear’. See the tensorflow documentation for more details on the types of activation functions available.
loss_function: Callable/ default: None
By default the code uses an MSE loss however users are able to pass their own loss functions when training the neural network. These should be functions that take in the true labels (temperatures) and the predicted labels and return some measure of loss. Care needs to be taken to ensure that the correct loss function is supplied when resuming the training of a previous run as globalemu will not check this. In order for the loss function to work it must be built using the tensorflow.keras backend. An example would be
from tensorflow.keras import backend as K

def custom_loss(true_labels, predicted_labels, network_inputs):
    return K.mean(K.abs(true_labels - predicted_labels))

The function must take in as arguments the true_labels, the predicted_labels and the network_inputs.

resume: Bool / default: False
If set to True then globalemu will look in the base_dir for a trained model and a loss_history.txt file (which contains the loss recorded at each epoch) and load these in to continue training. If resume is True then you need to make sure all of the kwargs are set with the same values that they had in the initial training for a consistent run. There will be a human readable file in base_dir called “kwargs.txt” detailing the values of the kwargs that were provided for the initial training run. Anything missing from this file will have had its default value. This file will not be overwritten if resume=True.
random_seed: int or float / default: None
This kwarg sets the random seed used by tensorflow with the function tf.random.set_seed(random_seed). It should be used if you want to have reproducible results but note that it may cause an ‘out of memory’ error if training on large amounts of data (see https://github.com/tensorflow/tensorflow/issues/37252).
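As a hedged usage sketch, a custom loss like the one defined under loss_function above would be passed as follows (the other arguments are placeholders):

nn(batch_size=451, epochs=100, base_dir='results/',
   layer_sizes=[16, 16], loss_function=custom_loss)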

globalemu.eval.evaluate()

evaluate() is used to make an evaluation of a trained instance of globalemu. It has to be initialised with a set of kwargs, most importantly the base_dir which contains the trained model. Once initialised it can then be used to make predictions and return the predicted signal plus the corresponding redshift. evaluate() can reproduce a high resolution Global 21-cm signal (450 redshift data points) in 1.5 ms.

class globalemu.eval.evaluate(**kwargs)[source]

The class can be initialised with the following kwargs and the following code

predictor = evaluate(**kwargs)

kwargs:

base_dir: string / default: ‘model_dir/’
The base_dir is where the trained model is saved.
model: tensorflow model / default: None
If making multiple calls to the function it is advisable to load the trained model in the script making the calls and then to pass it to evaluate(). This prevents the model being loaded upon each call and leads to a significant increase in speed. You can load a model via,
from tensorflow import keras

model = keras.models.load_model(
    base_dir + 'model.h5',
    compile=False)
logs: list / default: [0, 1, 2]
The indices corresponding to the astrophysical parameters that were logged during training. The default assumes that the first three columns in “train_data.txt” are \({f_*}\) (star formation efficiency), \({V_c}\) (minimum virial circular velocity) and \({f_x}\) (X-ray efficiency).
gc: Bool / default: False
Multiple calls to the function can cause runaway memory related issues (it is worth testing this behaviour before scheduling hpc jobs) and these memory issues can be somewhat alleviated by setting gc=True. This performs a garbage collection after every function call. It is an optional argument set to False by default because it can increase the time taken to perform the emulation.
z: list or np.array / default: Original redshift array
The redshift values at which you want to emulate the 21-cm signal. The default is given by the redshift range that the network was originally trained on (found in base_dir).

Once the class has been initialised you can then make evaluations of the emulator by passing the parameters like so

signal, z = predictor(parameters)

Parameters:

parameters: list or np.array
The combination of astrophysical parameters that you want to emulate a global signal for. They must be in the same order as was used when training and they must fall within the trained parameter space. For the 21cmGEM data the order of the astrophysical parameters is given by: \({f_*, V_c, f_x, \tau, \alpha, \nu_\mathrm{min}}\) and \({R_\mathrm{mfp}}\) (see the globalemu paper and references therein for a description of the parameters). You can pass a single set of parameters or a 2D array of different parameters to evaluate. For example, if you wanted to evaluate 100 sets of 7 parameters your input array should have shape=(100, 7).

Return:

signal: array or float
The emulated signal. If a single redshift is passed to the emulator then the returned signal will be a single float otherwise the result will be an array. If more than one set of parameters are input then the output signal will be an array of signals. e.g. 100 input sets of parameters gives signal.shape=(100, len(z)).
z: array or float
The redshift values corresponding to the returned signal. If z was not specified on input then the returned signal and redshifts will correspond to the redshifts that the network was originally trained on.
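A hedged illustration of the z kwarg and of batched evaluation described above (the paths, grid values and batch size are placeholders):

import numpy as np
from globalemu.eval import evaluate

predictor = evaluate(base_dir='T_release/', z=np.arange(10, 20.1, 0.1))
batch = np.loadtxt('downloaded_data/test_data.txt')[:100, :]  # shape (100, 7)
signal, z = predictor(batch)  # signal.shape == (100, len(z))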

globalemu.plotter.signal_plot()

This function can be used to assess the accuracy of emulation of a test data set given a trained model and produces a figure showing the mean, 95th percentile and worst emulations. Examples of these figures can be found in the MNRAS preprint. The figure will be saved in the provided 'base_dir/'.

class globalemu.plotter.signal_plot(parameters, labels, loss_type, predictor, base_dir, **kwargs)[source]

The class can be initialised with the following kwargs and the following code

plotter = signal_plot(parameters, labels, loss_type,
                      predictor, base_dir, **kwargs)

Parameters:

parameters: list or np.array
The astrophysical parameters corresponding to the testing data.
labels: list or np.array
The signals, corresponding to the input parameters, that we want to predict and subsequently plot the mean, 95th percentile and worst emulations of.
loss_type: str or function
The metric by which we want to assess the accuracy of emulation. The built in loss functions can be accessed by setting this variable to ‘rmse’, ‘mse’ or ‘GEMLoss’. Alternatively, a user defined callable function that takes in the labels and signals can also be provided.
predictor: globalemu.eval object
An instance of the globalemu eval class that will be used to make predictions of the labels from the input parameters.
base_dir: string / default: ‘model_dir/’
The base_dir is where the signal plot will be saved.

kwargs:

rtol: int or float / default: 1e-2
The relative accuracy with which the function finds a signal with a loss equal to the mean loss for all predictions.
atol: int or float / default: 1e-2
The absolute accuracy with which the function finds a signal with a loss equal to the mean loss for all predictions.
figsizex: int or float / default: 5
The size of the figure along the x axis to be passed to plt.subplots().
figsizey: int or float / default: 10
The size of the figure along the y axis to be passed to plt.subplots().
xHI: Bool / default: False
If True then globalemu will act as if it is evaluating a neutral fraction history emulator.
loss_label: string / default: ‘Loss = {:.3f}’
This kwarg can be used to adjust the loss labels in the plot legends. For example if we wanted precision in the 4th decimal place we can set loss_label= 'Loss = {:.4f}'. Equally if we wanted to change the name of the loss and add in units we can have loss_label= 'RMSE = {:.3f} mK'.

globalemu.gui_config.config()

This function can be used to generate a configuration file for the GUI that is specific to a given trained model. The file gets saved into the supplied base_dir which should contain the relevant trained model. The user also needs to supply a path to the data_dir that contains the relevant testing and training data. Additional arguments are described below.

A GUI config file is required to visualise the signals with the GUI and, once generated, the GUI can be run from the command line via,

globalemu /path/to/base_dir/containing/model/and/config/

class globalemu.gui_config.config(base_dir, paramnames, data_dir, **kwargs)[source]

Parameters:

base_dir: string
The path to the file containing the trained tensorflow model that the user wishes to visualise with the GUI. Must end in ‘/’.
paramnames: list of strings
This should be a list of parameter names in the correct input order. For example for the released global signal model this would correspond to
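paramnames = [r'$\log(f_*)$', r'$\log(V_c)$', r'$\log(f_X)$',
              r'$\tau$', r'$\alpha$', r'$\nu_\mathrm{min}$',
              r'$R_\mathrm{mfp}$']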

LaTeX strings can be provided as above.

data_dir: string
The file path to the training and test data which is used to set the y limits of the GUI graph and the ranges/intervals of the GUI sliders.

Kwargs:

logs: list / default: [0, 1, 2]
The indices corresponding to the astrophysical parameters that were logged during training. The default assumes that the first three columns in “train_data.txt” are \({f_*}\) (star formation efficiency), \({V_c}\) (minimum virial circular velocity) and \({f_x}\) (X-ray efficiency).
ylabel: string / default: ‘y’
y-axis label for the GUI plot.
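For example (illustrative only; the ylabel value below is an assumption appropriate for a global signal emulator, and paramnames is the list defined above):

config('T_release/', paramnames, 'data/', logs=[0, 1, 2],
       ylabel=r'$\delta T$ [mK]')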

globalemu.downloads.download()

download() can be used to download the released trained models for both the global signal and neutral fraction history emulators.

class globalemu.downloads.download(xHI=False)[source]

Parameters:

xHI: Bool / default: False
Setting this equal to True will cause the method model() to download the released neutral fraction history model rather than the released global signal network.