ler.utils

Submodules

Package Contents

Classes

NumpyEncoder

Class for storing a numpy.ndarray or any nested-list composition as a JSON file. This is required for dealing with np.nan and np.inf.

TrainingDataGenerator

FunctionConditioning

Functions

is_njitted(func)

remove_file(file_name)

Remove a file.

load_pickle(file_name)

Load a pickle file.

save_pickle(file_name, param)

Save a dictionary as a pickle file.

load_hdf5(file_name)

Load a hdf5 file.

save_hdf5(file_name, param)

Save a dictionary as a hdf5 file.

load_json(file_name)

Load a json file.

save_json(file_name, param)

Save a dictionary as a json file.

append_json(file_name, new_dictionary[, ...])

Append (values with corresponding keys) and update a json file with a dictionary. There are four options:

concatenate_dict_values(dict1, dict2)

Adds the values of two dictionaries together.

get_param_from_json(json_file)

Function to get the parameters from json file.

load_txt_from_module(package, directory, filename)

rejection_sample(pdf, xmin, xmax[, size, chunk_size])

Helper function for rejection sampling from a pdf with maximum and minimum arguments.

rejection_sample2d(pdf, xmin, xmax, ymin, ymax[, ...])

Helper function for rejection sampling from a 2D pdf with maximum and minimum arguments.

add_dictionaries_together(dictionary1, dictionary2)

Adds two dictionaries with the same keys together.

trim_dictionary(dictionary, size)

Trims an event dictionary to the given size.

create_func_pdf_invcdf(x, y[, category])

Function to create an interpolated function, inverse function or inverse cdf from the input x and y.

create_conditioned_pdf_invcdf(x, conditioned_y, ...)

pdf_func is the function to calculate the pdf of x given y

create_func(x, y)

Function to create a spline interpolated function from the input x and y.

create_func_inv(x, y)

Function to create a spline interpolated inverse function from the input x and y.

create_pdf(x, y)

Function to create a spline interpolated normalized pdf from the input x and y.

create_inv_cdf_array(x, y)

Function to create a spline interpolated inverse cdf from the input x and y.

create_conditioned_pdf(x, conditioned_y, pdf_func)

Function to create a conditioned pdf from the input x and y.

create_conditioned_inv_cdf_array(x, conditioned_y, ...)

Function to create a conditioned inv_cdf from the input x and y.

interpolator_from_json(identifier_dict, directory, ...)

Function to decide which interpolator to use.

interpolator_json_path(identifier_dict, directory, ...)

Function to create the interpolator json file path.

batch_handler(size, batch_size, sampling_routine, ...)

Function to run the sampling in batches.

create_batch_params(sampling_routine, frac_batches, ...)

Helper function to batch_handler. It creates batch parameters and stores them in a dictionary.

monte_carlo_integration(function, uniform_prior[, size])

Function to perform Monte Carlo integration.

cubic_spline_interpolator(xnew, coefficients, x)

Function to interpolate using cubic spline.

pdf_cubic_spline_interpolator(xnew, norm_const, ...)

Function to interpolate pdf using cubic spline.

pdf_cubic_spline_interpolator2d_array(xnew_array, ...)

Function to calculate the interpolated value of snr_partialscaled given the mass ratio (ynew) and total mass (xnew). This is based on 2D bicubic spline interpolation.

cubic_spline_interpolator2d_array(xnew_array, ...)

Function to calculate the interpolated value of snr_partialscaled given the mass ratio (ynew) and total mass (xnew). This is based on 2D bicubic spline interpolation.

coefficients_generator_ler(y1, y2, y3, y4, z1, z2, z3, z4)

Function to generate the coefficients for the cubic spline interpolation of fn(y)=z.

inverse_transform_sampler2d(size, conditioned_y, ...)

Function to find sampler interpolator coefficients from the conditioned y.

inverse_transform_sampler(size, cdf, x)

Function to sample from the inverse transform method.

normal_pdf(x[, mean, std])

Calculate the value of a normal probability density function.

normal_pdf_2d(x, y[, mean_x, std_x, mean_y, std_y])

Calculate the value of a 2D normal probability density function.

cumulative_trapezoid(y[, x, dx, initial])

Compute the cumulative integral of a function using the trapezoidal rule.

sample_from_powerlaw_distribution(size, alphans, ...)

Inverse transform sampling for a power-law mass distribution:

param_plot([param_name, param_dict, plot_label, ...])

Function to plot the distribution of the GW source parameters.

relative_mu_dt_unlensed(param[, size, randomize])

Function to generate relative magnification vs time delay difference for unlensed samples.

relative_mu_dt_lensed(lensed_param[, pdet_threshold, ...])

Function to classify the lensed images with respect to the Morse phase difference.

mu_vs_dt_plot(x_array, y_array[, xscale, yscale, ...])

Function to generate 2D KDE and plot the relative magnification vs time delay difference for lensed samples.

redshift_optimal_spacing(z_min, z_max, resolution)

luminosity_distance([z, z_min, z_max, cosmo, ...])

Function to create a lookup table for the luminosity distance wrt redshift.

differential_comoving_volume([z, z_min, z_max, cosmo, ...])

comoving_distance([z, z_min, z_max, cosmo, directory, ...])

angular_diameter_distance([z, z_min, z_max, cosmo, ...])

angular_diameter_distance_z1z2([z1, z2, z_min, z_max, ...])

ler.utils.is_njitted(func)[source]
class ler.utils.NumpyEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: json.JSONEncoder

Class for storing a numpy.ndarray or any nested-list composition as a JSON file. This is required for dealing with np.nan and np.inf.

Parameters:
json.JSONEncoder : class

class for encoding JSON file

Returns:
json.JSONEncoder.default : function

function for encoding JSON file

default(obj)[source]

function for encoding JSON file
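
Examples

A minimal usage sketch, assuming only the json.JSONEncoder interface documented above (the exact handling of np.nan and np.inf is an implementation detail of ler):

>>> import json
>>> import numpy as np
>>> from ler.utils import NumpyEncoder
>>> data = {"zs": np.array([0.1, np.nan, np.inf])}
>>> json_str = json.dumps(data, cls=NumpyEncoder)  # ndarray is serialized as a nested list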

ler.utils.remove_file(file_name)[source]

Remove a file.

ler.utils.load_pickle(file_name)[source]

Load a pickle file.

Parameters:
file_name : str

pickle file name for storing the parameters.

Returns:
param : dict
ler.utils.save_pickle(file_name, param)[source]

Save a dictionary as a pickle file.

Parameters:
file_name : str

pickle file name for storing the parameters.

param : dict

dictionary to be saved as a pickle file.

ler.utils.load_hdf5(file_name)[source]

Load a hdf5 file.

Parameters:
file_name : str

hdf5 file name for storing the parameters.

Returns:
param : dict
ler.utils.save_hdf5(file_name, param)[source]

Save a dictionary as a hdf5 file.

Parameters:
file_name : str

hdf5 file name for storing the parameters.

param : dict

dictionary to be saved as a hdf5 file.

ler.utils.load_json(file_name)[source]

Load a json file.

Parameters:
file_name : str

json file name for storing the parameters.

Returns:
param : dict
ler.utils.save_json(file_name, param)[source]

Save a dictionary as a json file.

Parameters:
file_name : str

json file name for storing the parameters.

param : dict

dictionary to be saved as a json file.
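
Examples

A round-trip sketch using save_json and load_json as documented here (the file name is illustrative):

>>> from ler.utils import save_json, load_json
>>> param = {"zs": [0.1, 0.5, 1.2], "snr": [12.0, 8.5, 20.1]}
>>> save_json("./gw_params.json", param)          # write the dictionary to disk
>>> param_loaded = load_json("./gw_params.json")  # read it back as a dict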

ler.utils.append_json(file_name, new_dictionary, old_dictionary=None, replace=False)[source]

Append (values with corresponding keys) and update a json file with a dictionary. There are four options:

  1. If old_dictionary is provided, the values of the new dictionary will be appended to the old dictionary and saved in the ‘file_name’ json file.

  2. If replace is True, replace the json file (with the ‘file_name’) content with the new_dictionary.

  3. If the file does not exist, create a new one with the new_dictionary.

  4. If none of the above, append the new dictionary to the content of the json file.

Parameters:
file_name : str

json file name for storing the parameters.

new_dictionary : dict

dictionary to be appended to the json file.

old_dictionary : dict, optional

If provided, the values of the new dictionary will be appended to the old dictionary and saved in the ‘file_name’ json file. Default is None.

replace : bool, optional

If True, replace the json file with the dictionary. Default is False.
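
Examples

A sketch of options 2 and 4 described above (keys and values are illustrative; appended values are assumed to be concatenated key-wise):

>>> from ler.utils import append_json, load_json
>>> append_json("./gw_params.json", {"zs": [0.1, 0.5]}, replace=True)  # option 2: replace the file content
>>> append_json("./gw_params.json", {"zs": [1.2]})                     # option 4: append to the existing content
>>> param = load_json("./gw_params.json")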

ler.utils.concatenate_dict_values(dict1, dict2)[source]

Adds the values of two dictionaries together.

Parameters:
dict1 : dict

dictionary to be added.

dict2 : dict

dictionary to be added.

Returns:
dict1 : dict

dictionary with added values.
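
Examples

A sketch of the intended use, assuming values sharing a key are concatenated (e.g. numpy arrays of samples):

>>> import numpy as np
>>> from ler.utils import concatenate_dict_values
>>> dict1 = {"zs": np.array([0.1, 0.5])}
>>> dict2 = {"zs": np.array([1.2])}
>>> combined = concatenate_dict_values(dict1, dict2)  # values of dict2 appended to dict1, key by key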

ler.utils.get_param_from_json(json_file)[source]

Function to get the parameters from json file.

Parameters:
json_file : str

json file name for storing the parameters.

Returns:
param : dict
ler.utils.load_txt_from_module(package, directory, filename)[source]
ler.utils.rejection_sample(pdf, xmin, xmax, size=100, chunk_size=10000)[source]

Helper function for rejection sampling from a pdf with maximum and minimum arguments.

Parameters:
pdf : function

pdf function.

xmin : float

minimum value of the pdf.

xmax : float

maximum value of the pdf.

size : int, optional

number of samples. Default is 100.

chunk_size : int, optional

chunk size for sampling. Default is 10000.

Returns:
x_sample : numpy.ndarray

samples from the pdf.
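
Examples

A sketch with a simple, vectorized pdf; only the signature documented above is assumed:

>>> import numpy as np
>>> from ler.utils import rejection_sample
>>> pdf = lambda x: np.exp(-x)  # an un-normalized pdf is fine for rejection sampling
>>> samples = rejection_sample(pdf, xmin=0.0, xmax=10.0, size=1000)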

ler.utils.rejection_sample2d(pdf, xmin, xmax, ymin, ymax, size=100, chunk_size=10000)[source]

Helper function for rejection sampling from a 2D pdf with maximum and minimum arguments.

Parameters:
pdf : function

2D pdf function.

xmin : float

minimum value of the pdf in the x-axis.

xmax : float

maximum value of the pdf in the x-axis.

ymin : float

minimum value of the pdf in the y-axis.

ymax : float

maximum value of the pdf in the y-axis.

size : int, optional

number of samples. Default is 100.

chunk_size : int, optional

chunk size for sampling. Default is 10000.

Returns:
x_sample : numpy.ndarray

samples from the pdf in the x-axis.

ler.utils.add_dictionaries_together(dictionary1, dictionary2)[source]

Adds two dictionaries with the same keys together.

Parameters:
dictionary1 : dict

dictionary to be added.

dictionary2 : dict

dictionary to be added.

Returns:
dictionary : dict

dictionary with added values.

ler.utils.trim_dictionary(dictionary, size)[source]

Trims an event dictionary to the given size.

Parameters:
dictionary : dict

dictionary to be trimmed.

size : int

size to trim the dictionary to.

Returns:
dictionary : dict

trimmed dictionary.
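
Examples

A sketch, assuming each value is trimmed to its first size entries:

>>> import numpy as np
>>> from ler.utils import trim_dictionary
>>> events = {"zs": np.arange(10), "snr": np.arange(10.0)}
>>> trimmed = trim_dictionary(events, size=5)  # each array is cut down to 5 entries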

ler.utils.create_func_pdf_invcdf(x, y, category='function')[source]

Function to create an interpolated function, inverse function or inverse cdf from the input x and y.

Parameters:
x : numpy.ndarray

x values. This has to be sorted in ascending order.

y : numpy.ndarray

y values. Corresponding to the x values.

category : str, optional

category of the function. Default is “function”. Other options are “function_inverse”, “pdf” and “inv_cdf”.

Returns:
pdf : pdf function

interpolated pdf function.

inv_pdf : function inverse

interpolated inverse pdf function.

inv_cdf : function

interpolated inverse cdf.

ler.utils.create_conditioned_pdf_invcdf(x, conditioned_y, pdf_func, category)[source]

pdf_func is the function to calculate the pdf of x given y. Here x is an array and the output of pdf_func is an array; y is the condition. We consider the parameter plane of x and y.

Parameters:
x : numpy.ndarray

x values.

conditioned_y : numpy.ndarray

conditioned y values.

pdf_func : function

function to calculate the pdf of x given y.

category : str, optional

category of the function. Default is “function”. Other options are “function_inverse”, “pdf” and “inv_cdf”.

ler.utils.create_func(x, y)[source]

Function to create a spline interpolated function from the input x and y.

Parameters:
x : numpy.ndarray

x values.

y : numpy.ndarray

y values.

Returns:
c : numpy.ndarray

spline coefficients.

ler.utils.create_func_inv(x, y)[source]

Function to create a spline interpolated inverse function from the input x and y.

Parameters:
x : numpy.ndarray

x values.

y : numpy.ndarray

y values.

Returns:
c : numpy.ndarray

spline coefficients.

ler.utils.create_pdf(x, y)[source]

Function to create a spline interpolated normalized pdf from the input x and y.

Parameters:
x : numpy.ndarray

x values.

y : numpy.ndarray

y values.

Returns:
c : numpy.ndarray

spline coefficients.

ler.utils.create_inv_cdf_array(x, y)[source]

Function to create a spline interpolated inverse cdf from the input x and y.

Parameters:
x : numpy.ndarray

x values.

y : numpy.ndarray

y values.

Returns:
c : numpy.ndarray

spline coefficients.

ler.utils.create_conditioned_pdf(x, conditioned_y, pdf_func)[source]

Function to create a conditioned pdf from the input x and y.

Parameters:
x : numpy.ndarray

x values.

conditioned_y : numpy.ndarray

conditioned y values.

pdf_func : function

function to calculate the pdf of x given y.

Returns:
list_ : list

list of pdfs.

ler.utils.create_conditioned_inv_cdf_array(x, conditioned_y, pdf_func)[source]

Function to create a conditioned inv_cdf from the input x and y.

Parameters:
x : numpy.ndarray

x values.

conditioned_y : numpy.ndarray

conditioned y values.

pdf_func : function

function to calculate the pdf of x given y.

Returns:
list_ : list

list of inv_cdfs.

ler.utils.interpolator_from_json(identifier_dict, directory, sub_directory, name, x, pdf_func=None, y=None, conditioned_y=None, dimension=1, category='pdf', create_new=False)[source]

Function to decide which interpolator to use.

Parameters:
identifier_dict : dict

dictionary of identifiers.

directory : str

directory to store the interpolator.

sub_directory : str

sub-directory to store the interpolator.

name : str

name of the interpolator.

x : numpy.ndarray

x values.

pdf_func : function

function to calculate the pdf of x given y.

y : numpy.ndarray

y values.

conditioned_y : numpy.ndarray

conditioned y values.

dimension : int

dimension of the interpolator. Default is 1.

category : str

category of the function. Default is “pdf”.

create_new : bool

if True, create a new interpolator. Default is False.

Returns:
interpolator : function

interpolator function.

ler.utils.interpolator_json_path(identifier_dict, directory, sub_directory, interpolator_name)[source]

Function to create the interpolator json file path.

Parameters:
identifier_dict : dict

dictionary of identifiers.

directory : str

directory to store the interpolator.

sub_directory : str

sub-directory to store the interpolator.

interpolator_name : str

name of the interpolator.

Returns:
path_inv_cdf : str

path of the interpolator json file.

it_exist : bool

if True, the interpolator exists.

ler.utils.batch_handler(size, batch_size, sampling_routine, output_jsonfile, save_batch=True, resume=False, param_name='parameters')[source]

Function to run the sampling in batches.

Parameters:
size : int

number of samples.

batch_size : int

batch size.

sampling_routine : function

sampling function. It should have ‘size’ as input and return a dictionary.

output_jsonfile : str

json file name for storing the parameters.

save_batch : bool, optional

if True, save sampled parameters in each iteration. Default is True.

resume : bool, optional

if True, resume sampling from the last batch. Default is False.

param_name : str, optional

name of the parameter. Default is ‘parameters’.

Returns:
dict_buffer : dict

dictionary of parameters.
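
Examples

A sketch with a toy sampling routine; the routine name and returned keys are illustrative, and only the calling convention documented above (a ‘size’ argument, a dict returned) is assumed:

>>> import numpy as np
>>> from ler.utils import batch_handler
>>> def toy_sampler(size):
...     return {"zs": np.random.uniform(0.0, 5.0, size)}
>>> params = batch_handler(
...     size=50000,
...     batch_size=10000,
...     sampling_routine=toy_sampler,
...     output_jsonfile="./toy_params.json",
...     resume=False,
... )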

ler.utils.create_batch_params(sampling_routine, frac_batches, dict_buffer, save_batch, output_jsonfile, track_batches, resume=False)[source]

Helper function to batch_handler. It creates batch parameters and stores them in a dictionary.

Parameters:
sampling_routine : function

sampling function. It should have ‘size’ as input and return a dictionary.

frac_batches : int

batch size.

dict_buffer : dict

dictionary of parameters.

save_batch : bool

if True, save sampled parameters in each iteration.

output_jsonfile : str

json file name for storing the parameters.

track_batches : int

track the number of batches.

resume : bool, optional

if True, resume sampling from the last batch. Default is False.

Returns:
track_batches : int

track the number of batches.

ler.utils.monte_carlo_integration(function, uniform_prior, size=10000)[source]

Function to perform Monte Carlo integration.

Parameters:
function : function

function to be integrated.

uniform_prior : function

uniform prior function.

Returns:
integral : float

integral value.
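
Examples

A sketch, assuming uniform_prior is a function that draws size samples from the prior and the integral is estimated as the mean of the integrand over those samples:

>>> import numpy as np
>>> from ler.utils import monte_carlo_integration
>>> def integrand(x):
...     return x**2
>>> def uniform_prior(size):
...     return np.random.uniform(0.0, 1.0, size)
>>> estimate = monte_carlo_integration(integrand, uniform_prior, size=100000)  # ~1/3 for x**2 on [0, 1]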

ler.utils.cubic_spline_interpolator(xnew, coefficients, x)[source]

Function to interpolate using cubic spline.

Parameters:
xnew : numpy.ndarray

new x values.

coefficients : numpy.ndarray

coefficients of the cubic spline.

x : numpy.ndarray

x values.

Returns:
result : numpy.ndarray

interpolated values.
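
Examples

A sketch that pairs this evaluator with create_func, assuming the coefficients returned by create_func are in the layout expected here:

>>> import numpy as np
>>> from ler.utils import create_func, cubic_spline_interpolator
>>> x = np.linspace(0.0, 2.0 * np.pi, 100)
>>> y = np.sin(x)
>>> coeff = create_func(x, y)                         # spline coefficients (see create_func above)
>>> xnew = np.linspace(0.1, 6.0, 50)
>>> ynew = cubic_spline_interpolator(xnew, coeff, x)  # evaluate the spline on the new grid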

ler.utils.pdf_cubic_spline_interpolator(xnew, norm_const, coefficients, x)[source]

Function to interpolate pdf using cubic spline.

Parameters:
xnew : numpy.ndarray

new x values.

coefficients : numpy.ndarray

coefficients of the cubic spline.

x : numpy.ndarray

x values.

Returns:
result : numpy.ndarray

interpolated values.

ler.utils.pdf_cubic_spline_interpolator2d_array(xnew_array, ynew_array, norm_array, coefficients, x, y)[source]

Function to calculate the interpolated value of snr_partialscaled given the mass ratio (ynew) and total mass (xnew). This is based on 2D bicubic spline interpolation.

Parameters:
xnew_array : numpy.ndarray

Total mass of the binary.

ynew_array : numpy.ndarray

Mass ratio of the binary.

coefficients : numpy.ndarray

Array of coefficients for the cubic spline interpolation.

x : numpy.ndarray

Array of total mass values for the coefficients.

y : numpy.ndarray

Array of mass ratio values for the coefficients.

Returns:
result : float

Interpolated value of snr_partialscaled.

ler.utils.cubic_spline_interpolator2d_array(xnew_array, ynew_array, coefficients, x, y)[source]

Function to calculate the interpolated value of snr_partialscaled given the mass ratio (ynew) and total mass (xnew). This is based on 2D bicubic spline interpolation.

Parameters:
xnew_array : numpy.ndarray

Total mass of the binary.

ynew_array : numpy.ndarray

Mass ratio of the binary.

coefficients : numpy.ndarray

Array of coefficients for the cubic spline interpolation.

x : numpy.ndarray

Array of total mass values for the coefficients.

y : numpy.ndarray

Array of mass ratio values for the coefficients.

Returns:
result : float

Interpolated value of snr_partialscaled.

ler.utils.coefficients_generator_ler(y1, y2, y3, y4, z1, z2, z3, z4)[source]

Function to generate the coefficients for the cubic spline interpolation of fn(y)=z.

Parameters:
y1, y2, y3, y4, z1, z2, z3, z4: `float`

Values of y and z for the cubic spline interpolation.

Returns:
coefficients: numpy.ndarray

Coefficients for the cubic spline interpolation.

ler.utils.inverse_transform_sampler2d(size, conditioned_y, cdf2d, x2d, y1d)[source]

Function to find sampler interpolator coefficients from the conditioned y.

Parameters:
size: `int`

Size of the sample.

conditioned_y: `float`

Conditioned y value.

cdf2d: `numpy.ndarray`

2D array of cdf values.

x2d: `numpy.ndarray`

2D array of x values.

y1d: `numpy.ndarray`

1D array of y values.

Returns:
samples: numpy.ndarray

Samples of the conditioned y.

ler.utils.inverse_transform_sampler(size, cdf, x)[source]

Function to sample from the inverse transform method.

Parameters:
size : int

number of samples.

cdf : numpy.ndarray

cdf values.

x : numpy.ndarray

x values.

Returns:
samples : numpy.ndarray

samples from the cdf.
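
Examples

A sketch that builds a cdf on a grid and samples from it; all three helpers used below are documented in this module:

>>> import numpy as np
>>> from ler.utils import normal_pdf, cumulative_trapezoid, inverse_transform_sampler
>>> x = np.linspace(-1.0, 1.0, 1000)
>>> pdf = normal_pdf(x, mean=0.0, std=0.2)
>>> cdf = cumulative_trapezoid(pdf, x, initial=0.0)
>>> cdf = cdf / cdf[-1]                       # normalize so that the cdf ends at 1
>>> samples = inverse_transform_sampler(10000, cdf, x)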

ler.utils.normal_pdf(x, mean=0.0, std=0.05)[source]

Calculate the value of a normal probability density function.

Parameters:
x : float or numpy.ndarray

The value(s) at which to evaluate the PDF.

mean : float, optional

The mean of the normal distribution. Default is 0.

std : float, optional

The standard deviation of the normal distribution. Default is 0.05.

Returns:
pdf : float or numpy.ndarray

The probability density function value(s) at x.

ler.utils.normal_pdf_2d(x, y, mean_x=0.0, std_x=0.05, mean_y=0.0, std_y=0.05)[source]

Calculate the value of a 2D normal probability density function.

Parameters:
x : float

The x-coordinate for which the PDF is evaluated.

y : float

The y-coordinate for which the PDF is evaluated.

mean_x : float, optional

The mean of the normal distribution along the x-axis. Default is 0.

std_x : float, optional

The standard deviation of the normal distribution along the x-axis. Default is 0.05.

mean_y : float, optional

The mean of the normal distribution along the y-axis. Default is 0.

std_y : float, optional

The standard deviation of the normal distribution along the y-axis. Default is 0.05.

Returns:
float

The probability density function value at the given x and y coordinates.

ler.utils.cumulative_trapezoid(y, x=None, dx=1.0, initial=0.0)[source]

Compute the cumulative integral of a function using the trapezoidal rule.
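
Examples

A sketch, assuming this mirrors the behaviour of scipy.integrate.cumulative_trapezoid with an explicit initial value:

>>> import numpy as np
>>> from ler.utils import cumulative_trapezoid
>>> x = np.linspace(0.0, 1.0, 101)
>>> y = 3.0 * x**2                       # derivative of x**3
>>> F = cumulative_trapezoid(y, x, initial=0.0)  # F[-1] is close to 1.0, the integral of 3x^2 over [0, 1]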

ler.utils.sample_from_powerlaw_distribution(size, alphans, mminns, mmaxns)[source]

Inverse transform sampling for a power-law mass distribution: p(m) ∝ m^{-alphans}, m in [mminns, mmaxns]

Parameters:
size : int

Number of samples to generate.

alphans : float

Power-law index (alpha).

mminns : float

Minimum neutron star mass (lower bound).

mmaxns : float

Maximum neutron star mass (upper bound).

random_state : int, np.random.Generator, or None

Seed or random generator for reproducibility.

Returns:
m : ndarray

Array of sampled neutron star masses.
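
Examples

A sketch sampling neutron-star masses from p(m) ∝ m^{-alphans}; only the positional arguments shown in the signature above are used:

>>> from ler.utils import sample_from_powerlaw_distribution
>>> m = sample_from_powerlaw_distribution(size=10000, alphans=2.0, mminns=1.0, mmaxns=3.0)
>>> # all samples lie within [mminns, mmaxns]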

ler.utils.param_plot(param_name='zs', param_dict='./gw_params.json', plot_label='zs', param_min=None, param_max=None, kde=True, kde_bandwidth=0.2, histogram=True, histogram_bins=30)[source]

Function to plot the distribution of the GW source parameters.

Parameters:
param_name : str

name of the parameter to plot. default param_name = ‘zs’.

param_dict : dict or str

dictionary of GW source parameters or json file name. default param_dict = ‘./gw_params.json’.

param_xlabel : str

x-axis label. default param_xlabel = ‘source redshift’.

param_ylabel : str

y-axis label. default param_ylabel = ‘probability density’.

param_min : float

minimum value of the parameter. default param_min = None.

param_max : float

maximum value of the parameter. default param_max = None.

figsize : tuple

figure size. default figsize = (4, 4).

kde : bool

if True, kde will be plotted. default kde = True.

kde_bandwidth : float

bandwidth for kde. default kde_bandwidth = 0.2.

histogram : bool

if True, histogram will be plotted. default histogram = True.

histogram_bins : int

number of bins for histogram. default histogram_bins = 30.

Examples

>>> import matplotlib.pyplot as plt
>>> from ler.utils import param_plot
>>> from ler.rates import LeR
>>> ler = LeR(verbose=False)
>>> param = ler.unlensed_cbc_statistics();
>>> rate, param_detectable = ler.unlensed_rate()
>>> plt.figure(figsize=(6, 4))
>>> param_plot(param_name='zs', param_dict=param, plot_label='all events')
>>> param_plot(param_name='zs', param_dict=param_detectable, plot_label='detectable events')
>>> plt.xlabel('source redshift')
>>> plt.ylabel('probability density')
>>> plt.title('source redshift distribution')
>>> plt.grid(alpha=0.5)
>>> plt.savefig('source_redshift_distribution.png')
ler.utils.relative_mu_dt_unlensed(param, size=100, randomize=True)[source]

Function to generate relative magnification vs time delay difference for unlensed samples.

Parameters:
param : dict

dictionary of unlensed GW source parameters. unlensed_param.keys() = [‘m1’, ‘m2’, ‘z’, ‘snr’, ‘theta_jn’, ‘ra’, ‘dec’, ‘psi’, ‘phase’, ‘geocent_time’]

size : int

number of samples. default size = 100.

randomize : bool

if True, it will randomize the samples. default randomize = True.

Returns:
dmu : float.array

relative magnification: abs(mu2/mu1) or abs(dl1/dl2)**2.

dt : float.array

relative time delay: abs(t1-t2) in days.

ler.utils.relative_mu_dt_lensed(lensed_param, pdet_threshold=[0.5, 0.5], classification_type='morse_phase')[source]

Function to classify the lensed images with respect to the Morse phase difference.

Parameters:
lensed_param : dict

dictionary of lensed GW source parameters. lensed_param.keys() = [‘x_source’, ‘y_source’, ‘x0_image_position’, ‘x1_image_position’, ‘magnifications’, ‘time_delays’, ‘pdet_net’, ‘image_type’]

pdet_threshold : list

threshold for pdet_net to consider the image as detectable. default pdet_threshold = [0.5, 0.5].

classification_type : str

type of classification to be done. default classification_type = ‘morse_phase’. other options: ‘arrival_time’.

Returns:
dict

dictionary containing the relative magnification and time delay difference for the classified images.

if classification_type = ‘morse_phase’, the keys are:

  “dt_rel0”: np.array of relative time delay difference for images with morse phase difference = 0
  “mu_rel0”: np.array of relative magnification for images with morse phase difference = 0
  “dt_rel90”: np.array of relative time delay difference for images with morse phase difference = 90
  “mu_rel90”: np.array of relative magnification for images with morse phase difference = 90

if classification_type = ‘arrival_time’, the keys are:

  “dt_12”, “mu_12”: np.array of relative time delay difference and relative magnification for image 1 and image 2
  “dt_13”, “mu_13”: np.array of relative time delay difference and relative magnification for image 1 and image 3
  “dt_14”, “mu_14”: np.array of relative time delay difference and relative magnification for image 1 and image 4
  “dt_23”, “mu_23”: np.array of relative time delay difference and relative magnification for image 2 and image 3
  “dt_24”, “mu_24”: np.array of relative time delay difference and relative magnification for image 2 and image 4
  “dt_34”, “mu_34”: np.array of relative time delay difference and relative magnification for image 3 and image 4

ler.utils.mu_vs_dt_plot(x_array, y_array, xscale='log10', yscale='log10', alpha=0.6, extent=None, contour_levels=[10, 40, 68, 95], colors=['blue', 'blue', 'blue', 'blue', 'blue'])[source]

Function to generate 2D KDE and plot the relative magnification vs time delay difference for lensed samples.

Parameters:
x_array : float.array

x array.

y_array : float.array

y array.

xscale : str

x-axis scale. default xscale = ‘log10’. other options: ‘log’, None.

yscale : str

y-axis scale. default yscale = ‘log10’. other options: ‘log’, None.

alpha : float

transparency of the contour plot. default alpha = 0.6.

extent : list

extent of the plot. default extent = None. It will consider the full range of x_array and y_array.

contour_levels : list

levels for contour plot. default contour_levels = [10, 40, 68, 95].

colors : str

colors for contour plot. default colors = [‘blue’, ‘blue’, ‘blue’, ‘blue’, ‘blue’].

Examples

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from ler.utils import param_plot, mu_vs_dt_plot, get_param_from_json, relative_mu_dt_unlensed, relative_mu_dt_lensed
>>> # get the parameters. For data generation, refer to the 'LeR complete example' in the documentation.
>>> unlensed_param = get_param_from_json('ler_data/unlensed_param.json')
>>> unlensed_param_detectable = get_param_from_json('ler_data/unlensed_param_detectable.json')
>>> lensed_param = get_param_from_json('ler_data/lensed_param.json')
>>> lensed_param_detectable = get_param_from_json('ler_data/lensed_param_detectable.json')
>>> # get the relative mu and dt
>>> ans = relative_mu_dt_lensed(lensed_param_detectable)
>>> dmu, dt = relative_mu_dt_unlensed(unlensed_param_detectable, size=1000, randomize=True)
>>> # plot
>>> plt.figure(figsize=(4, 4))
>>> mu_vs_dt_plot(ans['dt_rel90'], ans['mu_rel90'], colors='b')
>>> mu_vs_dt_plot(ans['dt_rel0'], ans['mu_rel0'], colors='g')
>>> mu_vs_dt_plot(dt, dmu, colors='r')
>>> # Create proxy artists for legend
>>> proxy1 = plt.Line2D([0], [0], linestyle='-', color='b', label=r'Lensed ($\Delta \phi=90$)')
>>> proxy2 = plt.Line2D([0], [0], linestyle='-', color='g', label=r'Lensed ($\Delta \phi=0$)')
>>> proxy3 = plt.Line2D([0], [0], linestyle='-', color='r', label=r'Unlensed')
>>> plt.legend(handles=[proxy1, proxy2, proxy3], loc='upper left')
>>> plt.xlim(-5, 2.5)
>>> plt.ylim(-2.5, 2.5)
>>> plt.grid(alpha=0.4)
>>> plt.show()
class ler.utils.TrainingDataGenerator(npool=4, z_min=0.0, z_max=5.0, verbose=True, **kwargs)[source]
npool = '4'
z_min = '0.0'
z_max = '5.0'
verbose = 'True'
ler_init_args
gw_parameters_generator(size, batch_size=100000, snr_recalculation=True, trim_to_size=False, verbose=True, replace=False, data_distribution_range=[0, 2, 4, 6, 8, 10, 12, 14, 16, 100], output_jsonfile='gw_parameters.json')[source]
helper_data_distribution(gw_param, data_distribution_range)[source]
combine_dicts(file_name_list=None, path_list=None, detector='L1', parameter_list=['mass_1', 'mass_2', 'luminosity_distance', 'theta_jn', 'psi', 'geocent_time', 'ra', 'dec', 'a_1', 'a_2', 'tilt_1', 'tilt_2'], output_jsonfile='combined_data.json')[source]
delete_json_file(path_list)[source]
class ler.utils.FunctionConditioning(function=None, x_array=None, conditioned_y_array=None, y_array=None, non_zero_function=False, gaussian_kde=False, gaussian_kde_kwargs={}, identifier_dict={}, directory='./interpolator_json', sub_directory='default', name='default', create_new=False, create_function=False, create_function_inverse=False, create_pdf=False, create_rvs=False, multiprocessing_function=False, callback=None)[source]
info
callback = 'None'
__call__(*args)[source]
create_decision_function(create_function, create_function_inverse, create_pdf, create_rvs)[source]
create_gaussian_kde(x_array, y_array, gaussian_kde_kwargs)[source]
create_interpolator(function, x_array, conditioned_y_array, create_function_inverse, create_pdf, create_rvs, multiprocessing_function)[source]
create_z_array(x_array, function, conditioned_y_array, create_pdf, create_rvs, multiprocessing_function)[source]
cdf_values_generator(x_array, z_array, conditioned_y_array)[source]
pdf_norm_const_generator(x_array, function_spline, conditioned_y_array)[source]
function_spline_generator(x_array, z_array, conditioned_y_array)[source]
ler.utils.redshift_optimal_spacing(z_min, z_max, resolution)[source]
ler.utils.luminosity_distance(z=None, z_min=0.001, z_max=10.0, cosmo=LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=0.0, Neff=3.04, m_nu=None, Ob0=0.0), directory='./interpolator_json', create_new=False, resolution=500, get_attribute=True)[source]

Function to create a lookup table for the luminosity distance wrt redshift.

Parameters:
z : numpy.ndarray or float

Source redshifts

z_min : float

Minimum redshift of the source population

z_max : float

Maximum redshift of the source population

Attributes:
z_to_luminosity_distance : ler.utils.FunctionConditioning

Object of FunctionConditioning class containing the luminosity distance wrt redshift
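
Examples

A usage sketch, assuming that with the default get_attribute=True the function returns a callable FunctionConditioning object (see the class documented above):

>>> import numpy as np
>>> from ler.utils import luminosity_distance
>>> dl = luminosity_distance(z_min=0.001, z_max=10.0)  # build or load the lookup table
>>> dl_values = dl(np.array([0.5, 1.0, 2.0]))          # luminosity distances at the given redshifts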

ler.utils.differential_comoving_volume(z=None, z_min=0.001, z_max=10.0, cosmo=LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=0.0, Neff=3.04, m_nu=None, Ob0=0.0), directory='./interpolator_json', create_new=False, resolution=500, get_attribute=True)[source]
ler.utils.comoving_distance(z=None, z_min=0.001, z_max=10.0, cosmo=LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=0.0, Neff=3.04, m_nu=None, Ob0=0.0), directory='./interpolator_json', create_new=False, resolution=500, get_attribute=True)[source]
ler.utils.angular_diameter_distance(z=None, z_min=0.001, z_max=10.0, cosmo=LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=0.0, Neff=3.04, m_nu=None, Ob0=0.0), directory='./interpolator_json', create_new=False, resolution=500, get_attribute=True)[source]
ler.utils.angular_diameter_distance_z1z2(z1=None, z2=None, z_min=0.001, z_max=10.0, cosmo=LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=0.0, Neff=3.04, m_nu=None, Ob0=0.0), directory='./interpolator_json', create_new=False, resolution=500, get_attribute=True)[source]