Welcome to ConIII’s documentation!¶
coniii.enumerate module¶
-
coniii.enumerate.
fast_logsumexp
(X, coeffs=None)¶ Simplified version of logsumexp to do correlation calculation in Ising equation files. Scipy’s logsumexp can be around 10x slower in comparison.
- X : ndarray
- Terms inside logs.
- coeffs : ndarray
- Factors in front of exponentials.
- float
- Value of magnitude of quantity inside log (the sum of exponentials).
- float
- Sign.
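For example, a minimal check of the two return values (a usage sketch; the numbers are placeholders and ConIII is assumed importable):
>>> import numpy as np
>>> from coniii.enumerate import fast_logsumexp
>>> X = np.array([-1., 0.5, 2.])
>>> coeffs = np.array([1., -1., 1.])
>>> logmag, sign = fast_logsumexp(X, coeffs)
>>> np.isclose(sign * np.exp(logmag), (coeffs * np.exp(X)).sum())
True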
-
coniii.enumerate.
get_3idx
(n)¶ Get binary 3D matrix with truth values where index values correspond to the index of all possible ijk parameters. We can do this by recognizing that the pattern along each plane in the third dimension is like the upper triangle pattern that just moves up and over by one block each cut lower into the box.
-
coniii.enumerate.
get_nidx
(k, n)¶ Get the kth order indices corresponding to all the states in which k elements are firing up out of n spins. The ordering corresponds to that returned by bin_states().
>>> print where(exact.get_3idx(4))
>>> print where(exact.get_nidx(3,4))
-
coniii.enumerate.
get_terms
(subix, prefix, binstate, br, ix0)¶ Spins are put in explicitly
-
coniii.enumerate.
get_terms01
(subix, prefix, binstate, br, ix0)¶ Specific to {0,1}.
-
coniii.enumerate.
get_terms11
(subix, prefix, binstate, br, ix0)¶ Specific to {-1,1}.
-
coniii.enumerate.
mp_fast_logsumexp
(X, coeffs=None)¶ fast_logsumexp for high precision numbers using mpmath.
- X : ndarray
- Terms inside logs.
- coeffs : ndarray
- Factors in front of exponentials.
- float
- Value of magnitude of quantity inside log (the sum of exponentials).
- float
- Sign.
-
coniii.enumerate.
pairwise
(n, sym=0, **kwargs)¶ Wrapper for writing pairwise maxent model (Ising) files.
- n : int
- System size.
- sym : int, 0
- Can be 0 or 1.
**kwargs
None
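A hedged usage sketch (the name and location of the generated equation file are determined by write_eqns and are not spelled out here):
>>> from coniii.enumerate import pairwise
>>> pairwise(3, sym=1)   # write the n=3 pairwise Ising equation file in the {-1,1} basis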
-
coniii.enumerate.
triplet
(n, sym=0, **kwargs)¶ Wrapper for writing triplet-order maxent model.
- n : int
- System size.
- sym : int, 0
- Can be 0 or 1.
**kwargs
None
-
coniii.enumerate.
write_eqns
(n, sym, corrTermsIx, suffix='', high_prec=False)¶ Create strings for writing out the equations and then write them to file.
TODO: This code needs some cleanup.
- n : int
- number of spins
- sym : int
- value of 1 will use {-1,1} formulation, 0 means {0,1}
- corrTermsIx : list of ndarrays
- Allows specification of arbitrary correlations to constrain using an index based structure. These should be index arrays as would be returned by np.where that specify which correlations to write down. Each consecutive array should specify a matrix of sequentially increasing dimension. [Nx1, NxN, NxNxN, …]
- suffix : str, ''
- high_prec : bool, False
-
coniii.enumerate.
write_py
(n, sym, contraintTermsIx, signs, expterms, Z, extra='', suffix='', high_prec=False)¶ Write out Ising equations for Python.
- n : int
- System size.
- contraintTermsIx : list of str
- signs : list of ndarray
- Sign for each term in the numerator when computing correlations.
- expterms : list of str
- Every single energy term.
- Z : str
- Energies for all states that will be put into partition function.
- extra : str, ‘’
- any extra lines to add at the end
- suffix : str, ''
- high_prec : bool, False
- If True, write version that uses mpmath for high precision calculations.
coniii.enumerate_potts module¶
-
class
coniii.enumerate_potts.
PythonFileWriterBase
(n, k)¶ Bases:
object
-
energy_terms_generator
()¶ Generator for iterating through all possible states and yield the energy expression as well as the configuration of spins.
-
write
(fname)¶ Write equations to file.
-
-
class
coniii.enumerate_potts.
SpecificFieldGenericCouplings
(n, k)¶ Bases:
coniii.enumerate_potts.PythonFileWriterBase
This version specifies a field for every distinct Potts state, but considers correlation averaged over all possible states as long as they agree.
When writing the equations, the fields are assumed to first increase by spin index then by state index (fields for the n spins for k=0 come first, then for k=1, etc.). The couplings come after in the conventional order of ij where j increases up to i before i is incremented.
-
energy_terms_generator
()¶ Generator for iterating through all possible states and yield the energy expression as well as the configuration of spins. The energy expression is returned as a string and assumes that the fields come first followed by the couplings.
-
-
coniii.enumerate_potts.
insert_newlines
(s, n)¶ Insert character every n in list.
-
coniii.enumerate_potts.
split_string
(s, n)¶ Insert character every n.
coniii.ising package¶
Submodules¶
coniii.ising.automaton module¶
-
class
coniii.ising.automaton.
Ising2D
(dim, J, h=0, rng=None)¶ Bases:
object
Simulation of the ferromagnetic Ising model on a 2D periodic lattice with quenched disorder in the local fields.
-
flip_metropolis
¶ Flip a single lattice spin using Metropolis sampling.
- i : int
- j : int
-
iterate
(n_iters, systematic=True)¶
- n_iters : int
- systematic : bool, True
- If True, iterate through each spin on the lattice in sequence.
-
-
coniii.ising.automaton.
coarse_grain
(lattice, factor)¶ Block spin renormalization with majority rule.
- lattice : ndarray
- +/-1
- factor : int
- renormalized_lattice : ndarray
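A short sketch of majority-rule blocking (here factor is assumed to be the linear block size, which is an interpretation, not something stated above):
>>> import numpy as np
>>> from coniii.ising.automaton import coarse_grain
>>> lattice = np.random.choice([-1, 1], size=(8, 8))   # placeholder spin configuration
>>> coarse = coarse_grain(lattice, 2)                  # majority vote within each block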
coniii.ising.test_automaton module¶
Module contents¶
coniii.solvers module¶
-
class
coniii.solvers.
ClusterExpansion
(sample, model=None, calc_observables=None, sample_size=1000, iprint=True, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Implementation of Adaptive Cluster Expansion for solving the inverse Ising problem, as described in John Barton and Simona Cocco, J. of Stat. Mech. P03002 (2013).
Specific to pairwise Ising constraints.
-
S
(cluster, coocMat, deltaJdict={}, useAnalyticResults=False, priorLmbda=0.0, numSamples=None)¶ Calculate pairwise entropy of cluster. (First fits pairwise Ising model.)
- cluster : list
- List of indices belonging to each cluster.
- coocMat : ndarray
- Pairwise correlations.
- deltaJdict : dict, {}
- useAnalyticResults : bool, False
- Probably want False until analytic formulas are changed to include prior on J.
- entropy : float
- Jfull : ndarray
- Matrix of couplings.
-
Sindependent
(cluster, coocMat)¶ Entropy approximation assuming that each cluster appears independently of the others.
- cluster : list
- coocMat : ndarray
- Pairwise correlations.
- float
- Sind, independent entropy.
- ndarray
- Pairwise couplings.
-
clusterID
(cluster)¶
-
deltaS
(cluster, coocMat, deltaSdict=None, deltaJdict=None, iprint=True, meanFieldRef=False, priorLmbda=0.0, numSamples=None, independentRef=False, meanFieldPriorLmbda=None)¶ - cluster : list
- List of indices in cluster
- coocMat : ndarray
- deltaSdict : dict, None
- deltaJdict : dict, None
- iprint : bool, True
- numSamples : int, None
- independentRef : bool, False
- If True, expand about independent entropy.
- meanFieldRef : bool, False
- If True, expand about mean field entropy.
- float
- deltaScluster
- float
- deltaJcluster
-
solve
(threshold, cluster=None, deltaSdict=None, deltaJdict=None, iprint=True, priorLmbda=0.0, numSamples=None, meanFieldRef=False, independentRef=True, veryVerbose=False, meanFieldPriorLmbda=None, full_output=False)¶
- threshold : float
- meanFieldRef : bool, False
- Expand about mean-field reference.
- independentRef : bool, True
- Expand about independent reference.
- priorLmbda : float, 0.
- Strength of non-interacting prior.
- meanFieldPriorLmbda : float, None
- Strength of non-interacting prior in mean field calculation (defaults to priorLmbda).
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
- float (optional, only if full_output=True)
- Estimated entropy.
- list (optional, only if full_output=True)
- List of clusters.
- dict (optional, only if full_output=True)
- deltaSdict
- dict (optional, only if full_output=True)
- deltaJdict
-
subsets
(thisSet, size, sort=False)¶ Given a list, returns a list of all unique subsets of that list with given size.
- thisSet : list
- size : int
- sort : bool, False
- list
- All subsets of given size.
-
-
class
coniii.solvers.
Enumerate
(sample=None, model=None, calc_observables=None, iprint=True, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Class for solving fully-connected inverse Ising model problem by enumeration of the partition function and then using gradient descent.
-
solve
(initial_guess=None, constraints=None, max_param_value=50, full_output=False, use_root=True, scipy_solver_kwargs={'method': 'krylov', 'options': {'fatol': 1e-13, 'xatol': 1e-13}})¶ Must specify either constraints (the correlations) or samples from which the correlations will be calculated using self.calc_observables. This routine by default uses scipy.optimize.root to find the solution. This is MUCH faster than the scipy.optimize.minimize routine which can be used instead.
If still too slow, try adjusting the accuracy.
If not converging, try increasing the max number of iterations.
If receiving Jacobian error (or some other numerical estimation error), parameter values may be too large for faithful evaluation. Try decreasing max_param_value.
- initial_guess : ndarray, None
- Initial starting guess for parameters. By default, this will start with all zeros if left unspecified.
- constraints : ndarray, None
- Can specify constraints directly instead of using the ones calculated from the sample. This can be useful when the pairwise correlations are known exactly. This will override the self.constraints data member.
- max_param_value : float, 50
- Absolute value of max parameter value. Bounds can also be set in the kwargs passed to the minimizer, in which case this should be set to None.
- full_output : bool, False
- If True, return output from scipy.optimize.minimize.
- use_root : bool, True
- If False, use scipy.optimize.minimize instead. This is typically much slower.
- scipy_solver_kwargs : dict, {‘method’:’krylov’, ‘options’:{‘fatol’:1e-13,’xatol’:1e-13}}
- High accuracy is slower. Although default accuracy may not be so good, lowering these custom presets will speed things up. Choice of the root finding method can also change runtime and whether a solution is found or not. Recommend playing around with different solvers and tolerances or getting a close approximation using a different method if solution is hard to find.
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
- dict, optional
- Output from scipy.optimize.root.
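A minimal end-to-end sketch (the random sample stands in for real data; the defaults described in Solver.basic_setup are assumed to set up the pairwise Ising model):
>>> import numpy as np
>>> from coniii.solvers import Enumerate
>>> from coniii.utils import vec2mat
>>> sample = np.random.choice([-1, 1], size=(200, 5))   # placeholder data
>>> solver = Enumerate(sample)
>>> multipliers = solver.solve()
>>> J = vec2mat(multipliers)   # fields on the diagonal, couplings off the diagonal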
-
-
class
coniii.solvers.
MCH
(sample, model=None, calc_observables=None, sample_size=1000, sample_method='metropolis', mch_approximation=None, iprint=True, sampler_kw={}, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Class for solving maxent problems using the Monte Carlo Histogram method.
Broderick, T., Dudik, M., Tkacik, G., Schapire, R. E. & Bialek, W. Faster solutions of the inverse pairwise Ising problem. arXiv 1-8 (2007).
-
estimate_jac
(eps=0.001)¶ Approximate the Jacobian using the MCH approximation.
eps : float, 1e-3
- jac : ndarray
- Jacobian is an n x n matrix where each row corresponds to the behavior of fvec wrt a single parameter.
-
learn_parameters_mch
(estConstraints, constraints, maxdlamda=1, maxdlamdaNorm=1, maxLearningSteps=50, eta=1)¶ - estConstraints : ndarray
- Constraints estimated from MCH approximation.
- constraints : ndarray
- maxdlamda : float, 1
- Max allowed magnitude for any element of dlamda vector before exiting.
- maxdlamdaNorm : float, 1
- Max allowed norm of dlamda vector before exiting.
- maxLearningSteps : int
- max learning steps before ending MCH
- eta : float, 1
- factor for changing dlamda
- ndarray
- MCH estimate for constraints from parameters lamda+dlamda.
-
solve
(initial_guess=None, constraints=None, tol=None, tolNorm=None, n_iters=30, burn_in=30, maxiter=10, custom_convergence_f=None, iprint=False, full_output=False, learn_params_kwargs={'eta': 1, 'maxdlamda': 1}, generate_kwargs={})¶ Solve for maxent model parameters using MCH routine.
- initial_guess : ndarray, None
- Initial starting point.
- constraints : ndarray, None
- For debugging! Vector of correlations to fit.
- tol : float, None
- Maximum error allowed in any observable.
- tolNorm : float, None
- Norm error allowed in found solution.
- n_iters : int, 30
- Number of iterations to make between samples in MCMC sampling.
- burn_in : int, 30
- Initial burn in from random sample when MC sampling.
- max_iter : int, 10
- Max number of iterations of MC sampling and MCH approximation.
- custom_convergence_f : function, None
Function for determining convergence criterion. At each iteration, this function should return the next set of learn_params_kwargs and optionally the sample size.
As an example:
    def learn_settings(i):
        '''Take in the iteration counter and set the maximum change allowed in any
        given parameter (maxdlamda) and the multiplicative factor eta, where
        d(parameter) = (error in observable) * eta.

        An additional option is to also return the sample size for that step by
        returning a tuple. Larger sample sizes are necessary for higher accuracy.
        '''
        if i < 10:
            return {'maxdlamda': 1, 'eta': 1}
        else:
            return {'maxdlamda': .05, 'eta': .05}
- iprint : bool, False
- full_output : bool, False
- If True, also return the errflag and error history.
- learn_params_kwargs : dict, {'maxdlamda':1, 'eta':1}
- generate_kwargs : dict, {}
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
- int
- Error flag: 0, converged within given criterion; 1, max iterations reached.
- ndarray
- Log of errors in matching constraints at each step of iteration.
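A hedged usage sketch combining the pieces above (placeholder data; the default Ising helper functions are assumed to be filled in when no model or mch_approximation is supplied):
>>> import numpy as np
>>> from coniii.solvers import MCH
>>> def learn_settings(i):
...     # loosen the MCH updates early on, tighten them later
...     return {'maxdlamda': 1, 'eta': 1} if i < 10 else {'maxdlamda': .05, 'eta': .05}
>>> sample = np.random.choice([-1, 1], size=(500, 5))   # placeholder data
>>> solver = MCH(sample, sample_size=1000)
>>> multipliers = solver.solve(maxiter=20, custom_convergence_f=learn_settings)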
-
-
class
coniii.solvers.
MCHIncompleteData
(*args, **kwargs)¶ Bases:
coniii.solvers.MCH
Class for solving maxent problems using the Monte Carlo Histogram method on incomplete data where some spins may not be visible.
Broderick, T., Dudik, M., Tkacik, G., Schapire, R. E. & Bialek, W. Faster solutions of the inverse pairwise Ising problem. arXiv 1-8 (2007).
- NOTE: This only works for Ising model.
- Not ready for release.
-
generate_samples
(n_iters, burn_in, uIncompleteStates=None, f_cond_sample_size=None, f_cond_sample_iters=None, sample_size=None, sample_method=None, initial_sample=None, run_regular_sampler=True, run_cond_sampler=True, disp=0, generate_kwargs={})¶ Wrapper around generate_samples_parallel() from available samplers.
- n_iters : int
- burn_in : int
- I think burn in is handled automatically in REMC.
- uIncompleteStates : list of unique states
- f_cond_sample_size : lambda function
- Given the number of hidden spins, return the number of samples to take.
- f_cond_sample_iters : lambda function
- Given the number of hidden spins, return the number of MC iterations to make.
- sample_size : int
- sample_method : str
- initial_sample : ndarray
- generate_kwargs : dict
-
learn_parameters_mch
(estConstraints, fullFraction, uIncompleteStates, uIncompleteStatesCount, maxdlamda=1, maxdlamdaNorm=1, maxLearningSteps=50, eta=1)¶ Update parameters with MCH step. Update is proportional to the difference between the observables and the predicted observables after a small change to the parameters. This is calculated from likelihood maximization, and for the incomplete data points this corresponds to the marginal probability distribution weighted with the number of corresponding data points.
- estConstraints : ndarray
- fullFraction : float
- Fraction of data points that are complete.
- uIncompleteStates : list-like
- Unique incomplete states in data.
- uIncompleteStatesCount : list-like
- Frequency of each unique data point.
- maxdlamda : float, 1
- maxdlamdaNorm : float, 1
- maxLearningSteps : int
- max learning steps before ending MCH
- eta : float, 1
- factor for changing dlamda
estimatedConstraints : ndarray
-
solve
(X=None, constraints=None, initial_guess=None, cond_sample_size=100, cond_sample_iters=100, tol=None, tolNorm=None, n_iters=30, burn_in=30, maxiter=10, disp=False, full_output=False, learn_params_kwargs={}, generate_kwargs={})¶ Solve for parameters using MCH routine.
- X : ndarray
- constraints : ndarray
- Constraints calculated from the incomplete data (accounting for missing data points).
- initial_guess : ndarray=None
- initial starting point
- cond_sample_size : int or function
- Number of samples to make for conditional distribution. If function is passed in, it will be passed number of missing spins and must return an int.
- cond_sample_iters : int or function
- Number of MC iterations to make between samples.
- tol : float=None
- maximum error allowed in any observable
- tolNorm : float
- norm error allowed in found solution
- n_iters : int=30
- Number of iterations to make between samples in MCMC sampling.
- burn_in : int=30
- disp : int=0
- 0, no output; 1, some detail; 2, most detail
- full_output : bool, False
- Return errflag and errors at each iteration if True.
- learn_params_kwargs : dict
- generate_kwargs : dict
- parameters : ndarray
- Found solution.
- errflag : int
- errors : ndarray
- Errors in matching constraints at each step of iteration.
-
class
coniii.solvers.
MPF
(sample, model=None, calc_observables=None, calc_de=None, adj=None, iprint=True, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
-
K
(Xuniq, Xcount, adjacentStates, params)¶ Compute objective function.
- Xuniq : ndarray
- (ndata x ndims) unique states that appear in the data
- Xcount : ndarray of int
- number of times that each unique state appears in the data
- adjacentStates : list of ndarray
- list of adjacent states for each given unique state
- params : ndarray
- parameters for computation of energy
K : float
-
list_adjacent_states
(Xuniq, all_connected)¶ Use self.adj to evaluate all adjacent states in Xuniq.
- Xuniq : ndarray
- all_connected : bool
- adjacentStates
-
logK
(Xuniq, Xcount, adjacentStates, params)¶ Compute log of objective function.
- Xuniq : ndarray
- (n_samples, n_dim) unique states that appear in the data
- Xcount : ndarray of int
- number of times that each unique state appears in the data
- adjacentStates : list of ndarray
- list of adjacent states for each given unique state
- params : ndarray
- parameters for computation of energy
logK : float
-
solve
(initial_guess=None, method='L-BFGS-B', full_output=False, all_connected=True, parameter_limits=100, solver_kwargs={'disp': False, 'ftol': 1e-15, 'maxiter': 100}, uselog=True)¶ Minimize MPF objective function using scipy.optimize.minimize.
- initial_guess : ndarray, None
- method : str, 'L-BFGS-B'
- Option for scipy.optimize.minimize.
- full_output : bool, False
- all_connected : bool, True
- Switch for summing over all states that data sets could be connected to or just summing over non-data states (second summation in Eq 10 in Sohl-Dickstein 2011).
- parameter_limits : float, 100
- Maximum allowed magnitude of any single parameter.
- solver_kwargs : dict, {‘maxiter’:100,’disp’:False,’ftol’:1e-15}
- For scipy.optimize.minimize.
- uselog : bool, True
- If True, calculate log of the objective function. This can help with numerical precision errors.
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
- dict (optional)
- Output from scipy.optimize.minimize returned if full_output is True.
-
static
worker_objective_task
(s, Xcount, adjacentStates, params, calc_e)¶
-
-
coniii.solvers.
MonteCarloHistogram
¶ alias of
coniii.solvers.MCH
-
class
coniii.solvers.
Pseudo
(sample, model=None, calc_observables=None, get_multipliers_r=None, calc_observables_r=None, k=2, iprint=True, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Pseudolikelihood approximation to solving the inverse Ising problem as described in Aurell and Ekeberg, PRL 108, 090201 (2012).
-
cond_hess
(r, X, Jr, pairCoocRhat=None)¶ Returns d^2 cond_log_likelihood / d Jri d Jrj, with shape (dimension of system)x(dimension of system)
Current implementation uses more memory for speed. For large sample size, it may make sense to break up differently if too much memory is being used.
Deprecated.
- pairCooc : ndarray, None
- Pass pair_cooc_mat(X) to speed calculation.
-
cond_jac
(r, X, Jr)¶ Returns d cond_log_likelihood / d Jr, with shape (dimension of system)
Deprecated.
-
cond_log_likelihood
(r, X, Jr)¶ Equals the conditional log likelihood -L_r.
Deprecated.
- r : int
- individual index
- X : ndarray
- binary matrix, (# X) x (dimension of system)
- Jr : ndarray
- (dimension of system) x (1)
float
-
pair_cooc_mat
(X)¶ Returns matrix of shape (self.n)x(# X)x(self.n).
For use with cond_hess.
Slow because I haven’t thought of a better way of doing it yet.
Deprecated.
-
pseudo_log_likelihood
(X, J)¶ TODO: Could probably be made more efficient.
Deprecated.
- X : ndarray
- binary matrix, (# of samples) x (dimension of system)
- J : ndarray
- (dimension of system) x (dimension of system) J should be symmetric
-
solve
(force_general=False, **kwargs)¶ Uses a general all-purpose optimization to solve the problem using functions defined in self.get_multipliers_r and self.calc_observables_r.
- force_general : bool, False
- If True, force use of “general” algorithm.
- initial_guess : ndarray, None
- Initial guess for the parameter values.
- solver_kwargs : dict, {}
- kwargs for scipy.minimize().
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
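A minimal sketch of the pseudolikelihood route (placeholder data; the default helper functions are assumed to target the pairwise Ising model):
>>> import numpy as np
>>> from coniii.solvers import Pseudo
>>> sample = np.random.choice([-1, 1], size=(500, 5))   # placeholder data
>>> solver = Pseudo(sample)
>>> multipliers = solver.solve()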
-
-
class
coniii.solvers.
RegularizedMeanField
(sample, model=None, calc_observables=None, sample_size=1000, iprint=False, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Implementation of regularized mean field method for solving the inverse Ising problem, as described in Daniels, Bryan C., David C. Krakauer, and Jessica C. Flack. ``Control of Finite Critical Behaviour in a Small-Scale Social System.’’ Nature Communications 8 (2017): 14301. doi:10.1038/ncomms14301
Specific to pairwise Ising constraints.
-
bracket1d
(xList, funcList)¶ Assumes xList is monotonically increasing
Get bracketed interval (a,b,c) with a < b < c, and f(b) < f(a) and f(c). (Choose b and c to make f(b) and f(c) as small as possible.)
If minimum is at one end, raise error.
-
solve
(n_grid_points=200, min_size=0, reset_rng=True, min_covariance=False, min_independent=True, cooc_cov=None, priorLmbda=0.0, bracket=None)¶ Varies the strength of regularization on the mean field J to best fit given cooccurrence data.
- n_grid_points : int, 200
- If bracket is given, first test at n_grid_points points evenly spaced in the bracket interval, then give the lowest three points to scipy.optimize.minimize_scalar
- min_size : int, 0
- Use a modified model in which samples with fewer ones than min_size are not allowed.
- reset_rng: bool, True
- Reset random number generator seed before sampling to ensure that objective function does not depend on generator state.
- min_covariance : bool, False
- ** As of v1.0.3, not currently supported ** Minimize covariance from empirical frequencies (see notes); trying to avoid biases, as inspired by footnote 12 in TkaSchBer06
- min_independent : bool, True
- ** As of v1.0.3, min_independent is the only mode currently supported ** Each <xi> and <xi xj> residual is treated as independent
- cooc_cov : ndarray, None
- ** As of v1.0.3, not currently supported ** Provide a covariance matrix for residuals. Should typically be coocSampleCovariance(samples). Only used if min_covariance and min_independent are False.
- priorLmbda : float,0.
- ** As of v1.0.3, not currently implemented ** Strength of noninteracting prior.
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
-
-
class
coniii.solvers.
Solver
¶ Bases:
object
Base class for declaring common methods and attributes for inverse maxent algorithms.
-
basic_setup
(sample_or_n=None, model=None, calc_observables=None, iprint=True, model_kwargs={})¶ General routine for setting up a Solver instance.
- sample_or_n : ndarray or int, None
If ndarray, of dimensions (samples, dimension).
If int, specifies system size.
If None, many of the default class members cannot be set and then must be set manually.
- model : class like one from models.py, None
- By default, will be set to solve Ising model.
- calc_observables : function, None
- For calculating observables from a set of samples.
- iprint : str, True
- If empty, do not display warning messages.
- model_kwargs : dict, {}
- Additional arguments that will be passed to Ising class. These only matter if model is None. Important ones include “n_cpus” and “rng”.
-
fill_in
(x, fill_value=0)¶ Helper function for filling in missing parameter values.
- x : ndarray
- fill_value : float, 0
- ndarray
- With missing entries filled in.
-
set_insertion_ix
()¶ Calculate indices to fill in with zeros to “fool” code that takes full set of params.
-
solve
()¶ To be defined in derivative classes.
-
-
class
coniii.solvers.
SparseEnumerate
(sample=None, model=None, calc_observables=None, parameter_ix=None, iprint=True, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Class for solving Ising model with a sparse parameter set by enumeration of the partition function and then using gradient descent. Unspecified parameters are implicitly fixed to be zero, which corresponds to leaving the corresponding correlation function unconstrained.
-
solve
(initial_guess=None, constraints=None, max_param_value=50, full_output=False, use_root=True, scipy_solver_kwargs={'method': 'krylov', 'options': {'fatol': 1e-13, 'xatol': 1e-13}})¶ Must specify either constraints (the correlations) or samples from which the correlations will be calculated using self.calc_observables. This routine by default uses scipy.optimize.root to find the solution. This is MUCH faster than the scipy.optimize.minimize routine which can be used instead.
If still too slow, try adjusting the accuracy.
If not converging, try increasing the max number of iterations.
If receiving Jacobian error (or some other numerical estimation error), parameter values may be too large for faithful evaluation. Try decreasing max_param_value.
- initial_guess : ndarray, None
- Initial starting guess for parameters. By default, this will start with all zeros if left unspecified.
- constraints : ndarray, None
- Can specify constraints directly instead of using the ones calculated from the sample. This can be useful when the pairwise correlations are known exactly. This will override the self.constraints data member.
- max_param_value : float, 50
- Absolute value of max parameter value. Bounds can also be set in the kwargs passed to the minimizer, in which case this should be set to None.
- full_output : bool, False
- If True, return output from scipy.optimize.minimize.
- use_root : bool, True
- If False, use scipy.optimize.minimize instead. This is typically much slower.
- scipy_solver_kwargs : dict, {‘method’:’krylov’, ‘options’:{‘fatol’:1e-13,’xatol’:1e-13}}
- High accuracy is slower. Although default accuracy may not be so good, lowering these custom presets will speed things up. Choice of the root finding method can also change runtime and whether a solution is found or not. Recommend playing around with different solvers and tolerances or getting a close approximation using a different method if solution is hard to find.
- ndarray
- Solved multipliers (parameters).
- dict, optional
- Output from scipy.optimize.root.
-
-
class
coniii.solvers.
SparseMCH
(sample, model=None, calc_observables=None, sample_size=1000, sample_method='metropolis', mch_approximation=None, parameter_ix=None, iprint=True, sampler_kw={}, **default_model_kwargs)¶ Bases:
coniii.solvers.Solver
Class for solving maxent problems on sparse constraints using the Monte Carlo Histogram method.
See MCH class.
-
learn_parameters_mch
(estConstraints, constraints, maxdlamda=1, maxdlamdaNorm=1, maxLearningSteps=50, eta=1)¶ - estConstraints : ndarray
- Constraints estimated from MCH approximation.
- constraints : ndarray
- maxdlamda : float, 1
- Max allowed magnitude for any element of dlamda vector before exiting.
- maxdlamdaNorm : float, 1
- Max allowed norm of dlamda vector before exiting.
- maxLearningSteps : int
- max learning steps before ending MCH
- eta : float, 1
- factor for changing dlamda
- ndarray
- MCH estimate for constraints from parameters lamda+dlamda.
-
solve
(initial_guess=None, constraints=None, tol=None, tolNorm=None, n_iters=30, burn_in=30, maxiter=10, custom_convergence_f=None, iprint=False, full_output=False, learn_params_kwargs={'eta': 1, 'maxdlamda': 1}, generate_kwargs={})¶ Solve for maxent model parameters using MCH routine.
- initial_guess : ndarray, None
- Initial starting point.
- constraints : ndarray, None
- For debugging! Vector of correlations to fit.
- tol : float, None
- Maximum error allowed in any observable.
- tolNorm : float, None
- Norm error allowed in found solution.
- n_iters : int, 30
- Number of iterations to make between samples in MCMC sampling.
- burn_in : int, 30
- Initial burn in from random sample when MC sampling.
- max_iter : int, 10
- Max number of iterations of MC sampling and MCH approximation.
- custom_convergence_f : function, None
Function for determining convergence criterion. At each iteration, this function should return the next set of learn_params_kwargs and optionally the sample size.
As an example:
    def learn_settings(i):
        '''Take in the iteration counter and set the maximum change allowed in any
        given parameter (maxdlamda) and the multiplicative factor eta, where
        d(parameter) = (error in observable) * eta.

        An additional option is to also return the sample size for that step by
        returning a tuple. Larger sample sizes are necessary for higher accuracy.
        '''
        if i < 10:
            return {'maxdlamda': 1, 'eta': 1}
        else:
            return {'maxdlamda': .05, 'eta': .05}
- iprint : bool, False
- full_output : bool, False
- If True, also return the errflag and error history.
- learn_params_kwargs : dict, {'maxdlamda':1, 'eta':1}
- generate_kwargs : dict, {}
- ndarray
- Solved multipliers (parameters). For Ising problem, these can be converted into matrix format using utils.vec2mat.
- int
- Error flag: 0, converged within given criterion; 1, max iterations reached.
- ndarray
- Log of errors in matching constraints at each step of iteration.
-
-
coniii.solvers.
unwrap_self_worker_obj
(arg, **kwarg)¶
coniii.samplers module¶
-
class
coniii.samplers.
HamiltonianMC
(n, theta, calc_e, random_sample, grad_e=None, dt=0.01, leapfrogN=20, nCpus=0)¶ Bases:
coniii.samplers.Sampler
-
generate_samples
(nSamples, nBurn=100, fast=True, x0=None)¶ Generate nSamples from this Hamiltonian starting from random initial conditions for each sample.
-
sample
(x0, nBurn, saveHistory=False)¶ Get a single sample by MC sampling from this Hamiltonian. Slow method
-
-
class
coniii.samplers.
Heisenberg3DSampler
(J, calc_e, random_sample)¶ Bases:
coniii.samplers.Sampler
Simple MC Sampling from Heisenberg model with a lot of helpful functions.
Methods include generate_samples(), equilibrate_samples(), sample_metropolis(), and sample_energy_min().
-
equilibrate_samples
(samples, n_iters, method='mc', nCpus=0)¶
-
generate_samples
(nSamples, n_iters=100, **kwargs)¶ sample_size : int
-
grad_E
(X)¶ Gradient wrt theta and phi.
- X : ndarray
- with dims (nSpins,2) with angles theta and phi
-
sample_energy_min
(nFixed=0, rng=np.random.RandomState(), initialState=None, method='powell', **kwargs)¶ Find local energy minimum given state in angular form. Angular representation makes it easy to be explicit about constraints on the vectors.
- initialState : ndarray,None
- n_samples x n_features x 2
- nFixed : int,0
- Number of vectors that are fixed.
-
sample_metropolis
(oldState, E0)¶ - s : ndarray
- State to perturb randomly.
- energy : float
- Energy of configuration.
-
sample_nearby_sample
(X, **kwargs)¶ Randomly move given state around for new metropolis sample. Question is whether it is more efficient to push only one of the many vectors around or all of them simultaneously.
-
sample_nearby_vector
(v, nSamples=1, otheta=None, ophi=None, sigma=0.1)¶ Sample random vector that is nearby. It is important how you choose the width sigma. NOTE: code might be simplified by using arctan2 instead of arctan
- v : ndarray
- xyz vector about which to sample random vectors
- nSamples : int,1
- number of random samples
- otheta : float,None
- polar angle for v
- ophi : float,None
- azimuthal angle for v
- sigma : float,.1
- width of Gaussian about v
-
classmethod
to_dict
(data, names)¶ Convenience function taking 3d array of samples and arranging them into n x 3 arrays in a dictionary.
-
-
class
coniii.samplers.
Metropolis
(n, theta, calc_e=None, n_cpus=None, rng=None, boost=True, iprint=True)¶ Bases:
coniii.samplers.Sampler
-
generate_cond_samples
(sample_size, fixed_subset, burn_in=1000, n_cpus=None, initial_sample=None, systematic_iter=False, parallel=True)¶ Generate samples from conditional distribution (while a subset of the spins are held fixed). Samples are generated in parallel.
NOTE: There is a bug with multiprocess where many calls to the parallel sampling routine in a row lead to increasingly slow evaluation of the code.
- sample_size : int
- fixed_subset : list of duples
- Each duple is the index of the spin and the value to fix it at. These should be ordered by spin index.
- burn_in : int
- Burn in.
- n_cpus : int
- Number of cpus to use.
- initial_sample : ndarray
- Option to set initial random sample.
- systematic_iter : bool
- Iterate through spins systematically instead of choosing them randomly.
- parallel : bool
- If True, use parallelized routine.
- ndarray
- Samples from distribution.
- ndarray
- Energy of each sample.
-
generate_samples_boost
(sample_size, n_iters=1000, burn_in=None, systematic_iter=False)¶ Generate Metropolis samples using C++ and boost.
- sample_size : int
- Number of samples.
- n_iters : int, 1000
- Number of Metropolis iterations between samples.
- burn_in : int, None
- If not set, will be the same value as n_iters.
- systematic_iter : bool, False
- If True, iterate through each element of the system by incrementing the index by one.
- ndarray, optional
- Saved array of energies at each sampling step.
-
generate_samples_parallel_boost
(sample_size, n_iters=1000, burn_in=None, systematic_iter=False)¶ Generate samples in parallel. Each replica in self._samples runs on its own thread and a sample is generated every n_iters.
In order to control the random number generator, we pass in seeds that are samples from the class instance’s rng.
- sample_size : int
- Number of samples.
- n_iters : int, 1000
- Number of iterations between taking a random sample.
- burn_in : int, None
- If None, n_iters is used.
- systematic_iter : bool, False
- If True, iterate through spins systematically instead of choosing them randomly.
-
generate_samples_parallel_py
(sample_size, n_iters=1000, burn_in=None, initial_sample=None, systematic_iter=False)¶ Generate samples in parallel. Each replica in self._samples runs on its own thread and a sample is generated every n_iters.
In order to control the random number generator, we pass in seeds that are samples from the class instance’s rng.
- sample_size : int
- Number of samples.
- n_iters : int, 1000
- Number of iterations between taking a random sample.
- burn_in : int, None
- If None, n_iters is used.
- initial_sample : ndarray, None
- Starting set of replicas otherwise self._samples is used.
- systematic_iter : bool, False
- If True, iterate through spins systematically instead of choosing them randomly.
-
generate_samples_py
(sample_size, n_iters=1000, burn_in=None, systematic_iter=False, saveHistory=False, initial_sample=None)¶ Generate Metropolis samples using a for loop.
- sample_size : int
- Number of samples.
- n_iters : int, 1000
- Number of iterations to run the sampler for.
- burn_in : int, None
- If None, n_iters is used.
- systematic_iter : bool, False
- If True, iterate through each element of the system by incrementing the index by one.
- saveHistory : bool, False
- If True, also save the energy of each sample at each sampling step.
- initial_sample : ndarray, None
- Start with this sample (i.e. to avoid warming up). Otherwise, self._samples is the initial sample.
- ndarray, optional
- Saved array of energies at each sampling step.
-
random_sample
(n_samples)¶
-
sample_metropolis
(sample0, E0, rng=None, flip_site=None, calc_e=None)¶ Metropolis sampling given an arbitrary energy function.
- sample0 : ndarray
- Sample to start with. Passed by ref and changed.
- E0 : ndarray
- Initial energy of state.
- rng : np.random.RandomState
- Random number generator.
- flip_site : int
- Site to flip.
- calc_e : function
- If another function to calculate energy should be used
- float
- delta energy.
-
update_parameters
(theta)¶
-
-
class
coniii.samplers.
ParallelTempering
(n, theta, calc_e, n_replicas, Tbds=(1.0, 3.0), sample_size=1000, replica_burnin=None, rep_ex_burnin=None, n_cpus=None, rng=None)¶ Bases:
coniii.samplers.Sampler
-
burn_and_exchange
(pool)¶ pool : mp.multiprocess.Pool
-
burn_in_replicas
(pool=None, close_pool=True, n_iters=None)¶ Run each replica separately.
- pool : multiprocess.Pool, None
- close_pool : bool, True
- If True, call pool.close() at end.
- n_iters : int, None
- Default value is self.replicaBurnin.
-
generate_samples
(sample_size, save_exchange_trajectory=False)¶ Burn in, run replica exchange simulation, then sample.
- sample_size : int
- Number of samples to take for each replica.
- save_exchange_trajectory : bool, False
- If True, keep track of the location of each replica in beta space and return the history.
- ndarray, optional
- Trajectory of each replica through beta space. Each row tells where each index is located in beta space.
-
static
initialize_beta
(b0, b1, n_replicas)¶ Use linear interpolation of temperature range.
-
static
iterate_beta
(beta, acceptance_ratio)¶ Apply algorithm from Hukushima but reversed to maintain one replica at T=1.
- beta : ndarray
- Inverse temperature.
- acceptance_ratio : ndarray
- Estimate of acceptance ratio.
- ndarray
- New beta.
-
optimize_beta
(n_samples, n_iters, tol=0.01, max_iter=10)¶ Find suitable temperature range for replicas. Sets self.beta.
- n_samples : int
- Number of samples to use to estimate acceptance ratio. Acceptance ratio is estimated as the average of these samples.
- n_iters : int
- Number of sampling iterations for each replica.
- tol : float, .1
- Average change in beta to reach before stopping.
- max_iter : int, 10
- Number of times to iterate algorithm for beta. Each iteration involves sampling from replicas.
-
setup_replicas
()¶ Initialise a set of replicas at different temperatures using the Metropolis algorithm and optimize the temperatures. Replicas are burned in and ready to sample.
-
update_replica_parameters
()¶ Update parameters for each replica. Remember that the parameters include the factor of beta.
-
-
class
coniii.samplers.
Potts3
(n, theta, calc_e=None, n_cpus=None, rng=None, boost=True)¶ Bases:
coniii.samplers.Metropolis
-
generate_samples_parallel_boost
(sample_size, n_iters=1000, burn_in=None, systematic_iter=False)¶ Generate samples in parallel. Each replica in self._samples runs on its own thread and a sample is generated every n_iters.
In order to control the random number generator, we pass in seeds that are samples from the class instance’s rng.
- sample_size : int
- Number of samples.
- n_iters : int, 1000
- Number of iterations between taking a random sample.
- burn_in : int, None
- If None, n_iters is used.
- systematic_iter : bool, False
- If True, iterate through spins systematically instead of choosing them randomly.
-
random_sample
(n_samples)¶
-
sample_metropolis
(sample0, E0, rng=None, flip_site=None, calc_e=None)¶ Metropolis sampling given an arbitrary sampling function.
- sample0 : ndarray
- Sample to start with. Passed by ref and changed.
- E0 : ndarray
- Initial energy of state.
- rng : np.random.RandomState
- Random number generator.
- flip_site : int
- Site to flip.
- calc_e : function
- If another function to calculate energy should be used
- float
- delta energy.
-
-
class
coniii.samplers.
SWIsing
(n, theta, calc_e, nCpus=None, rng=None)¶ Bases:
coniii.samplers.Sampler
-
generate_sample
(n_samples, n_iters, initial_state=None)¶
- n_samples
- n_iters
- initial_state : ndarray, None
-
generate_sample_parallel
(n_samples, n_iters, initial_state=None, n_cpus=None)¶
- n_samples
- n_iters
- initial_state : ndarray, None
-
get_clusters
(state)¶ Get a random sample of clusters.
-
one_step
(state)¶
-
print_cluster_size
(n_iters)¶
-
randomly_flip_clusters
(state, clusters)¶
-
-
class
coniii.samplers.
Sampler
(n, theta, **kwargs)¶ Bases:
object
Base class for MCMC sampling.
-
generate_samples
(sample_size, **kwargs)¶ sample_size : int
-
generate_samples_parallel
(sample_size, **kwargs)¶ sample_size : int
-
sample_metropolis
(s, energy)¶ - s : ndarray
- State to perturb randomly.
- energy : float
- Energy of configuration.
-
update_parameters
(new_parameters)¶
-
-
class
coniii.samplers.
WolffIsing
(J, h)¶ Bases:
coniii.samplers.Sampler
-
build_cluster
(state, initialsite)¶ Grow cluster from initial site.
-
find_neighbors
(state, site, alreadyMarked)¶ Return neighbors of given site that need to be visited excluding sites that have already been visited. This is the implementation of the Wolff algorithm for finding neighbors such that detailed balance is satisfied. I have modified it to include random fields such that the probability of adding a neighbor depends both on its coupling with the current site and the neighbor's magnetic field.
- state
- site
- alreadyMarked
-
generate_sample
(samplesize, n_iters, initialSample=None, save_history=False)¶ Generate samples by starting from random initial states.
-
generate_sample_parallel
(samplesize, n_iters, initialSample=None)¶ Generate samples by starting from random or given initial states.
-
one_step
(state, initialsite=None)¶ Run one iteration of the Wolff algorithm that involves finding a cluster and possibly flipping it.
-
update_parameters
(J, h)¶
-
-
coniii.samplers.
calc_e
¶ Heisenberg model.
- theta : ndarray
- List of couplings Jij
- x : ndarray
- List of angles (theta_0,phi_0,theta_1,phi_1,…,theta_n,phi_n)
-
coniii.samplers.
check_e_logp
(sample, calc_e)¶ Boltzmann type model with discrete state space should have E propto -logP. Calculate these quantities for comparison.
sample calc_e
-
coniii.samplers.
cross
¶ Calculate the cross product of two 3d vectors.
-
coniii.samplers.
cross_
¶ Calculate the cross product of two 3d vectors.
-
coniii.samplers.
grad_e
¶ Derivatives wrt the angles of the spins.
-
coniii.samplers.
grad_e_theta
¶ Derivatives wrt the couplings theta.
-
coniii.samplers.
iter_cluster
(adj)¶ Cycle through all spins to get clusters.
-
coniii.samplers.
iterate_neighbors
¶ Iterate through all neighbors of a particular site and see if a bond should be formed between them.
- n : int
- System size.
- ix : ndarray of bool
- Indices of sites that have already been visited.
- expdJ : ndarray
- np.exp( -2*state[:,None]*state[None,:]*J )
- r : ndarray
- Array of random numbers.
-
coniii.samplers.
jit_sample
¶ Get a single sample by MC sampling from this Hamiltonian.
- theta : ndarray
- Parameters
- x0 : ndarray
- Sample
- nBurn : int
- dt : float
- leapfrogN : int
- randNormal : ndarray
- nBurn x ndim
- randUnif : ndarray
- nBurn
-
coniii.samplers.
jit_sample_nearby_vector
¶
-
coniii.samplers.
pairwise_prod
¶
-
coniii.samplers.
sample_bonds
¶ - p : ndarray
- Probability of bond formation.
- r : ndarray
- Random numbers.
- state
- J
-
coniii.samplers.
sample_ising
(multipliers, n_samples, n_cpus=None, seed=None, generate_samples_kw={})¶ Easy way to Metropolis sample from Ising model.
- multipliers : ndarray
- N individual fields followed by N(N-1)/2 pairwise couplings.
- n_samples : int
- Number of samples to take.
- n_cpus : int, None
- Number of cpus to use. If more than one, multiprocessing invoked.
- seed : int, None
- Random number generator seed.
- generate_samples_kw : dict, {}
- Any extra arguments to send into Metropolis sampler. Default args are
- n_iters=1000, systematic_iter=False, saveHistory=False, initial_sample=None
- ndarray
- Matrix of dimensions (n_samples, n).
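For instance, sampling a small Ising model (field and coupling values are placeholders):
>>> import numpy as np
>>> from coniii.samplers import sample_ising
>>> n = 3
>>> multipliers = np.concatenate([np.zeros(n), 0.5 * np.ones(n * (n - 1) // 2)])   # n fields, then n(n-1)/2 couplings
>>> X = sample_ising(multipliers, 1000, seed=0)
>>> X.shape
(1000, 3)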
-
coniii.samplers.
spec_cluster
(L, exact=True)¶ - L : ndarray
- Graph Laplacian
coniii.utils module¶
-
coniii.utils.
adj
¶ Return one-flip neighbors and a set of random neighbors. This is written to be used with the solvers.MPF class. Use adj_sym() if symmetric spins in {-1,1} are needed.
NOTE: For random neighbors, there is no check to make sure neighbors don’t repeat but this shouldn’t be a problem as long as state space is large enough.
- s : ndarray
- State whose neighbors are found. One-dimensional vector of spins.
- n_random_neighbors : int,0
- If >0, return this many random neighbors. Neighbors are just random states, but they are called “neighbors” because of the terminology in MPF. They can provide coupling from s to states that are very different, increasing the equilibration rate.
- neighbors : ndarray
- Each row is a neighbor. s.size + n_random_neighbors are returned.
-
coniii.utils.
adj_sym
¶ Symmetric version of adj() where spins are in {-1,1}.
-
coniii.utils.
base_repr
¶ Return decimal number in given base as list.
- i : int
- base : int
- list
-
coniii.utils.
bin_states
(n, sym=False)¶ Generate all possible binary spin states.
- n : int
- Number of spins.
- sym : bool
- If true, return states in {-1,1} basis.
v : ndarray
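For example (the number of states grows as 2**n, so this is only practical for small n):
>>> from coniii.utils import bin_states
>>> states = bin_states(3)                 # all 2**3 states in the {0,1} basis
>>> states.shape
(8, 3)
>>> sym_states = bin_states(3, sym=True)   # the same states in the {-1,1} basis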
-
coniii.utils.
calc_de
(s, i)¶ Calculate the derivative of the energy wrt parameters given the state and index of the parameter. In this case, the parameters are the concatenated vector of {h_i,J_ij}.
- s : ndarray
- Two-dimensional vector of spins where each row is a state.
i : int
- dE : float
- Derivative of hamiltonian with respect to ith parameter, i.e. the corresponding observable.
-
coniii.utils.
calc_overlap
(sample, ignore_zeros=False)¶ <si_a si_b> between all pairs of replicas a and b
- sample
- ignore_zeros : bool, False
Instead of normalizing by the number of spins, normalize by the minimum number of nonzero spins.
-
coniii.utils.
coarse_grain_with_func
(X, n_times, sim_func, coarse_func)¶ Iteratively coarse-grain X by combining pairs with the highest similarity. Both the function to measure similarity and to implement the coarse-graining must be supplied.
- X : ndarray
- Each col is a variable and each row is an observation (n_samples, n_system).
- n_times : int
- Number of times to coarse grain.
- sim_func : function
- Takes an array like X and returns a vector of ncol*(ncol-1)//2 pairwise similarities.
- coarse_func : function
- Takes a two col array and returns a single vector.
- ndarray
- Coarse-grained version of X.
- list of lists of ints
- Each list specifies which columns of X have been coarse-grained into each col of the coarse X.
-
coniii.utils.
convert_corr
(si, sisj, convert_to, concat=False, **kwargs)¶ Convert single spin means and pairwise correlations between {0,1} and {-1,1} formulations.
- si : ndarray
- Individual means.
- sisj : ndarray
- Pairwise correlations.
- convert_to : str
- ‘11’ will convert {0,1} formulation to +/-1 and ‘01’ will convert +/-1 formulation to {0,1}
- concat : bool, False
- If True, return concatenation of means and pairwise correlations.
- ndarray
- Averages <si>. Converted to appropriate basis. Returns concatenated vector <si> and <sisj> if concat is True.
- ndarray, optional
- Pairwise correlations <si*sj>. Converted to appropriate basis.
-
coniii.utils.
convert_params
(h, J, convert_to, concat=False)¶ Convert Ising model fields and couplings from {0,1} basis to {-1,1} and vice versa.
- h : ndarray
- Fields.
- J : ndarray
- Couplings.
- convert_to : str
- Either ‘01’ or ‘11’.
- concat : bool, False
- If True, return a vector concatenating fields and couplings.
- ndarray
- Mean bias h vector. Concatenated vector of h and J if concat is True.
- ndarray, optional
- Vector of J.
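A quick sketch converting a 3-spin system's parameters from the {0,1} basis to {-1,1} (values are placeholders; '11' is taken to mean conversion to the {-1,1} basis, as in convert_corr above):
>>> import numpy as np
>>> from coniii.utils import convert_params
>>> h01 = np.array([0.1, -0.2, 0.3])
>>> J01 = np.array([0.5, -0.1, 0.2])        # couplings for pairs (0,1), (0,2), (1,2)
>>> h11, J11 = convert_params(h01, J01, '11')
>>> hJ = convert_params(h01, J01, '11', concat=True)   # single concatenated vector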
-
coniii.utils.
define_ising_helper_functions
()¶ Functions for plugging into solvers for +/-1 Ising model with fields h_i and couplings J_ij.
- function
- calc_e
- function
- calc_observables
- function
- mch_approximation
-
coniii.utils.
define_ising_helper_functions_sym
()¶ Functions for plugging into solvers for +/-1 Ising model with couplings J_ij and no fields.
- function
- calc_e
- function
- calc_observables
- function
- mch_approximation
-
coniii.utils.
define_potts_helper_functions
(k)¶ Helper functions for calculating quantities in k-state Potts model.
- k : int
- Number of possible states.
- function
- calc_e
- function
- calc_observables
- function
- mch_approximation
-
coniii.utils.
define_pseudo_ising_helper_functions
(N)¶ Define helper functions for using Pseudo method on Ising model.
- N : int
- System size.
- function
- get_multipliers_r
- function
- calc_observables_r
-
coniii.utils.
define_pseudo_potts_helper_functions
(n, k)¶ Define helper functions for using Pseudo method on Potts model with simple form for couplings that are only nonzero when the spins are occupying the same state.
- n : int
- System size.
- k : int
- Number of possible configurations in Potts model.
- function
- get_multipliers_r
- function
- calc_observables_r
-
coniii.utils.
define_ternary_helper_functions
()¶
-
coniii.utils.
define_triplet_helper_functions
()¶
-
coniii.utils.
ind_to_sub
¶ Convert index from flattened upper triangular matrix to pair subindex.
- n : int
- Dimension size of square array.
- ix : int
- Index to convert.
- subix : tuple
- (i,j)
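For example, on the flattened upper triangle (excluding the diagonal) of a 4x4 matrix, pairing ind_to_sub with sub_to_ind below (the output shown assumes the usual row-major upper-triangle ordering):
>>> from coniii.utils import ind_to_sub, sub_to_ind
>>> ind_to_sub(4, 0)      # first flattened index maps to the pair (0, 1)
(0, 1)
>>> sub_to_ind(4, 0, 1)   # inverse mapping
0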
-
coniii.utils.
ising_convert_params
(oparams, convert_to, concat=False)¶ General conversion of parameters from 01 to 11 basis.
Take set of Ising model parameters up to nth order interactions in either {0,1} or {-1,1} basis and convert to other basis.
- oparams : tuple of lists
- Tuple of lists of interactions between spins starting with the lowest order interactions. Each list should consist of all interactions of that order such that the length of each list should be binomial(n,i) for all i starting with i>=1.
- convert_to : str
- concat : bool, False
- params : tuple of lists or list
- New parameters in order of lowest to highest order interactions to mean biases. Can all be concatenated together if concat switch is True.
-
coniii.utils.
k_corr
(X, k, weights=None, exclude_empty=False)¶ Calculate kth order correlations of spins.
- X : ndarray
- Dimensions (n_samples, n_dim).
- k : int
- Order of correlation function <s_{i_1} * s_{i_2} * … * s_{i_k}>.
- weights : np.ndarray, None :
- Calculate single and pairwise means given fractional weights for each state in the data such that a state only appears with some weight, typically less than one.
- exclude_empty : bool, False
- When using with {-1,1} basis, you can leave entries with 0 and those will not be counted for any pair. If True, the weights option doesn’t do anything.
- ndarray
- Kth order correlations <s_{i_1} * s_{i_2} * … * s_{i_k}>.
-
coniii.utils.
mat2vec
(multipliers)¶ Convert matrix form of Ising parameters to a vector.
This is specific to the Ising model.
- multipliers : ndarray
- Matrix of couplings with diagonal elements as fields.
- ndarray
- Vector of fields and couplings, respectively.
-
coniii.utils.
multinomial
(*args)¶
-
coniii.utils.
pair_corr
(X, weights=None, concat=False, exclude_empty=False, subtract_mean=False, laplace_count=False)¶ Calculate averages and pairwise correlations of spins.
- X : ndarray
- Dimensions (n_samples,n_dim).
- weights : float or np.ndarray or twople, None
- If an array is passed, it must be the length of the data and each data point will be given the corresponding weight. Otherwise, the two element tuple should contain the normalization for each mean and each pairwise correlation, in that order. In other words, the first array should be length {s_i} and the second length {si*s_j}.
- concat : bool, False
- Return means concatenated with the pairwise correlations into one array.
- exclude_empty : bool, False
- When using with {-1,1} basis, you can leave entries with 0 and those will not be counted for any pair. If True, the weights option doesn’t do anything.
- subtract_mean : bool, False
- If True, return pairwise correlations with product of individual means subtracted.
- laplace_count : bool, False
- twople
- (si,sisj) or np.concatenate((si,sisj))
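A minimal sketch computing the usual pairwise constraints from data (placeholder sample):
>>> import numpy as np
>>> from coniii.utils import pair_corr
>>> X = np.random.choice([-1, 1], size=(1000, 4))   # placeholder data
>>> si, sisj = pair_corr(X)
>>> sisj.size                                       # one entry per pair, n*(n-1)/2
6
>>> constraints = pair_corr(X, concat=True)         # means and correlations in one vector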
-
coniii.utils.
replace_diag
(mat, newdiag)¶ Replace diagonal entries of square matrix.
- mat : ndarray
- newdiag : ndarray
- ndarray
-
coniii.utils.
split_concat_params
(p, n)¶ Split parameters for Ising model that have all been concatenated together into a single list into separate lists. Assumes that the parameters are increasing in order of interaction and that all parameters are present.
p : list-like
- list of list-like
- Parameters increasing in order: (h, Jij, Kijk, … ).
-
coniii.utils.
state_probs
(v, allstates=None, weights=None, normalized=True)¶ Get probability of unique states. There is an option to allow for weighted counting.
- states : ndarray
- Sample of states on which to extract probabilities of unique configurations with dimensions (n_samples,n_dimension).
- allstates : ndarray, None
- Unique configurations to look for with dimensions (n_samples, n_dimension).
- weights : vector, None
- For weighted counting of each state given in allstate kwarg.
- normalized : bool, True
- If True, return probability distribution instead of frequency count.
- ndarray
- Vector of the probabilities of each state.
- ndarray
- All unique states found in the data. Each state is a row. Only returned if allstates kwarg is not provided.
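For example (placeholder data; with the defaults the probabilities are normalized and the unique states are returned alongside them):
>>> import numpy as np
>>> from coniii.utils import state_probs
>>> X = np.random.choice([0, 1], size=(1000, 3))   # placeholder data
>>> p, uniq = state_probs(X)
>>> np.isclose(p.sum(), 1)
True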
-
coniii.utils.
sub_to_ind
¶ Convert pair of coordinates of a symmetric square array into consecutive index of flattened upper triangle. This is slimmed down so it won’t throw errors like if i>n or j>n or if they’re negative. Only checking for if the returned index is negative which could be problematic with wrapped indices.
- n : int
- Dimension of square array
- i,j : int
- coordinates
int
-
coniii.utils.
unique_rows
(mat, return_inverse=False)¶ Return unique rows indices of a numeric numpy array.
- mat : ndarray
- return_inverse : bool
- If True, return inverse that returns back indices of unique array that would return the original array.
- u : ndarray
- Unique elements of matrix.
- idx : ndarray
- row indices of given mat that will give unique array
-
coniii.utils.
unravel_index
(ijk, n)¶ Unravel multi-dimensional index to flattened index but specifically for multi-dimensional analog of an upper triangular array (lower triangle indices are not counted).
- ijk : tuple
- Raveled index to unravel.
- n : int
- System size.
- ix : int
- Unraveled index.
-
coniii.utils.
vec2mat
(multipliers, separate_fields=False)¶ Convert vector of parameters containing fields and couplings to a matrix where the diagonal elements are the fields and the remaining elements are the couplings. Fields can be returned separately with the separate_fields keyword argument.
This is specific to the Ising model.
- multipliers : ndarray
- Vector of fields and couplings.
separate_fields : bool, False
- ndarray
- n x n matrix. Diagonal elements are fields unless separate_fields keyword argument is True, in which case the diagonal elements are 0.
- ndarray (optional)
- Fields if separate_fields keyword argument is True.
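For example, round-tripping with mat2vec above (placeholder values):
>>> import numpy as np
>>> from coniii.utils import vec2mat, mat2vec
>>> multipliers = np.array([0.1, -0.2, 0.3,    # fields h_i
...                         0.5, -0.1, 0.2])   # couplings J_ij
>>> Jmat = vec2mat(multipliers)                # 3x3 matrix, fields on the diagonal
>>> recovered = mat2vec(Jmat)                  # back to the concatenated vector form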
-
coniii.utils.
xbin_states
(n, sym=False)¶ Generator for iterating through all possible binary states.
- n : int
- Number of spins.
- sym : bool
- If true, return states in {-1,1} basis.
generator
-
coniii.utils.
xpotts_states
¶ Generator for iterating through all states for Potts model with k distinct states. This is a faster version of calling xbin_states(n, False) except with strings returned as elements instead of integers.
- n : int
- Number of spins.
- k : int
- Number of distinct states. These are labeled by integers starting from 0 and must be <=36.
generator
-
coniii.utils.
zero_diag
(mat)¶ Replace diagonal entries of square matrix with zeros.
mat : ndarray
ndarray