Data_Analysis Module

Tools for data preprocessing, analysis, and visualization.

Features:

  • Read and preprocess data from files or DataFrames

  • Filtering, normalization, and feature extraction

  • Publication-ready plots: line, scatter, histogram, heatmap

  • 68+ experimental analysis tools (NMR, XPS, XRD, UV-Vis, Raman)

  • Scientific constants, unit converters, and utilities

PyGamLab.Data_Analysis.AerospaceAnalysis(dataframe, application)[source]

Parameters:
  • dataframe (pandas.DataFrame) – Must contain two columns: ['Newton', 'Area']. Values should be in newtons (N) and square meters (m²).

  • application (str) – One of ['plot', 'maxPressure']. 'plot' → plot Newton vs. Area. 'maxPressure' → return the maximum pressure value.

Returns:

Maximum pressure if application='maxPressure'; None if application='plot'.

Return type:

float or None
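A minimal pandas sketch of the 'maxPressure' mode. The input column names follow the docstring; reducing to max(Newton / Area) is an assumption based on the description, not confirmed by the source.

```python
import pandas as pd

# Hypothetical load/area data in the documented layout (N, m^2).
df = pd.DataFrame({"Newton": [100.0, 250.0, 400.0],
                   "Area":   [0.5,   0.5,   2.0]})

# 'maxPressure' presumably reduces to the maximum of pointwise pressure P = F/A.
max_pressure = (df["Newton"] / df["Area"]).max()
print(max_pressure)  # 500.0 (Pa)
```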

PyGamLab.Data_Analysis.Auger_Electron_Spectroscopy_analysis(df, application=None, sensitivity_factors=None)[source]

Analyze and visualize Auger Electron Spectroscopy (AES) data.

This function provides options to plot AES spectra, detect peak positions, and estimate atomic percentages using sensitivity factors.

Parameters:
  • df (pandas.DataFrame) – Input DataFrame containing AES data. Must include columns: - ‘Energy (eV)’ : Electron energy values in eV - ‘Intensity (Counts)’ : Corresponding measured intensity

  • application (str, optional) – Type of analysis to perform: - ‘plot’ : Generates a professional plot of Intensity vs Energy. - ‘peak_position’ : Detects peaks and returns their energy positions and intensities. - ‘atomic’ : Calculates atomic percentages based on provided sensitivity factors.

  • sensitivity_factors (dict, optional) – Dictionary mapping element symbols to their sensitivity factors. Example: {‘C’: 0.25, ‘O’: 0.66, ‘Fe’: 2.5}. Required if application=’atomic’.

Returns:

  • If application='plot' : None (displays plot)

  • If application='peak_position' : dict with keys:
    • "Peak Positions (eV)" : numpy array of peak energies

    • "Peak Intensities (Counts)" : numpy array of peak intensities

  • If application='atomic' : list of dicts, one per element, e.g.:

    [{"Element": "C", "Atomic %": 25.4}, {"Element": "O", "Atomic %": 74.6}]

Return type:

dict or list or None

Raises:

ValueError – If application=’atomic’ and sensitivity_factors is not provided.

Examples

# 1. Plot AES spectrum
>>> Auger_Electron_Spectroscopy_analysis(df, application='plot')

# 2. Detect peak positions
>>> peaks = Auger_Electron_Spectroscopy_analysis(df, application='peak_position')
>>> print(peaks)
{'Peak Positions (eV)': array([280, 530]), 'Peak Intensities (Counts)': array([150, 200])}

# 3. Estimate atomic composition
>>> sensitivity = {'C': 0.25, 'O': 0.66, 'Fe': 2.5}
>>> composition = Auger_Electron_Spectroscopy_analysis(df, application='atomic', sensitivity_factors=sensitivity)
>>> print(composition)
[{'Element': 'C', 'Atomic %': 30.5}, {'Element': 'O', 'Atomic %': 69.5}]

PyGamLab.Data_Analysis.BET_Analysis(df, application, mass_of_sample=None, cross_sectional_area=None, T=None, Pa=None, total_surface_area=None, pore_volume=None)[source]

Perform BET (Brunauer–Emmett–Teller) analysis on adsorption data, including surface area determination, pore volume, and pore radius calculations.

Parameters:
  • df (pd.DataFrame) – DataFrame containing adsorption data with columns: - ‘Relative Pressure (P/P0)’ : relative pressure of adsorbate - ‘Adsorbed Volume (cm3/g STP)’ : adsorbed gas volume

  • application (str) – Mode of operation. Options: - ‘plot_isotherm’ : plots the adsorption isotherm. - ‘calculate_surface_area’ : plots BET plot and calculates the specific surface area. - ‘pore_volume_calculation’ : calculates the total pore volume. - ‘pore_radius_calculations’ : calculates average pore radius.

  • mass_of_sample (float, optional) – Mass of the sample in grams. Required for ‘calculate_surface_area’.

  • cross_sectional_area (float, optional) – Cross-sectional area of adsorbate molecule (m^2). Required for ‘calculate_surface_area’.

  • T (float, optional) – Ambient temperature in Kelvin. Required for ‘pore_volume_calculation’.

  • Pa (float, optional) – Ambient pressure in Pa. Required for ‘pore_volume_calculation’.

  • total_surface_area (float, optional) – Total surface area (St in m^2) for pore radius calculation. Required for ‘pore_radius_calculations’.

  • pore_volume (float, optional) – Total pore volume (V_liq) in m^3/g. Can be used instead of recalculating from data.

Returns:

Depending on the application, returns a dictionary with calculated values: - ‘calculate_surface_area’ : {‘slope’: m, ‘intercept’: b, ‘v_m’: vm, ‘constant’: c, ‘sbet’: SBET} - ‘pore_volume_calculation’ : {‘pore_volume’: V_liq} - ‘pore_radius_calculations’ : {‘pore_radius_nm’: r_p} Returns None for simple plots or if calculations fail.

Return type:

dict or None

Examples

# 1. Plot adsorption isotherm
>>> BET_Analysis(df, application='plot_isotherm')

# 2. Calculate BET surface area
>>> BET_Analysis(df, application='calculate_surface_area', mass_of_sample=0.05, cross_sectional_area=0.162e-18)
--- BET Surface Area Calculation ---
Slope (m): 10.1234
Y-intercept (b): 2.3456
Monolayer Adsorbed Volume (vm): 0.1234 cm^3/g STP
BET Constant (c): 5.32
Specific Surface Area (SBET): 45.67 m^2/g

# 3. Calculate pore volume
>>> BET_Analysis(df, application='pore_volume_calculation', T=77, Pa=101325)
--- Pore Volume Calculation ---
Volume of gas adsorbed (V_ads): 150.0 cm^3/g STP
Total Pore Volume (V_liq): 0.000150 m^3/g

# 4. Calculate average pore radius
>>> BET_Analysis(df, application='pore_radius_calculations', total_surface_area=45.67, pore_volume=0.000150)
--- Pore Radius Calculation ---
Total Pore Volume (V_liq): 0.000150 m^3/g
Total Surface Area (S): 45.67 m^2
Average Pore Radius (r_p): 6.57 nm

PyGamLab.Data_Analysis.CV(data, application)[source]

Perform Cyclic Voltammetry (CV) data analysis for electrochemical characterization.

This function provides core analytical and visualization tools for cyclic voltammetry experiments, including voltammogram plotting, oxidation/reduction peak detection, and peak shape analysis for assessing reversibility of redox processes.

Parameters:
  • data (list of tuples, list of lists, or pandas.DataFrame) –

    Experimental CV dataset with the following columns or structure:

    • 'E' : float
      Applied potential (V vs. reference electrode)

    • 'I' : float
      Measured current (A)

    Example:
    >>> data = pd.DataFrame({
    ...     "E": [-0.5, -0.3, 0.0, 0.3, 0.5],
    ...     "I": [-0.0001, 0.0003, 0.0012, 0.0005, -0.0002]
    ... })

  • application (str) –

    Defines the analysis type. Supported options include:

    • ”plot” :

      Display the cyclic voltammogram (current vs. potential).

    • ”peaks” :

      Detect and highlight oxidation and reduction peaks using scipy.signal.find_peaks with a default prominence of 0.001 A. The function identifies the most intense oxidation peak and up to two reduction peaks.

    • ”shape” :

      Analyze the shape and symmetry of oxidation/reduction peaks to determine the reversibility of the redox process. It computes:

      • E_pa : anodic (oxidation) peak potential (V)

      • E_pc : cathodic (reduction) peak potential (V)

      • ΔE_p : peak separation (V)

      • |I_pc/I_pa| : peak current ratio

      Based on electrochemical theory:

      • Reversible systems exhibit ΔE_p ≈ 59 mV/n (for one-electron transfer) and |I_pc/I_pa| ≈ 1.

      • Quasi-reversible systems show moderate deviations.

      • Irreversible systems display large separations and asymmetric peaks.

Returns:

The function primarily displays visualizations and prints analysis results directly to the console.

Return type:

None

Raises:
  • TypeError – If the input data format is invalid.

  • ValueError – If the specified application is not supported.

Notes

  • Ensure that potentials (E) are in ascending or cyclic order for accurate peak detection.

  • Peak prominence and smoothing parameters can be tuned for noisy data.

  • The reversibility classification is heuristic and assumes one-electron transfer unless otherwise known.

Examples

>>> data = pd.DataFrame({
...     "E": np.linspace(-0.5, 0.5, 200),
...     "I": 0.001 * np.sin(4 * np.pi * np.linspace(-0.5, 0.5, 200))
... })
>>> CV(data, "plot")
# Displays the cyclic voltammogram.
>>> CV(data, "peaks")
# Detects and highlights oxidation/reduction peaks.
>>> CV(data, "shape")
# Computes ΔEp and |Ipc/Ipa| to infer redox reversibility.
PyGamLab.Data_Analysis.Compression_TestAnalysis(df, operator, sample_name, density=0)[source]

Analyze compression test data: plot stress-strain curve or calculate maximum strength.

Parameters:
  • df (pandas.DataFrame) – Compression test data containing at least two columns: - ‘e’: strain - ‘S (Mpa)’: stress in MPa

  • operator (str) – Action to perform on data: - ‘plot’: plots stress-strain diagram - ‘S_max’: returns maximum stress - ‘S_max/Density’: returns specific maximum stress (requires density != 0)

  • sample_name (str) – Name of the sample (used for plot label)

  • density (float, optional) – Density of the sample (needed for ‘S_max/Density’). Default is 0.

Returns:

Maximum stress if operator is ‘S_max’. Specific maximum stress if operator is ‘S_max/Density’. None if operator is ‘plot’.

Return type:

float or None

Example

>>> df = pd.DataFrame({'e':[0,0.01,0.02],'S (Mpa)':[10,20,15]})
>>> Compression_TestAnalysis(df, 'S_max', 'Sample1')
20
>>> Compression_TestAnalysis(df, 'plot', 'Sample1')
PyGamLab.Data_Analysis.DMTA_TestAnalysis(df, operator, sample_name)[source]

Analyze DMTA test data: find maxima or plot storage modulus, loss modulus, and tanδ.

Parameters:
  • df (pandas.DataFrame) – DMTA test data containing at least these columns: - ‘Frequency (Hz)’ - “E’-Storage Modulus (Mpa)” - the loss modulus column (column index 13 in the expected file layout) - ‘Tanδ’

  • operator (str) – Action to perform on data: - ‘storage_max’: returns maximum storage modulus - ‘loss_max’: returns maximum loss modulus - ‘tan_max’: returns maximum Tanδ - ‘plot_storage’, ‘plot_loss’, ‘plot_tan’: plots corresponding data

  • sample_name (str) – Name of the sample (used for plot label)

Returns:

Maximum value for storage, loss, or Tanδ if requested. None if plotting.

Return type:

float or None

Example

>>> df = pd.DataFrame({
...     'Frequency (Hz)': [1, 10, 100],
...     "E'-Storage Modulus (Mpa)": [100, 150, 200],
...     'Loss Modulus (Mpa)': [10, 20, 30],  # stands in for column 13 of real data
...     'Tanδ': [0.1, 0.15, 0.2]})
>>> DMTA_TestAnalysis(df, 'storage_max', 'Sample1')
200
PyGamLab.Data_Analysis.DSC(data, application='plot', prominence=0.5, distance=5, sample_mass=1.0, heating_rate=1.0, orientation=None)[source]

Perform Differential Scanning Calorimetry (DSC) data processing, analysis, and visualization.

This function allows for automated DSC curve plotting, peak detection, transition temperature determination (Tg, Tm, Tc), enthalpy (ΔH) estimation, and kinetic analysis from experimental DSC datasets. The analysis can be adapted for both exothermic-up and endothermic-up instrument conventions.

Parameters:
  • data (pandas.DataFrame) – Input DataFrame containing DSC measurement data. It must include one of: - Columns [“t”, “Value”] for time-based measurements, or - Columns [“Temperature”, “Value”] for temperature-based measurements.

  • application (str, optional, default="plot") – The type of analysis or operation to perform. Supported options include: - “plot” : Plot the raw DSC curve. - “peak_detection” : Detect and label endothermic and exothermic peaks. - “Tg” : Estimate the glass transition temperature (Tg). - “Tm” : Determine the melting temperature (Tm). - “Tc” : Determine the crystallization temperature (Tc). - “dH” : Compute enthalpy changes (ΔH) for detected events. - “kinetics” : Estimate reaction onset, peak, endset, and corresponding ΔH.

  • prominence (float, optional, default=0.5) – Minimum prominence of peaks for detection. Higher values filter out smaller peaks. Passed to scipy.signal.find_peaks.

  • distance (int, optional, default=5) – Minimum number of data points between detected peaks. Helps to separate closely spaced transitions.

  • sample_mass (float, optional, default=1.0) – Sample mass in milligrams (mg). Used to normalize enthalpy (ΔH) values.

  • heating_rate (float, optional, default=1.0) – Heating or cooling rate in °C/min. Used to normalize ΔH for temperature-based data.

  • orientation (str or None, optional, default=None) – Defines the thermal orientation of the DSC instrument: - “exo_up” : Exothermic events produce positive peaks. - “endo_up” : Endothermic events produce positive peaks. If None, the user is prompted interactively to choose.

Returns:

  • "plot" : None

  • "peak_detection" : dict
    Coordinates of detected endothermic and exothermic peaks:
    {"endothermic": [(x1, y1), (x2, y2), ...], "exothermic": [(x1, y1), (x2, y2), ...]}

  • "Tg", "Tm", "Tc" : float
    The estimated transition temperature in the same units as the x-axis.

  • "dH" : list of tuples
    Each tuple contains (Temperature, Signal, ΔH) for detected events.

  • "kinetics" : list of dict
    Each dictionary contains:
    {"Onset": float, "Peak": float, "End": float, "ΔH (J/g)": float}
Return type:

varies depending on application

Raises:

ValueError – If the required data columns are missing or if application is not one of the supported analysis modes.

Notes

  • The function automatically handles both time-based (t) and temperature-based (Temperature) DSC data.

  • The orientation parameter affects sign convention in peak detection and ΔH calculation. For example, exo_up instruments produce positive exothermic peaks, while endo_up instruments produce negative ones.

  • The area under peaks (ΔH) is numerically integrated using the trapezoidal rule.

Examples

>>> import pandas as pd
>>> data = pd.read_csv("sample_dsc.csv")
>>> DSC(data, application="plot")
# Displays the DSC curve.
>>> results = DSC(data, application="peak_detection", orientation="exo_up")
>>> results["exothermic"]
[(134.2, -0.023), (276.4, -0.018)]
>>> Tg = DSC(data, application="Tg", orientation="exo_up")
Estimated Glass Transition Temperature (Tg): 65.12 °C
>>> dH_values = DSC(data, application="dH", sample_mass=5.0, heating_rate=10.0, orientation="endo_up")
Enthalpy Changes (ΔH):
Peak at 135.50 °C, ΔH ≈ 25.432 J/g
PyGamLab.Data_Analysis.Desulfurization_Rate(data, application)[source]

Analyze desulfurization rate with and without ultrasonic assistance.

Parameters:
  • data (pandas.DataFrame) – A dataframe containing the following columns: - ‘Time’: Measurement times - ‘Desulfurization_With_Ultrasonic’: Removal efficiency with ultrasonic - ‘Desulfurization_Without_Ultrasonic’: Removal efficiency without ultrasonic

  • application (str) – Choose one of the following options: - “plot”: plots the desulfurization with and without ultrasonic - “Max_Removal_With_Ultrasonic”: returns maximum removal efficiency with ultrasonic - “Max_Removal_Without_Ultrasonic”: returns maximum removal efficiency without ultrasonic

Returns:

  • Returns the maximum value (float) if application is “Max_Removal_With_Ultrasonic” or “Max_Removal_Without_Ultrasonic”.

  • Returns None if application is “plot”.

Return type:

float or None

Examples

>>> import pandas as pd
>>> df = pd.DataFrame({
...     "Time": [0, 10, 20, 30],
...     "Desulfurization_With_Ultrasonic": [5, 20, 45, 60],
...     "Desulfurization_Without_Ultrasonic": [3, 15, 35, 50]
... })
>>> Desulfurization_Rate(df, "Max_Removal_With_Ultrasonic")
60
>>> Desulfurization_Rate(df, "plot")
# Displays plot
PyGamLab.Data_Analysis.Dynamic_Light_Scattering_Analysis(df, application=None)[source]

Analyze and visualize Dynamic Light Scattering (DLS) data.

This function provides professional plotting of DLS data and extraction of key metrics such as the particle size corresponding to the maximum intensity.

Parameters:
  • df (pandas.DataFrame) – Input DataFrame containing DLS data. Expected columns include: - ‘Size (nm)’ : Particle size in nanometers - ‘Intensity (%)’ : Corresponding intensity in percentage - ‘Lag time (µs)’ : Lag time for autocorrelation measurements - ‘Autocorrelation’ : Autocorrelation function values

  • application (str, optional) –

    Type of analysis to perform:

    • 'plot' : Generate professional plots based on available columns.

      • If 'Size (nm)' and 'Intensity (%)' exist, plots Intensity vs Size.

      • If 'Lag time (µs)' and 'Autocorrelation' exist, plots Autocorrelation vs Lag time.

    • 'max_intensity' : Returns the particle size corresponding to maximum intensity.

Returns:

  • If application=’max_intensity’:

    Dictionary with keys: - “Peak Size (nm)” : particle size at maximum intensity - “Peak Intensity (%)” : intensity at that size

  • If application=’plot’ or None, returns None and displays plots.

Return type:

dict or None

Raises:

ValueError –

  • If required columns are missing for the selected application.

  • If application is invalid (not 'plot' or 'max_intensity').

Examples

# 1. Plot DLS Intensity vs Size
>>> Dynamic_Light_Scattering_Analysis(df, application="plot")

# 2. Plot Autocorrelation vs Lag time
>>> Dynamic_Light_Scattering_Analysis(df_with_autocorr, application="plot")

# 3. Get particle size at maximum intensity
>>> result = Dynamic_Light_Scattering_Analysis(df, application="max_intensity")
>>> print(result)
{'Peak Size (nm)': 120.5, 'Peak Intensity (%)': 85.2}

PyGamLab.Data_Analysis.EDS_Analysis(file_path, application, elements=['C', 'O', 'Fe'])[source]

Perform analysis on Energy Dispersive X-ray Spectroscopy (EDS) data.

This function can plot the EDS spectrum, return raw data, quantify elemental composition, or detect peaks in the spectrum.

Parameters:
  • file_path (str) – Path to the EDS data file in .msa format.

  • application (str) – Mode of operation: - ‘plot’ : Plot the EDS spectrum. - ‘data’ : Return raw energy and counts arrays. - ‘quantify’ : Estimate elemental weight and atomic percentages. - ‘find_peak’ : Detect peaks in the spectrum and plot them.

  • elements (list of str, optional) – List of elements to quantify when application=’quantify’. Default is [“C”,”O”,”Fe”].

Returns:

  • If application='data' : tuple (energy, counts) as numpy arrays.

  • If application='quantify' : dict with keys:
    • 'weight_percent' : dict of elements and their weight percentages

    • 'atomic_percent' : dict of elements and their atomic percentages

  • If application='find_peak' : list of tuples [(energy_keV, counts), ...] for detected peaks.

  • If application='plot' : None (displays plot only).

Return type:

varies

Raises:

ValueError – If the ‘application’ argument is not one of ‘plot’, ‘data’, ‘quantify’, or ‘find_peak’.

Examples

# 1. Plot the EDS spectrum
>>> EDS_Analysis("sample.msa", application='plot')

# 2. Get raw energy and counts data
>>> energy, counts = EDS_Analysis("sample.msa", application='data')

# 3. Quantify elemental composition
>>> results = EDS_Analysis("sample.msa", application='quantify', elements=["C", "O", "Fe"])
>>> results['weight_percent']
{'C': 12.3, 'O': 30.1, 'Fe': 57.6}
>>> results['atomic_percent']
{'C': 35.2, 'O': 40.8, 'Fe': 24.0}

# 4. Find and plot peaks
>>> peaks = EDS_Analysis("sample.msa", application='find_peak')
>>> peaks
[(0.28, 100), (0.53, 250), (6.40, 1200)]

PyGamLab.Data_Analysis.EELS_Analysis(data, application)[source]

Perform quantitative and visual analysis of Electron Energy Loss Spectroscopy (EELS) data.

This function allows detailed inspection of EELS spectra across different energy-loss regions, including Zero-Loss Peak (ZLP), low-loss, and core-loss regions. It supports both raw plotting and automated analysis such as peak detection, band gap estimation, plasmon peak identification, and fine structure analysis (ELNES/EXELFS).

Parameters:
  • data (list of tuples/lists or pandas.DataFrame) –

    Input EELS data.

    • If a list, each element should be (energy_loss, intensity).

    • If a DataFrame, it must contain columns:

      • "energy_loss" : float — Energy loss values in eV.

      • "Intensity" : float — Measured intensity (arbitrary units).

  • application (str) –

    Specifies the type of analysis to perform. Options include:

    • ”plot” :

      Simply plot the EELS spectrum for visual inspection.

    • ”ZLP” :

      Analyze the Zero-Loss Peak (ZLP) region near 0 eV. Automatically detects the main elastic scattering peak and estimates:

      • Peak position (energy in eV)

      • Peak height (intensity)

      • Full Width at Half Maximum (FWHM) if determinable.

      The results are printed and visualized with the smoothed curve and annotations.

    • ”low_loss” :

      Analyze the Low-Loss Region (−5 to 50 eV) including pre-zero baseline. Performs:

      • Baseline smoothing and visualization

      • Detection of plasmon peaks (typically <25 eV)

      • Estimation of optical band gap (Eg) via derivative onset method.

      Prints and plots plasmon peaks and band gap position.

    • ”core_loss” :

      Analyze the Core-Loss (High-Loss) Region (>50 eV). Performs:

      • Edge onset detection using signal derivative

      • Step height estimation at the absorption edge

      • Identification of fine structure features:
        • ELNES (Energy-Loss Near Edge Structure) within ~30 eV above onset

        • EXELFS (Extended Energy-Loss Fine Structure) oscillations beyond onset

      Results include detected edges, peaks, and oscillations with visualized spectrum.

Returns:

The function primarily displays plots and prints analysis results to the console. Key detected parameters (peak positions, FWHM, etc.) are reported in the output text.

Return type:

None

Notes

  • Smoothing is performed using a Savitzky-Golay filter (scipy.signal.savgol_filter) with a default window length of 11 and polynomial order of 3.

  • Peak detection uses scipy.signal.find_peaks with adaptive height thresholds.

  • Energy regions are automatically segmented as:
    • ZLP: around 0 eV

    • Low-loss: −5–50 eV

    • Core-loss: >50 eV

  • The function assumes intensity units are arbitrary and energy loss is in electronvolts (eV).

Examples

>>> import pandas as pd
>>> data = pd.DataFrame({
...     "energy_loss": np.linspace(-10, 200, 500),
...     "Intensity": np.random.random(500) * np.exp(-np.linspace(-10, 200, 500)/100)
... })
>>> EELS_Analysis(data, "plot")
# Displays the EELS spectrum
>>> EELS_Analysis(data, "ZLP")
# Detects and plots Zero-Loss Peak with FWHM estimation
>>> EELS_Analysis(data, "low_loss")
# Identifies plasmon peaks and estimates band gap
>>> EELS_Analysis(data, "core_loss")
# Detects absorption edge and ELNES/EXELFS features
PyGamLab.Data_Analysis.EnergieAnalysis(dataframe, application)[source]

Parameters:
  • dataframe (pandas.DataFrame) – Must contain motor energy data with columns ['Angle[°]', 'Energie', 'Power[mW]', 'Time for a Cycle'].

  • application (str) – One of ['draw', 'calculate']. 'draw' → plot energy vs. angle. 'calculate' → calculate total consumption energy in Ws.

Returns:

Energy consumption in Ws if application='calculate'; None if application='draw'.

Return type:

float or None
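One plausible reading of the 'calculate' mode, sketched with pandas. The column names come from the docstring; integrating Power[mW] over 'Time for a Cycle' is an assumption, not confirmed by the source.

```python
import pandas as pd

# Hypothetical motor-cycle data in the documented column layout.
df = pd.DataFrame({
    "Angle[°]": [0, 90, 180, 270],
    "Energie": [0.0, 0.1, 0.2, 0.3],
    "Power[mW]": [500.0, 600.0, 550.0, 450.0],
    "Time for a Cycle": [0.02, 0.02, 0.02, 0.02],  # seconds
})

# Convert mW -> W, then sum power * time per sample to get watt-seconds (Ws).
energy_ws = (df["Power[mW]"] / 1000.0 * df["Time for a Cycle"]).sum()
print(round(energy_ws, 6))  # 0.042
```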

PyGamLab.Data_Analysis.FTIR(data1, application, prominence=0.5, distance=10, save=False)[source]

Old version (v1.00).

PyGamLab.Data_Analysis.Fatigue_Test_Analysis(data, application)[source]

Analyze fatigue test data and provide multiple metrics and plots.

Parameters:
  • data (pandas.DataFrame) – Must contain columns: - ‘stress_amplitude’ : Stress amplitude in MPa - ‘number_of_cycles’ : Number of cycles to failure (N)

  • application (str) – Determines the operation: - “plot” : S-N plot (Stress vs. Number of Cycles) - “max stress amplitude” : Maximum stress amplitude - “fatigue strength” : Mean stress amplitude - “fatigue life” : Mean number of cycles - “stress in one cycle” : Basquin’s equation for stress at N=1 - “Sa” : Stress amplitude at N=1 - “fatigue limit” : Cycle where stress becomes constant - “std stress” : Standard deviation of stress - “std cycles” : Standard deviation of cycles

Returns:

value – Result depending on the chosen application.

Return type:

float or array-like or None
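The simpler statistics modes can be sketched as plain pandas reductions on the two documented columns. The dataset below is hypothetical, and the exact reductions are inferred from the mode descriptions rather than taken from the source.

```python
import pandas as pd

# Hypothetical S-N data: stress amplitude (MPa) vs. cycles to failure.
df = pd.DataFrame({
    "stress_amplitude": [400.0, 350.0, 300.0, 250.0],
    "number_of_cycles": [1e4, 1e5, 1e6, 1e7],
})

# "max stress amplitude", "fatigue life", and "std stress" presumably reduce to:
max_sa = df["stress_amplitude"].max()      # maximum stress amplitude, MPa
mean_life = df["number_of_cycles"].mean()  # mean cycles to failure
std_sa = df["stress_amplitude"].std()      # standard deviation of stress
```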

PyGamLab.Data_Analysis.Find_MaxVerticalVelocity(df)[source]

Find the maximum vertical flow velocity and its location from simulation data, and plot velocity versus position.

Parameters:

df (pandas.DataFrame) – DataFrame containing at least two columns: - ‘x(m)’: position in meters - ‘u(m/s)’: vertical velocity in m/s

Returns:

(maximum velocity, location of maximum velocity)

Return type:

tuple

Example

>>> import pandas as pd
>>> df = pd.DataFrame({'x(m)':[0,0.5,1.0],'u(m/s)':[0.1,0.3,0.2]})
>>> Find_MaxVerticalVelocity(df)
The maximum value of Flow Velocity for this problem is: 0.3
Also this maximum value occurs in this location: 0.5
(0.3, 0.5)
PyGamLab.Data_Analysis.FtirAnalysis(dataframe, application, prominence=0.5, distance=10, save=False)[source]

Parameters:
  • dataframe (pandas.DataFrame) – Raw FTIR data (expects one column with tab-separated values ‘X Y’).

  • application (str) – One of [‘plot’, ‘peak’]. ‘plot’ generates an FTIR plot. ‘peak’ detects and returns peak positions and properties.

  • prominence (float, default=0.5) – Required prominence of peaks (used in peak detection).

  • distance (int, default=10) – Minimum horizontal distance (in number of samples) between peaks.

  • save (bool, default=False) – If True, save the generated plot.
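A sketch of the 'peak' path under the docstring's stated input format (one column of tab-separated 'X Y' strings). Using scipy.signal.find_peaks with the prominence/distance settings mirrors the documented parameters, but the parsing details are assumptions.

```python
import pandas as pd
from scipy.signal import find_peaks

# Hypothetical raw FTIR data: a single column of tab-separated "X Y" strings.
raw = pd.DataFrame({"data": ["400\t0.1", "800\t0.9", "1200\t0.2",
                             "1600\t0.8", "2000\t0.1"]})

# Split into numeric wavenumber / absorbance arrays.
xy = raw["data"].str.split("\t", expand=True).astype(float)
x, y = xy[0].to_numpy(), xy[1].to_numpy()

# Peak detection with the documented default prominence/distance parameters.
peaks, props = find_peaks(y, prominence=0.5, distance=10)
print(x[peaks])  # only the strongest peak survives distance=10 here
```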

PyGamLab.Data_Analysis.Imerssion_Test(data, application)[source]

Analyze immersion test data for weight gain/loss over time.

Parameters:
  • data (pandas.DataFrame) – A DataFrame containing immersion test results with the following columns: - ‘time’ - ‘Mg’ - ‘Mg_H’ - ‘Mg_Pl’ - ‘Mg_HPl’

  • application (str) –

    The analysis to perform:
    • ’plot’ : Plot the changes of weight (%) vs. time (days).

    • ’More_Bioactive’ : Return the sample with the highest weight gain (more bioactive).

    • ’Less_Bioactive’ : Return the sample with the greatest weight loss (less bioactive).

Returns:

  • If application == ‘plot’, displays a plot and returns None.

  • If application == ‘More_Bioactive’, returns the name of the most bioactive sample.

  • If application == ‘Less_Bioactive’, returns the name of the least bioactive sample.

Return type:

None or str

Examples

>>> import pandas as pd
>>> df = pd.DataFrame({
...     'time': [1, 2, 3],
...     'Mg': [0.1, 0.2, 0.3],
...     'Mg_H': [0.2, 0.3, 0.5],
...     'Mg_Pl': [0.15, 0.25, 0.35],
...     'Mg_HPl': [0.25, 0.35, 0.45]
... })
>>> Imerssion_Test(df, 'More_Bioactive')
'Mg_HPl'
>>> Imerssion_Test(df, 'Less_Bioactive')
'Mg'
>>> Imerssion_Test(df, 'plot')  # plots the data
PyGamLab.Data_Analysis.Import_Data(File_Directory=None)[source]
PyGamLab.Data_Analysis.LN_S_E(df, operation)[source]

This function analyzes the elastic part of a true stress-strain curve.

Parameters:
  • df (pandas.DataFrame) – Must contain 2 columns: - ‘DL’ : elongation (length change in mm) - ‘F’ : force in Newtons

  • operation (str) –

    • ‘PLOT’ : plots the elastic region of the true stress-strain curve

    • 'YOUNG_MODULUS' : calculates and returns Young's Modulus (E)

Returns:

  • None if operation=’PLOT’

  • float if operation=’YOUNG_MODULUS’
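The docstring does not state how the specimen geometry enters, so this sketch assumes a gauge length L0 and cross-section A0 to show the standard true-stress/true-strain construction and a linear-fit Young's modulus. All values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical elastic-region data in the documented layout.
df = pd.DataFrame({"DL": [0.0, 0.1, 0.2, 0.3],        # elongation, mm
                   "F":  [0.0, 50.0, 100.0, 150.0]})  # force, N

L0, A0 = 50.0, 10.0                # assumed gauge length (mm) and area (mm^2)
e = df["DL"] / L0                  # engineering strain
s = df["F"] / A0                   # engineering stress, MPa (N/mm^2)
true_strain = np.log(1 + e)        # true strain
true_stress = s * (1 + e)          # true stress

# Young's modulus E as the slope of the (linear) elastic region.
E = np.polyfit(true_strain, true_stress, 1)[0]
```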

PyGamLab.Data_Analysis.LoadPositionAnalysis(df, operation, area, length)[source]

Analyze Load-Position data: generate curves, calculate stress-strain, normalized stress-strain, or energy absorption density.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing two columns: - ‘Load (kN)’: load values - ‘Position (mm)’: position values

  • operation (str) – Operation to perform: - ‘LPC’ or ‘Load-Position Curve’ - ‘SSCal’ or ‘Stress-Strain Calculation’ - ‘SSC’ or ‘Stress-Strain Curve’ - ‘NSSCal’ or ‘Normal Stress-Strain Calculation’ - ‘NSSC’ or ‘Normal Stress-Strain Curve’ - ‘EADCal’ or ‘EAD Calculation’

  • area (float) – Cross-sectional area (mm²) for stress calculation

  • length (float) – Gauge length (mm) for strain calculation

Returns:

Depends on operation: - Stress-Strain arrays for ‘SSCal’ and ‘NSSCal’ - Energy absorption density for ‘EADCal’ - None for plotting operations

Return type:

np.ndarray or float or None

Example

>>> df = pd.DataFrame({'Load (kN)':[1,2,3],'Position (mm)':[0,1,2]})
>>> LoadPositionAnalysis(df, 'LPC', 100, 50)  # Plot load-position curve
>>> LoadPositionAnalysis(df, 'SSCal', 100, 50)  # Returns Stress-Strain array
>>> LoadPositionAnalysis(df, 'EADCal', 100, 50)  # Returns energy absorption density
PyGamLab.Data_Analysis.NMR_Analysis(df, application, peak_regions=None, peak_info=None)[source]

Analyze and visualize ¹H NMR spectra for different applications.

This function provides multiple modes:

  1. Plotting the raw NMR spectrum (‘plot’).

  2. Plotting the spectrum with integrated peak steps (‘plot_with_integrals’).

  3. Estimating mole fractions of compounds in a mixture (‘mixture_composition’).

  4. Calculating percentage impurity of a compound (‘calculate_impurity’).

Parameters:
  • df (pd.DataFrame) – DataFrame containing NMR data with columns: - ‘ppm’ : chemical shift values (x-axis) - ‘Spectrum’ : intensity values (y-axis)

  • application (str) – Mode of operation. Options: - ‘plot’ : generates a professional NMR spectrum plot. - ‘plot_with_integrals’ : generates a plot with integral steps (requires peak_regions). - ‘mixture_composition’ : calculates mole fractions of compounds (requires peak_info). - ‘calculate_impurity’ : calculates impurity percentage (requires peak_info with main and impurity info).

  • peak_regions (dict, optional) – Dictionary specifying integration regions for peaks (required for ‘plot_with_integrals’). Format: {region_name: (start_ppm, end_ppm)}

  • peak_info (dict, optional) – Dictionary with compound information for mixture analysis or impurity calculation.

    For 'mixture_composition':
    {compound_name: {'region': (start_ppm, end_ppm), 'protons': int}}

    For 'calculate_impurity':
    {'main_compound': {'region': (start, end), 'protons': int},
     'impurity': {'region': (start, end), 'protons': int}}

Returns:

The function either displays plots or prints calculated results.

Return type:

None

Examples

# 1. Simple plot of NMR spectrum
>>> NMR_Analysis(df, application='plot')

# 2. Plot spectrum with integrals
>>> peak_regions = {'peak1': (7.0, 7.5), 'peak2': (3.5, 4.0)}
>>> NMR_Analysis(df, application='plot_with_integrals', peak_regions=peak_regions)

# 3. Mixture composition analysis
>>> peak_info = {
...     'CompoundA': {'region': (7.0, 7.5), 'protons': 5},
...     'CompoundB': {'region': (3.5, 4.0), 'protons': 3}
... }
>>> NMR_Analysis(df, application='mixture_composition', peak_info=peak_info)
--- Mixture Composition ---
Mole Fraction of CompoundA: 0.62
Mole Fraction of CompoundB: 0.38

# 4. Impurity calculation
>>> peak_info = {
...     'main_compound': {'region': (7.0, 7.5), 'protons': 5},
...     'impurity': {'region': (3.5, 4.0), 'protons': 1}
... }
>>> NMR_Analysis(df, application='calculate_impurity', peak_info=peak_info)
--- Impurity Analysis ---
Main Compound Integral per Proton: 0.1234
Impurity Integral per Proton: 0.0123
Estimated Impurity: 9.09%

PyGamLab.Data_Analysis.Oxygen_HeatCapacity_Analysis(df)[source]

Calculate enthalpy and entropy of oxygen from heat capacity data and plot Cp, enthalpy, and entropy versus temperature.

Parameters:

df (pandas.DataFrame) – DataFrame containing at least two columns: - ‘T’: Temperature values - ‘Cp’: Heat capacity at constant pressure

Returns:

pandas.DataFrame – Original DataFrame with added ‘Enthalpy’ and ‘Entropy’ columns. Also displays plots of:

  • Heat capacity vs. temperature

  • Enthalpy and entropy vs. temperature

Example

>>> df = pd.DataFrame({'T':[100,200,300],'Cp':[0.9,1.1,1.3]})
>>> Oxygen_HeatCapacity_Analysis(df)
PyGamLab.Data_Analysis.ParticleSizeAnalysis(df, operation)[source]

Analyze particle size distribution: calculate average size or plot size distribution.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing at least two columns: - ‘size’: particle sizes (nm) - ‘distribution’: intensity (%) corresponding to each size

  • operation (str) – Action to perform: - ‘calculate’: calculate and return the average particle size - ‘plot’ : plot the particle size distribution curve

Returns:

Average particle size if operation=’calculate’, None if plotting.

Return type:

float or None

Example

>>> import pandas as pd
>>> df = pd.DataFrame({'size':[10,20,30],'distribution':[30,50,20]})
>>> ParticleSizeAnalysis(df, 'calculate')
20
>>> ParticleSizeAnalysis(df, 'plot')  # Displays the plot
PyGamLab.Data_Analysis.Photoluminescence_analysis(data_frame, application='plot')[source]

Perform photoluminescence (PL) data analysis and visualization.

This function analyzes a PL spectrum, identifies the main emission peak, calculates bandgap energy, estimates FWHM, and provides various plots.

Parameters:
  • data_frame (pd.DataFrame) – DataFrame containing PL spectrum data with columns: - ‘wavelength’ : wavelength in nanometers (nm) - ‘intensity’ : emission intensity (arbitrary units)

  • application (str, optional) – Specifies the type of analysis or visualization (default=’plot’): - ‘plot’ : Plot the full PL spectrum. - ‘peak_position’ : Identify and return the wavelength of the main peak. - ‘peak_intensity’ : Identify and return the intensity of the main peak. - ‘bandgap_energy’ : Calculate bandgap energy (eV) from the peak wavelength. - ‘fwhm’ : Calculate and return the full width at half maximum (FWHM) in nm.

Returns:

  • ‘plot’ : None (displays a plot)

  • ’peak_position’ : float, wavelength of main peak in nm

  • ’peak_intensity’ : float, intensity of main peak

  • ’bandgap_energy’ : float, bandgap energy in eV

  • ’fwhm’ : float, full width at half maximum in nm

  • {} : empty dictionary if no peak is detected or invalid application

Return type:

varies

Raises:

ValueError –

  • If the DataFrame does not contain required columns. - If an invalid application string is provided.

Notes

  • Bandgap energy (in eV) is calculated using Eg = h·c / (λ·e), where:

    h : Planck constant (J·s)
    c : speed of light (m/s)
    λ : peak wavelength (m)
    e : elementary charge (C)

  • FWHM is estimated using linear interpolation and root-finding.

Examples

# 1. Plot PL spectrum
>>> Photoluminescence_analysis(df, application="plot")

# 2. Get peak wavelength
>>> peak_wl = Photoluminescence_analysis(df, application="peak_position")
>>> print(f"Peak wavelength: {peak_wl:.2f} nm")

# 3. Get peak intensity
>>> peak_int = Photoluminescence_analysis(df, application="peak_intensity")
>>> print(f"Peak intensity: {peak_int:.3f}")

# 4. Calculate bandgap energy
>>> Eg = Photoluminescence_analysis(df, application="bandgap_energy")
>>> print(f"Bandgap: {Eg:.3f} eV")

# 5. Calculate FWHM
>>> fwhm = Photoluminescence_analysis(df, application="fwhm")
>>> print(f"FWHM: {fwhm:.2f} nm")

PyGamLab.Data_Analysis.PolarizationAnalysis(df, work)[source]

Analyze polarization data: plot polarization curve or calculate corrosion potential.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing at least two columns: - ‘Current density’: current density in A/cm2 - ‘Potential’: potential in V vs Ag/AgCl

  • work (str) – Action to perform: - ‘plot’: plots the polarization curve (log(current) vs potential) - ‘corrosion potential’: returns the potential corresponding to the minimum current density

Returns:

Corrosion potential in volts if work=’corrosion potential’, None if plotting.

Return type:

float or None

Example

>>> import pandas as pd
>>> df = pd.DataFrame({'Current density':[1e-6,1e-5,1e-4],'Potential':[0.1,0.2,0.3]})
>>> PolarizationAnalysis(df, 'plot')  # Displays the plot
>>> PolarizationAnalysis(df, 'corrosion potential')
0.1
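The ‘corrosion potential’ branch reduces to reading off the potential at the minimum current density. A minimal standalone sketch (`corrosion_potential` is an illustrative helper, not the library API):

```python
import pandas as pd

def corrosion_potential(df):
    """Return the potential at the minimum measured current density."""
    return df.loc[df['Current density'].idxmin(), 'Potential']

df = pd.DataFrame({'Current density': [1e-6, 1e-5, 1e-4],
                   'Potential': [0.1, 0.2, 0.3]})
ecorr = corrosion_potential(df)
```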
PyGamLab.Data_Analysis.Polarization_Control(data, application)[source]

Analyze polymerization process data and either visualize trends or return key values.

Parameters:
  • data (pd.DataFrame) – A DataFrame containing the following required columns: - ‘time’ (float or int): Time in seconds - ‘temp’ (float): Temperature in °C - ‘pressure’ (float): Pressure in Pa - ‘percent’ (float): Reaction progress percentage (0–100)

  • application (str) – Selects the analysis/plotting mode. Options: - ‘temp_time’ : Plot Temperature vs Time - ‘pressure_time’ : Plot Pressure vs Time - ‘percent_time’ : Plot Reaction Percent vs Time - ‘100% reaction’ : Return (temperature, pressure) when polymerization reaches 100% - ‘Max_pressure’ : Return maximum process pressure - ‘Max_temp’ : Return maximum process temperature

Returns:

  • (temp, pressure) if application is ‘100% reaction’

  • max pressure (float) if application is ‘Max_pressure’

  • max temperature (float) if application is ‘Max_temp’

  • None if plotting is performed

Return type:

tuple | float | None

Examples

>>> df = pd.DataFrame({
...     'time': [0, 10, 20, 30],
...     'temp': [25, 50, 75, 100],
...     'pressure': [1, 2, 3, 4],
...     'percent': [0, 30, 70, 100]
... })
>>> Polarization_Control(df, 'temp_time')  # Plots Temperature vs Time
>>> Polarization_Control(df, 'Max_temp')
100
>>> Polarization_Control(df, '100% reaction')
(100, 4)
PyGamLab.Data_Analysis.Pore_Size(df, A, P, Vis=0.00089, Density=1)[source]

Calculate the pore size of membranes and plot a Pore Size Chart.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing columns: ‘membrane’, ‘Ww’, ‘Wd’, ‘V’, ‘q’, ‘l’ Ww = weight of wet sample (g), Wd = weight of dry sample (g), V = sample volume (cm3) q = flow rate (m3/s), l = membrane thickness (m)

  • A (float) – Effective surface area of the membrane (m2)

  • P (float) – Operational pressure (Pa)

  • Vis (float, optional) – Water viscosity (Pa.s). Default is 8.9e-4

  • Density (float, optional) – Water density (g/cm3). Default is 1

Returns:

Pore_Size – Array of pore size values in nm.

Return type:

numpy.ndarray

Example

>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1],
...     'q': [1e-6, 1.2e-6, 0.9e-6],
...     'l': [1e-3, 1.1e-3, 0.9e-3]
... })
>>> pore_sizes = Pore_Size(df, A=0.01, P=2e5)
PyGamLab.Data_Analysis.Porosity(df, Density=1)[source]

Calculate porosity of membranes and plot a Porosity Chart.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing columns: ‘membrane’, ‘Ww’, ‘Wd’, ‘V’ where Ww = weight of wet sample, Wd = weight of dry sample, V = sample volume.

  • Density (float, optional) – Water density (g/cm3). Default is 1.

Returns:

Porosity – Array of porosity values for each membrane.

Return type:

numpy.ndarray

Example

>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1]
... })
>>> porosity_values = Porosity(df)
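Gravimetric membrane porosity is conventionally (Ww − Wd) / (Density × V), i.e. the volume of absorbed water over the total sample volume. A sketch under that assumption (the library's exact formula may differ):

```python
import pandas as pd

def porosity(df, density=1.0):
    """Gravimetric porosity: volume of absorbed water over sample volume."""
    water_volume = (df['Ww'] - df['Wd']) / density  # cm3 of water taken up
    return (water_volume / df['V']).to_numpy()

df = pd.DataFrame({'membrane': ['M1', 'M2', 'M3'],
                   'Ww': [2.5, 3.0, 2.8],
                   'Wd': [2.0, 2.4, 2.3],
                   'V': [1.0, 1.2, 1.1]})
vals = porosity(df)
```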
PyGamLab.Data_Analysis.PressureVolumeIdealGases(dataframe, application)[source]

Parameters:
  • dataframe (pandas.DataFrame) – Must contain ‘pressure’ and ‘volume’ columns.

  • application (str) – One of [‘plot’, ‘min pressure’, ‘max pressure’, ‘min volume’, ‘max volume’, ‘average pressure’, ‘average volume’, ‘temperature’].

Returns:

float, pandas.Series, or None, depending on the selected application.

PyGamLab.Data_Analysis.Raman_Analysis(data, application)[source]

Perform quantitative and visual analysis of Raman spectroscopy data.

This function provides flexible tools for visualizing and analyzing Raman spectra. It supports basic spectrum plotting and automated peak detection for identifying characteristic Raman bands.

Parameters:
  • data (list of tuples or list of lists) –

    Raman spectrum data, where each element corresponds to one measurement point:

    (wavenumber, intensity)

    • wavenumber : float

      Raman shift in inverse centimeters (cm⁻¹)

    • intensity : float

      Measured Raman intensity in arbitrary units (a.u.)

    Example: >>> data = [(100, 0.1), (150, 0.5), (200, 1.2)]

  • application (str) –

    Defines the type of analysis to perform. Supported options:

    • ”plot” :

      Plot the Raman spectrum with labeled axes and gridlines for quick visual inspection.

    • ”peak_detect” :

      Automatically detect and highlight prominent peaks in the Raman spectrum. Peak detection is performed using scipy.signal.find_peaks with:

      • Minimum peak height = 10% of maximum intensity

      • Minimum distance between peaks = 5 data points

      The detected peaks are printed (wavenumber and intensity) and plotted with red markers.

Raises:

ValueError – If the data format is invalid or the specified application is not supported.

Returns:

The function generates plots and prints peak data to the console when applicable. No explicit return value.

Return type:

None

Notes

  • The function assumes Raman shift values are given in cm⁻¹ and intensity in arbitrary units.

  • The x-axis is plotted as Raman shift (increasing rightward). Uncomment the invert_xaxis() line to follow the traditional Raman plotting convention (decreasing Raman shift).

  • Peak detection parameters (height and distance) can be fine-tuned based on spectral resolution.

Examples

>>> import numpy as np
>>> # Generate synthetic Raman data
>>> wavenumbers = np.linspace(100, 2000, 500)
>>> intensities = np.exp(-((wavenumbers - 1350)/40)**2) + 0.5*np.exp(-((wavenumbers - 1580)/30)**2)
>>> data = list(zip(wavenumbers, intensities))
>>> Raman_Analysis(data, "plot")
# Displays the Raman spectrum
>>> Raman_Analysis(data, "peak_detect")
# Detects and highlights Raman peaks in the spectrum
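The “peak_detect” behaviour described above (10% height threshold, 5-point minimum distance) can be reproduced with scipy.signal.find_peaks directly; `detect_raman_peaks` below is an illustrative helper, not the library API:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_raman_peaks(data):
    """Detect peaks with the documented thresholds:
    height >= 10% of the maximum intensity, >= 5 data points apart."""
    arr = np.asarray(data, dtype=float)
    wavenumbers, intensities = arr[:, 0], arr[:, 1]
    idx, _ = find_peaks(intensities,
                        height=0.1 * intensities.max(),
                        distance=5)
    return wavenumbers[idx], intensities[idx]

# Synthetic D/G-like bands near 1350 and 1580 cm^-1
wn = np.linspace(100, 2000, 500)
I = np.exp(-((wn - 1350) / 40) ** 2) + 0.5 * np.exp(-((wn - 1580) / 30) ** 2)
peak_wn, peak_I = detect_raman_peaks(list(zip(wn, I)))
```

Tightening `height` or `distance` trades sensitivity for robustness, as the Notes suggest.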
PyGamLab.Data_Analysis.Reaction_Conversion_Analysis(data, app)[source]

Analyze and visualize conversion data from a chemical reaction experiment.

Parameters:
  • data (pandas.DataFrame) – A DataFrame that must contain the following columns: - ‘time’ : time in seconds - ‘temp’ : temperature in Celsius - ‘pressure’ : pressure in bar - ‘conv’ : conversion percentage

  • app (str) – Determines the action: - “PLOT_TEMP” → plots Temperature vs. Time - “PLOT_PRESSURE” → plots Pressure vs. Time - “PLOT_CONVERSION” → plots Conversion vs. Time - “MAXIMUM_CONVERSION” → returns index and values at maximum conversion

Returns:

result –

  • If app=”MAXIMUM_CONVERSION”, returns the index of maximum conversion.

  • Otherwise, returns None (just shows plots).

Return type:

int or None

Raises:

TypeError – If app is not one of the accepted values.

Examples

>>> import pandas as pd
>>> df = pd.DataFrame({
...     "time": [0, 1, 2, 3],
...     "temp": [300, 310, 315, 320],
...     "pressure": [1, 1.2, 1.3, 1.4],
...     "conv": [10, 20, 30, 50]
... })
>>> Reaction_Conversion_Analysis(df, "PLOT_TEMP")  # plots Temperature vs Time
>>> Reaction_Conversion_Analysis(df, "MAXIMUM_CONVERSION")
maximum of temperature is  320
maximum of conversion is  50
The temperature in maximum conversion is  320 and the pressure is  1.4
3
PyGamLab.Data_Analysis.SAXS_Analysis(data, application)[source]

Perform Small-Angle X-ray Scattering (SAXS) data analysis for nanostructural characterization.

This function provides key analytical tools to extract structural information from SAXS profiles, including visualization, peak position detection, intensity integration, peak width (FWHM) determination, and Guinier (radius of gyration) analysis.

Parameters:
  • data (pandas.DataFrame) –

    Experimental SAXS dataset containing:

    • ‘q’ : float

      Scattering vector magnitude (1/nm)

    • ‘I’ : float

      Scattered intensity I(q)

    Example:
    >>> data = pd.DataFrame({
    ...     "q": [0.01, 0.02, 0.03, 0.04],
    ...     "I": [300, 800, 400, 200]
    ... })

  • application (str) –

    Defines the type of analysis to perform. Supported options include:

    • ”plot” :

      Plot I(q) vs. q to visualize the SAXS curve and scattering profile.

    • ”peak_position” :

      Identify the q position of the main scattering peak and calculate the corresponding real-space characteristic spacing: d = 2π / q_peak

    • ”peak_intensity” :

      Quantify the intensity and integrated area under the most intense scattering peak using numerical integration (numpy.trapz).

    • ”peak_width” :

      Compute the full width at half maximum (FWHM) of the main scattering peak, which provides information about domain size and order distribution.

    • ”rog” :

      Perform Guinier analysis (low-q region) by linear fitting of ln I(q) vs. q² to estimate the radius of gyration (Rg) and I(0):

      ln I(q) = ln I(0) − (Rg² * q²) / 3

Returns:

Depending on the analysis: - “peak_position” → (q_peak, d_spacing) - “peak_intensity” → (I_peak, area) - “peak_width” → FWHM - “rog” → (Rg, I0) - “plot” → None

Return type:

tuple or None

Raises:
  • TypeError – If the input data is not a pandas DataFrame or lacks required columns.

  • ValueError – If the specified application is not supported or no peaks are detected.

Notes

  • q is defined as q = (4π/λ) sin(θ), where θ is half the scattering angle.

  • The characteristic spacing (d) corresponds to periodicity or average interparticle distance.

  • FWHM can be used to estimate crystalline order (via Scherrer-like relations).

  • The Guinier approximation is valid only for q·Rg < 1.3.

Examples

>>> SAXS_Analysis(data, "plot")
# Displays the SAXS intensity profile.
>>> SAXS_Analysis(data, "peak_position")
# Prints and plots q_peak and corresponding d-spacing.
>>> SAXS_Analysis(data, "peak_intensity")
# Calculates peak height and integrated scattering area.
>>> SAXS_Analysis(data, "peak_width")
# Determines full width at half maximum (FWHM) in q-space.
>>> SAXS_Analysis(data, "rog")
# Performs Guinier analysis to estimate radius of gyration (Rg).
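The “rog” mode's Guinier analysis amounts to a linear fit of ln I(q) against q², whose slope gives Rg and whose intercept gives I(0). A self-contained sketch (`guinier_fit` is illustrative, not the library API):

```python
import numpy as np
import pandas as pd

def guinier_fit(data):
    """Linear fit of ln I(q) vs q^2: slope = -Rg^2 / 3, intercept = ln I(0)."""
    q2 = data['q'].to_numpy(dtype=float) ** 2
    lnI = np.log(data['I'].to_numpy(dtype=float))
    slope, intercept = np.polyfit(q2, lnI, 1)
    return np.sqrt(-3.0 * slope), np.exp(intercept)

# Synthetic Guinier curve with Rg = 5 and I(0) = 1000, kept within q*Rg < 1.3
q = np.linspace(0.01, 0.2, 50)
I = 1000.0 * np.exp(-(5.0 ** 2) * q ** 2 / 3.0)
Rg, I0 = guinier_fit(pd.DataFrame({'q': q, 'I': I}))
```

In practice the fit should be restricted to the low-q region where q·Rg < 1.3 holds.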
PyGamLab.Data_Analysis.SICalculation(f_loc, P, PC, Density=1)[source]

This function calculates the Separation Index (SI).

Parameters:
  • f_loc – Input data file location.

  • P (float) – Pressure (bar)

  • PC (float) – Pollutant concentration in Feed (g/L)

  • Density (float, optional) – Feed Density (g/cm3). Default is 1.

Returns the Separation Index and plots Flux, Rejection, and SI charts.

PyGamLab.Data_Analysis.SI_Calculation(df, P, PC, Density=1)[source]

Calculate Separation Index (SI) and plot Flux, Rejection, and SI charts.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing columns: ‘Mem Code’, ‘Flux’, ‘Rejection’.

  • P (float) – Pressure (bar)

  • PC (float) – Pollutant concentration in Feed (g/L)

  • Density (float, optional) – Feed Density (g/cm3), default is 1.

Returns:

SI – Array of Separation Index values for each membrane.

Return type:

numpy.ndarray

Example

>>> df = pd.DataFrame({
...     'Mem Code': ['M1', 'M2', 'M3'],
...     'Flux': [90, 150, 250],
...     'Rejection': [0.5, 0.7, 0.8]
... })
>>> SI = SI_Calculation(df, P=5, PC=50)
PyGamLab.Data_Analysis.Signal_To_Noise_Ratio(data, application)[source]

Calculate and optionally plot signal, noise, or SNR from experimental data.

Parameters:
  • data (DataFrame) –

    Experimental data with columns:

    1- ‘location’: measurement locations
    2- ‘Signal Strength’: signal power in dBm
    3- ‘Noise Power’: noise power in dBm

  • application (str) –

    One of the following:

    ‘plot signal’ – plots the signal column
    ‘plot noise’ – plots the noise column
    ‘plot snr’ – plots the signal-to-noise ratio

Returns:

mx – Maximum signal-to-noise ratio in dB

Return type:

float
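
With both columns in dBm, the SNR in dB is simply their difference; a minimal sketch (the helper name and sample values are illustrative):

```python
import pandas as pd

def max_snr_db(data):
    """With both powers in dBm, SNR (dB) is the difference of the columns."""
    return (data['Signal Strength'] - data['Noise Power']).max()

data = pd.DataFrame({'location': ['A', 'B', 'C'],
                     'Signal Strength': [-40, -50, -45],
                     'Noise Power': [-90, -95, -80]})
mx = max_snr_db(data)
```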

PyGamLab.Data_Analysis.SolidificationStart(df, temp_sol)[source]

Determine if solidification has started based on temperature profile, and plot temperature along the centerline.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing at least two columns: - ‘x(m)’: position in meters - ‘T(K)’: temperature in Kelvin

  • temp_sol (float) – Solidus temperature of the material in Kelvin.

Returns:

True if solidification has started (temperature <= solidus temperature), False otherwise.

Return type:

bool

Example

>>> import pandas as pd
>>> df = pd.DataFrame({'x(m)':[0,0.5,1],'T(K)':[1600,1550,1500]})
>>> SolidificationStart(df, 1520)
The solidification process has started.
True
PyGamLab.Data_Analysis.StatisticalAnalysis(df, operation)[source]

Perform statistical analysis or plots on a DataFrame.

Parameters:
  • df (pandas.DataFrame) – Input DataFrame with numeric features.

  • operation (str) – Operation to perform: - ‘statistics’ : prints min, max, median, quantiles, IQR, and z-score for each numeric feature - ‘histogram’ : plots histograms for numeric features - ‘correlation’: plots correlation heatmap - ‘pairplot’ : plots pairplot with regression lines

Returns:

Prints statistics or displays plots.

Return type:

None

Example

>>> import pandas as pd
>>> df = pd.DataFrame({'A':[1,2,3,4],'B':[4,3,2,1]})
>>> StatisticalAnalysis(df, 'statistics')
>>> StatisticalAnalysis(df, 'histogram')
>>> StatisticalAnalysis(df, 'correlation')
>>> StatisticalAnalysis(df, 'pairplot')
PyGamLab.Data_Analysis.Stress_Strain1(df, operation, L0=90, D0=9)[source]

This function takes data and an operation. It plots the Stress–Strain curve if the operation is ‘plot’, and otherwise finds the UTS value (the ultimate tensile strength).

Parameters:
  • df (DataFrame) – Has 2 columns: ‘DL’ (length change in mm) and ‘F’ (force in N).

  • operation (str) – Tells the function whether to plot the curve or find the UTS value.

  • L0 – Initial length of the sample.

  • D0 – Initial diameter of the sample.

Return type:

The Stress–Strain curve (plot) or the UTS value.

PyGamLab.Data_Analysis.Stress_Strain2(input_file, which, count)[source]

This function calculates stress and strain from load and elongation data.

Parameters:
  • input_file – Path to the input file; must be in .csv format.

  • which (str) – Which operation to perform (‘plot’ or ‘calculate’).

  • count (int) – Yarn count in Tex.

Note: the gauge length is fixed at 250 mm.

PyGamLab.Data_Analysis.Stress_Strain3(input_data, action)[source]
PyGamLab.Data_Analysis.Stress_Strain4(file_path, D0, L0)[source]

This function uses a data file containing length and force, calculates the engineering, true, and yield stress and strain, and also plots these curves.

Parameters:
  • file_path – Path to the data file containing F (N), the force applied during the test, and DL (mm), the length changes.

  • D0 (mm) – Initial diameter, used to calculate stress.

  • L0 (mm) – Initial length, used to calculate strain.

Returns: Depending on the operation selected, it returns calculated values, plots, advanced analysis, or saves results.

PyGamLab.Data_Analysis.Stress_Strain5(input_data, action)[source]
PyGamLab.Data_Analysis.Stress_Strain6(data, application)[source]

This function converts F and dD to stress and strain using fixed specimen dimensions: thickness (1.55 mm), width (3.2 mm), and parallel length (35 mm).

Parameters:
  • data (DataFrame) – DataFrame containing F (N) and dD (mm) recorded by the tensile test machine.

  • application (str) – application determines the expected output of Stress_Strain function.

Returns:

The return value may be the elongation at break, a strength value, or a plot.

Return type:

int, float or plot
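With the fixed geometry stated above, the conversion is stress = F / (1.55 × 3.2) in MPa (N/mm²) and strain = dD / 35. A sketch under the assumption that the columns are named ‘F’ and ‘dD’ (helper name and sample values are illustrative):

```python
import pandas as pd

THICKNESS, WIDTH, PARALLEL_LENGTH = 1.55, 3.2, 35.0  # mm, fixed by the function

def to_stress_strain(data):
    """Engineering stress (N/mm^2 = MPa) and dimensionless strain."""
    stress = data['F'] / (THICKNESS * WIDTH)   # force over cross-section
    strain = data['dD'] / PARALLEL_LENGTH      # extension over gauge length
    return stress, strain

data = pd.DataFrame({'F': [0.0, 49.6, 99.2], 'dD': [0.0, 0.35, 0.70]})
stress, strain = to_stress_strain(data)
```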

PyGamLab.Data_Analysis.TGA(data, application)[source]

Perform multi-mode Thermogravimetric Analysis (TGA) for material characterization.

This function enables comprehensive TGA data analysis for studying thermal stability, composition, surface modification, and reaction kinetics. It supports visualization, derivative thermogravimetry (DTG), decomposition step identification, and moisture or solvent content determination.

Parameters:
  • data (pandas.DataFrame) –

    Experimental TGA dataset with the following required columns:

    • ‘Temp’ : float

      Temperature in degrees Celsius (°C)

    • ‘Mass’ : float

      Corresponding sample mass in percentage (%)

    Example:
    >>> data = pd.DataFrame({
    ...     "Temp": [25, 100, 200, 300],
    ...     "Mass": [100, 99.5, 80.2, 10.5]
    ... })

  • application (str) –

    Defines the type of analysis to perform. Supported options include:

    • ”plot” :

      Plot the raw TGA curve (Mass vs. Temperature).

    • ”peaks” :

      Compute and display the derivative thermogravimetry (DTG) curve and identify key decomposition peaks using scipy.signal.find_peaks.

    • ”stability” :

      Estimate the onset temperature of thermal degradation by tangent extrapolation from the baseline region.

    • ”moisture” :

      Calculate moisture or solvent content based on mass loss before the first decomposition event (typically below 150 °C).

    • ”functionalization” :

      Identify surface functionalization or modification steps by detecting multiple degradation peaks above 150 °C.

    • ”composition” :

      Estimate polymer and filler content from the initial and final mass values (residue analysis).

    • ”DTG” :

      Compute and plot the first derivative of the TGA curve (dM/dT) for insight into reaction rate behavior.

    • ”decomposition_steps” :

      Identify and quantify major decomposition events (DTG peaks), returning their temperatures and mass values.

    • ”kinetics” :

      Evaluate relative reaction rates and identify the fastest decomposition step (maximum |dM/dT| above 150 °C).

Raises:
  • TypeError – If the input is not a pandas DataFrame.

  • ValueError – If required columns (‘Temp’, ‘Mass’) are missing or if the specified application is not supported.

Returns:

Depends on the application:

  • ”plot” :

    Displays the TGA curve; returns None.

  • ”peaks” :

    DataFrame containing detected DTG peak temperatures and intensities.

  • ”stability” :

    Dictionary with onset temperature and mass at onset.

  • ”moisture” :

    Dictionary with moisture content, cutoff temperature, and mass loss.

  • ”functionalization” :

    DataFrame listing detected modification steps.

  • ”composition” :

    Dictionary with polymer and filler content percentages.

  • ”DTG” :

    DataFrame of temperatures and corresponding dM/dT values.

  • ”decomposition_steps” :

    DataFrame of decomposition step information.

  • ”kinetics” :

    Dictionary with step-wise reaction rate data and the fastest decomposition step.

Return type:

object

Notes

  • TGA data should be preprocessed to ensure monotonic temperature increase.

  • The function uses numerical differentiation (np.gradient) for DTG calculations.

  • Peak prominence thresholds can be adjusted to improve detection sensitivity.

  • Onset temperatures are approximate and depend on the slope estimation method.

Examples

>>> import pandas as pd, numpy as np
>>> T = np.linspace(25, 800, 300)
>>> M = 100 - 0.05*(T - 25) + 10*np.exp(-((T-400)/50)**2)
>>> data = pd.DataFrame({"Temp": T, "Mass": M})
>>> TGA(data, "plot")
# Displays the TGA curve.
>>> peaks_info = TGA(data, "peaks")
>>> print(peaks_info.head())
>>> stability = TGA(data, "stability")
>>> print(stability)
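The “DTG” mode's derivative curve can be reproduced with np.gradient, as the Notes indicate; `dtg_curve` below is an illustrative sketch using the synthetic data from the example, not the library function:

```python
import numpy as np
import pandas as pd

def dtg_curve(data):
    """First derivative dM/dT via numerical differentiation (np.gradient)."""
    dMdT = np.gradient(data['Mass'].to_numpy(dtype=float),
                       data['Temp'].to_numpy(dtype=float))
    return pd.DataFrame({'Temp': data['Temp'], 'dM/dT': dMdT})

# Linear mass loss plus a decomposition-like Gaussian feature near 400 C
T = np.linspace(25, 800, 300)
M = 100 - 0.05 * (T - 25) + 10 * np.exp(-((T - 400) / 50) ** 2)
dtg = dtg_curve(pd.DataFrame({'Temp': T, 'Mass': M}))
```

The temperature at the most negative dM/dT marks the fastest decomposition step, which is what the “kinetics” mode looks for.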
PyGamLab.Data_Analysis.Tensile_Analysis(dataframe, gauge_length=1, width=1, thickness=1, application='plot-force', save=False)[source]

Parameters:
  • dataframe – Raw data from Excel (Force vs Displacement)

  • gauge_length – Initial length of the sample in mm

  • width – Width of the sample in mm

  • thickness – Thickness of the sample in mm

  • application – ‘plot-force’ or ‘plot-stress’

  • save – True to save the plot

  • show_peaks – True to annotate peaks (e.g. UTS)

  • fname – Filename to save if save=True

PyGamLab.Data_Analysis.Tortuosity(df, Density=1)[source]

Calculate the pore tortuosity of membranes and plot a Tortuosity Chart.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing columns: ‘membrane’, ‘Ww’, ‘Wd’, ‘V’ where Ww = weight of wet sample, Wd = weight of dry sample, V = sample volume.

  • Density (float, optional) – Water density (g/cm3). Default is 1.

Returns:

Tortuosity – Array of tortuosity values for each membrane.

Return type:

numpy.ndarray

Example

>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1]
... })
>>> tort_values = Tortuosity(df)
PyGamLab.Data_Analysis.UV_Visible_Analysis(data, application, **kwargs)[source]

Perform multi-mode UV–Visible Spectroscopy analysis for optical and electronic characterization.

This function provides tools for analyzing UV–Vis absorbance spectra, including visualization, Beer–Lambert law concentration estimation, peak identification, Landau maximum detection, and Tauc plot-based band gap estimation.

Parameters:
  • data (pandas.DataFrame or dict) –

    Experimental UV–Vis dataset containing the columns:

    • ‘Wavelength’ : float

      Wavelength values in nanometers (nm)

    • ‘Absorbance’ : float

      Measured absorbance at each wavelength

    Example:
    >>> data = pd.DataFrame({
    ...     "Wavelength": [200, 250, 300, 350],
    ...     "Absorbance": [0.2, 0.8, 1.1, 0.4]
    ... })

  • application (str) –

    Defines the analysis mode. Supported applications:

    • ”plot” :

      Plot the UV–Vis spectrum (Absorbance vs. Wavelength).

    • ”beer_lambert” :

      Apply Beer–Lambert law to calculate molar concentration: A = ε × l × c, where: ε = molar extinction coefficient, l = optical path length (cm), c = concentration (M).

      Required keyword arguments: - molar_extinction_coefficient : float - path_length : float, optional (default=1.0)

    • ”peak_detection” or “identify_peaks” :

      Detect spectral peaks using scipy.signal.find_peaks. Optional keyword arguments: - height : float, threshold for peak height. - distance : int, minimum number of points between peaks.

    • ”band_gap” :

      Generate a Tauc plot for optical band gap determination. Uses the relation (αhν)^n vs. hν, where n = 0.5 for direct and n = 2 for indirect transitions.

      Keyword arguments: - n : float, exponent type (default=0.5)

    • ”landau_max” :

      Identify the wavelength corresponding to maximum absorbance (Landau maximum). If Beer–Lambert parameters are provided, the function estimates the sample concentration at that point.

      Optional keyword arguments: - molar_extinction_coefficient : float - path_length : float, optional (default=1.0)

Keyword Arguments:
  • molar_extinction_coefficient (float, optional) – Required for Beer–Lambert law or Landau Max concentration estimation.

  • path_length (float, default=1.0) – Optical path length of the cuvette (in cm).

  • height (float, optional) – Minimum absorbance for peak detection.

  • distance (int, optional) – Minimum distance between adjacent detected peaks.

  • n (float, default=0.5) – Exponent in the Tauc plot for direct/indirect band gap transitions.

Returns:

Depends on the analysis mode:

  • ”plot” :

    Displays spectrum; returns None.

  • ”beer_lambert” :

    pandas.DataFrame with calculated concentration values.

  • ”peak_detection” / “identify_peaks” :

    pandas.DataFrame listing detected peak wavelengths and absorbances.

  • ”band_gap” :

    pandas.DataFrame with photon energy and Tauc Y-values.

  • ”landau_max” :

    dict with wavelength, absorbance, and (if applicable) concentration.

Return type:

object

Raises:
  • ValueError – If input format or application type is invalid.

  • KeyError – If required columns (‘Wavelength’, ‘Absorbance’) are missing.

Notes

  • Band gap energy (Eg) is estimated by extrapolating the linear portion of the Tauc plot to the energy axis.

  • The Landau maximum provides insights into π–π* or n–π* transitions.

  • Beer–Lambert analysis assumes linearity in the absorbance–concentration range.

  • Wavelengths must be sorted in ascending order for accurate results.

Examples

>>> data = pd.DataFrame({
...     "Wavelength": np.linspace(200, 800, 300),
...     "Absorbance": np.exp(-((np.linspace(200, 800, 300) - 400) / 50)**2)
... })
>>> UV_Visible_Analysis(data, "plot")
# Displays the UV–Vis spectrum.
>>> UV_Visible_Analysis(data, "peak_detection", height=0.2)
# Detects and highlights spectral peaks.
>>> UV_Visible_Analysis(data, "beer_lambert",
...     molar_extinction_coefficient=15000, path_length=1.0)
# Computes sample concentration using Beer–Lambert law.
>>> UV_Visible_Analysis(data, "band_gap", n=0.5)
# Displays the Tauc plot for band gap estimation.
>>> UV_Visible_Analysis(data, "landau_max",
...     molar_extinction_coefficient=20000, path_length=1.0)
# Identifies Landau maximum and estimates concentration.
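The “beer_lambert” mode reduces to c = A / (ε·l). A minimal sketch of that calculation (`beer_lambert` and the output column name are illustrative assumptions, not the library's exact API):

```python
import pandas as pd

def beer_lambert(data, molar_extinction_coefficient, path_length=1.0):
    """Concentration from A = epsilon * l * c, i.e. c = A / (epsilon * l)."""
    out = data.copy()
    out['Concentration (M)'] = out['Absorbance'] / (
        molar_extinction_coefficient * path_length)
    return out

data = pd.DataFrame({'Wavelength': [200, 250, 300, 350],
                     'Absorbance': [0.2, 0.8, 1.1, 0.4]})
result = beer_lambert(data, molar_extinction_coefficient=15000, path_length=1.0)
```

This assumes the absorbance stays within the linear Beer–Lambert regime, as the Notes caution.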
PyGamLab.Data_Analysis.WAXS_Analysis(data, application, **kwargs)[source]

Perform Wide-Angle X-ray Scattering (WAXS) data analysis for crystallographic and nanostructural characterization.

This function analyzes WAXS diffraction patterns to determine structural information such as peak positions, d-spacings, peak widths, crystallite size, degree of crystallinity, and peak shape classification.

Parameters:
  • data (pandas.DataFrame or array-like) –

    Experimental WAXS dataset containing two columns:

    • ‘q’ : float

      Scattering vector (Å⁻¹) or 2θ values (degrees)

    • ‘I’ : float

      Scattering intensity (a.u.)

    Example:
    >>> data = pd.DataFrame({
    ...     "q": [0.5, 1.0, 1.5, 2.0],
    ...     "I": [200, 600, 300, 100]
    ... })

  • application (str) –

    Defines the type of analysis to perform. Supported options include:

    • ”plot” :

      Plot the WAXS pattern (Intensity vs q or 2θ).

    • ”peak_position” :

      Detect the most intense diffraction peaks, compute their corresponding d-spacings using:

      d = 2π / q

      Returns a table of q values, d-spacings, and intensities.

    • ”peak_intensity” :

      Determine the intensity and integrated area under the strongest diffraction peaks, useful for semi-quantitative crystallinity assessment.

    • ”peak_width” :

      Compute full width at half maximum (FWHM) of main peaks and estimate crystallite size using the Scherrer equation:

      L = Kλ / (β cosθ)

      Also estimates overall percent crystallinity from integrated peak areas.

    • ”peak_shape” :

      Classify peak sharpness based on FWHM(2θ) and estimate the crystallinity percentage. Sharp peaks imply high crystallinity, broad peaks indicate amorphous domains.

Optional Keyword Arguments:

  • threshold (float, optional) – Minimum relative intensity (fraction of max) to detect peaks. Default = 0.1 (10% of max intensity).

  • top_n (int, optional) – Number of top peaks to consider. Default = 3.

  • wavelength (float, optional) – X-ray wavelength in Ångströms (required for “peak_width” and “peak_shape”).

  • K (float, optional) – Scherrer constant, typically between 0.89–0.94. Default = 0.9.

  • width_threshold (float, optional) – Threshold in degrees for classifying peak shapes. Default = 2.0° (2θ).

Returns:

Depending on the analysis type: - “peak_position” → DataFrame of q, d-spacing, and intensity - “peak_intensity” → DataFrame of peak positions and intensities - “peak_width” → (DataFrame of peak properties, crystallinity_percent) - “peak_shape” → (DataFrame of peak classification, crystallinity_percent) - “plot” → None

Return type:

pandas.DataFrame or tuple

Raises:
  • ValueError – If an unsupported application is specified or if wavelength is missing for analyses that require it.

  • TypeError – If the input data format is invalid.

Notes

  • q and 2θ are related by: q = (4π / λ) sin(θ)

  • d-spacing provides interplanar distances according to Bragg’s law.

  • Crystallite size estimation assumes negligible strain and instrumental broadening.

  • The degree of crystallinity is estimated from the ratio of crystalline (peak) area to total scattered intensity.

Examples

>>> WAXS_Analysis(data, "plot")
# Displays the WAXS pattern.
>>> WAXS_Analysis(data, "peak_position")
# Returns major peaks and corresponding d-spacings.
>>> WAXS_Analysis(data, "peak_intensity")
# Calculates integrated areas of main peaks.
>>> WAXS_Analysis(data, "peak_width", wavelength=1.54)
# Estimates FWHM, crystallite size, and crystallinity.
>>> WAXS_Analysis(data, "peak_shape", wavelength=1.54)
# Classifies peaks as sharp/broad and returns crystallinity percent.
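The Scherrer estimate used in “peak_width” can be sketched in isolation: the FWHM β is converted to radians and θ is half the diffraction angle. The example peak position and FWHM below are illustrative values, not library defaults:

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength, K=0.9):
    """Crystallite size L = K * wavelength / (beta * cos(theta)),
    with beta the FWHM in radians and theta half the diffraction angle."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength / (beta * np.cos(theta))

# Cu K-alpha (1.54 A): a peak at 2theta = 26.6 deg with 0.5 deg FWHM
L = scherrer_size(26.6, 0.5, wavelength=1.54)
```

The result (here roughly 160 Å) assumes negligible strain and instrumental broadening, as the Notes state.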
PyGamLab.Data_Analysis.Water_Hardness(df)[source]

Evaluate water hardness based on metal content and pyrogenic compounds, filter out unsuitable water, calculate hardness (ppm), and plot results.

Parameters:

df (pandas.DataFrame) – DataFrame containing at least the following columns: - ‘name’: sample name - ‘Cu’, ‘Ni’, ‘Zn’, ‘pyro’, ‘Cya’, ‘Mg’, ‘Ca’

Returns:

  • Filtered DataFrame with suitable water samples

  • List of DataFrames containing the names of unsuitable water samples

  • Displays a bar plot of water hardness (ppm) vs sample names

Return type:

tuple

Example

>>> import pandas as pd
>>> df = pd.DataFrame({
... 'name':['W1','W2','W3'],
... 'Cu':[10,25,5],'Ni':[5,3,15],'Zn':[5,8,12],
... 'pyro':[50,120,90],'Cya':[1,3,0.5],'Mg':[10,15,5],'Ca':[20,25,15]})
>>> Water_Hardness(df)
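Total hardness is conventionally reported in ppm as CaCO₃ equivalents, computed as 2.497·[Ca] + 4.118·[Mg] with concentrations in mg/L. A minimal sketch using this standard formula on the example data; note that PyGamLab's internal formula (and its use of the other metal columns) may differ:

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['W1', 'W2', 'W3'],
    'Mg': [10, 15, 5],
    'Ca': [20, 25, 15],
})

# Conventional total hardness as CaCO3 equivalents (mg/L == ppm):
# hardness = 2.497*[Ca] + 4.118*[Mg]  (standard formula, an assumption here)
df['hardness_ppm'] = 2.497 * df['Ca'] + 4.118 * df['Mg']
print(df[['name', 'hardness_ppm']])
```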
PyGamLab.Data_Analysis.WearBar_Plot(df_list, S=300, F=5, work='bar')[source]

Calculate wear rate for multiple samples and plot as a bar chart.

Parameters:
  • df_list (list of pandas.DataFrame) – Each DataFrame must contain columns: - ‘weight before test’ - ‘weight after test’

  • S (float, optional) – Sliding distance in meters (default 300)

  • F (float, optional) – Normal force in Newtons (default 5)

  • work (str, optional) – Currently only ‘bar’ supported (default ‘bar’)

Returns:

Displays a bar plot of wear rates for the samples.

Return type:

None

Example

>>> df1 = pd.DataFrame({'weight before test':[5.0],'weight after test':[4.9]})
>>> df2 = pd.DataFrame({'weight before test':[4.8],'weight after test':[4.7]})
>>> WearBar_Plot([df1, df2])
PyGamLab.Data_Analysis.WearRate_Calculation(df, S, F, work='wear rate')[source]

Calculate wear rate of samples based on weight loss during a wear test.

Parameters:
  • df (pandas.DataFrame) – DataFrame containing two columns: - ‘weight before test’: sample weight before the test - ‘weight after test’: sample weight after the test

  • S (float) – Sliding distance during the test (in meters)

  • F (float) – Normal force applied during the test (in Newtons)

  • work (str, optional) – Type of calculation, default is ‘wear rate’

Returns:

Wear rate (WR) in units of mass/(force*distance)

Return type:

float

Example

>>> import pandas as pd
>>> df = pd.DataFrame({
... 'weight before test':[5.0,4.8,5.2],
... 'weight after test':[4.9,4.7,5.1]})
>>> WearRate_Calculation(df, S=100, F=50)
2e-05
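The wear-rate formula implied by the Returns field (mass per unit force and distance) can be sketched directly; this is an independent computation under the assumption WR = Δm/(S·F) with the mean weight loss across samples, not a call into PyGamLab:

```python
import pandas as pd

df = pd.DataFrame({'weight before test': [5.0, 4.8, 5.2],
                   'weight after test': [4.9, 4.7, 5.1]})
S, F = 100, 50  # sliding distance (m), normal force (N)

# WR = mean weight loss / (sliding distance * normal force),
# matching the "mass/(force*distance)" units in the Returns field (assumption)
weight_loss = (df['weight before test'] - df['weight after test']).mean()
wr = weight_loss / (S * F)
```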
PyGamLab.Data_Analysis.XPS_Analysis(df, application='plot', sensitivity_factors=None, tolerance=1.5, peak_prominence=None, peak_distance=None, smoothing_window=11, smoothing_poly=3)[source]

Perform X-ray Photoelectron Spectroscopy (XPS) data analysis.

This function allows for plotting the XPS spectrum, returning raw data, performing surface composition analysis based on sensitivity factors, and detecting peaks with optional smoothing.

Parameters:
  • df (pd.DataFrame) – XPS data containing columns ‘eV’ (binding energy) and ‘Counts / s’ (intensity).

  • application (str, optional) – Mode of operation (default=’plot’): - ‘plot’ : Plot the XPS spectrum. - ‘data’ : Return raw energy and counts arrays. - ‘composition’ : Estimate atomic composition using peak areas and sensitivity factors. - ‘peak_detection’ : Detect peaks, optionally smooth the spectrum, and plot.

  • sensitivity_factors (dict, optional) – Element-specific sensitivity factors required for ‘composition’ application. Example: {‘C’: 1.0, ‘O’: 2.93, ‘Fe’: 3.5}

  • tolerance (float, optional) – Binding energy tolerance in eV for peak assignment (default=1.5 eV).

  • peak_prominence (float, optional) – Minimum prominence of peaks for detection (used in ‘composition’ and ‘peak_detection’).

  • peak_distance (int, optional) – Minimum distance between peaks in number of points (used in ‘composition’ and ‘peak_detection’).

  • smoothing_window (int, optional) – Window length for Savitzky-Golay smoothing (must be odd, default=11).

  • smoothing_poly (int, optional) – Polynomial order for Savitzky-Golay smoothing (default=3).

Returns:

  • If application=’plot’ : None (displays plot only)

  • If application=’data’ : tuple (energy, counts) as numpy arrays

  • If application=’composition’ : dict of atomic percentages {element: atomic %}

  • If application=’peak_detection’ : list of dicts with peak information, e.g. [{‘energy’: eV, ‘counts’: intensity, ‘smoothed_counts’: value, ‘width’: FWHM, ‘start_energy’: eV_start, ‘end_energy’: eV_end}, …]

Return type:

varies

Raises:

ValueError –

  • If ‘df’ does not contain required columns

  • If ‘application’ is invalid

  • If sensitivity_factors are not provided for ‘composition’

Examples

>>> # 1. Plot XPS spectrum
>>> XPS_Analysis(df, application='plot')

>>> # 2. Get raw data
>>> energy, counts = XPS_Analysis(df, application='data')

>>> # 3. Compute atomic composition
>>> sensitivity_factors = {'C': 1.0, 'O': 2.93, 'Fe': 3.5}
>>> composition = XPS_Analysis(df, application='composition', sensitivity_factors=sensitivity_factors)
>>> composition
{'C': 45.3, 'O': 32.1, 'Fe': 22.6}

>>> # 4. Detect peaks and plot
>>> peaks_info = XPS_Analysis(df, application='peak_detection', peak_prominence=50, smoothing_window=11)
>>> peaks_info[0]
{'energy': 284.8, 'counts': 1200, 'smoothed_counts': 1185, 'width': 1.2, 'start_energy': 284.0, 'end_energy': 285.6}
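The ‘composition’ mode divides each element's peak area by its sensitivity factor and normalizes to 100%. A minimal sketch of that arithmetic with hypothetical peak areas (the areas below are made up; the library extracts them from the spectrum):

```python
# Atomic percent from peak areas A_i and sensitivity factors S_i:
#   at%_i = (A_i / S_i) / sum_j (A_j / S_j) * 100
peak_areas = {'C': 1000.0, 'O': 2000.0, 'Fe': 1500.0}   # hypothetical integrated areas
sensitivity_factors = {'C': 1.0, 'O': 2.93, 'Fe': 3.5}

normalized = {el: peak_areas[el] / sensitivity_factors[el] for el in peak_areas}
total = sum(normalized.values())
composition = {el: 100.0 * v / total for el, v in normalized.items()}
```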

PyGamLab.Data_Analysis.XRD_Analysis(file, which, peak=0)[source]
Parameters:
  • file (str) – path to the .csv data file

  • which (str) – the operation to perform on the file

  • peak (float, optional) – 2θ for the peak you want to analyse. The default is 0.

Returns:

fwhm – value of FWHM for the peak you specified.

Return type:

float

PyGamLab.Data_Analysis.XRD_ZnO(XRD, application)[source]
Parameters:
  • XRD (DataFrame) – Data containing XRD data.

  • application (str) – Type of application: ‘plot’, ‘FWHM’, or ‘Scherrer’. ‘plot’ → Draw the figure. ‘FWHM’ → Calculate Full Width at Half Maximum. ‘Scherrer’ → Calculate the crystallite size.

Returns:

  • FWHM value if application=’FWHM’

  • Crystallite size if application=’Scherrer’

  • None if application=’plot’

PyGamLab.Data_Analysis.XrdAnalysis(df, which, peak=0)[source]

Perform XRD (X-ray Diffraction) analysis on a given DataFrame containing ‘angle’ and ‘intensity’.

Parameters:
  • df (pd.DataFrame) – A pandas DataFrame with at least two columns: ‘angle’ and ‘intensity’.

  • which (str) – Operation to perform on the DataFrame. Options: - ‘plot’ : Plots the XRD pattern. - ‘fwhm’ : Calculates the Full Width at Half Maximum (FWHM) for a given peak.

  • peak (float, optional) – The 2θ angle of the peak to analyze. Default is 0.

Returns:

fwhm –

  • If which == ‘fwhm’, returns the FWHM value of the specified peak.

  • If which == ‘plot’, returns None.

Return type:

float or None

Example

>>> data = pd.DataFrame({'angle': [20, 21, 22, 23, 24],
...                      'intensity': [5, 20, 50, 20, 5]})
>>> XrdAnalysis(data, which='plot')   # Plots the XRD pattern
>>> XrdAnalysis(data, which='fwhm', peak=22)
0.5
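One common way to estimate FWHM is to linearly interpolate where the intensity crosses half of the peak maximum. A minimal sketch on the example data above; this interpolation-based estimate need not match the library's internal method or its doctest output:

```python
import numpy as np

angle = np.array([20, 21, 22, 23, 24], dtype=float)
intensity = np.array([5, 20, 50, 20, 5], dtype=float)

# FWHM via linear interpolation of the half-maximum crossings
half = intensity.max() / 2
above = intensity >= half
i_left = np.argmax(above)                          # first point at/above half max
i_right = len(above) - 1 - np.argmax(above[::-1])  # last point at/above half max

def cross(x0, y0, x1, y1, level):
    """Linearly interpolate the x where the segment (x0,y0)-(x1,y1) crosses `level`."""
    return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

left = cross(angle[i_left - 1], intensity[i_left - 1],
             angle[i_left], intensity[i_left], half)
right = cross(angle[i_right], intensity[i_right],
              angle[i_right + 1], intensity[i_right + 1], half)
fwhm = right - left
```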
PyGamLab.Data_Analysis.XrdZno(dataframe, application)[source]

Parameters:
  • dataframe (pandas.DataFrame) – Data containing XRD data. Expected columns: [‘Angle’, ‘Det1Disc1’].

  • application (str) – One of [‘plot’, ‘FWHM’, ‘Scherrer’]. ‘plot’ → Draw the XRD pattern. ‘FWHM’ → Calculate Full Width at Half Maximum. ‘Scherrer’ → Calculate crystallite size using the Scherrer equation.

Returns:

  • FWHM (float) if application=’FWHM’

  • Crystallite size (float) if application=’Scherrer’

  • None if application=’plot’

Return type:

float or None
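The Scherrer equation used by the ‘Scherrer’ mode is D = Kλ/(β cos θ), with β the FWHM in radians. A minimal sketch, assuming Cu Kα radiation (λ = 0.15406 nm) and an illustrative ZnO peak; the 2θ position and FWHM below are example values, not library output:

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size (nm) via the Scherrer equation D = K*lambda/(beta*cos(theta)).

    fwhm_deg is the peak FWHM in degrees (2-theta); converted to radians internally.
    """
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * np.cos(theta))

# Example: a ZnO (101)-like reflection near 36.25 deg 2-theta with 0.3 deg FWHM
size_nm = scherrer_size(fwhm_deg=0.3, two_theta_deg=36.25)
```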

PyGamLab.Data_Analysis.old_LN_S_E(df, operation)[source]

This function analyzes the elastic part of a true stress-strain curve.

Parameters:
  • df (pandas.DataFrame) – Must contain 2 columns: - ‘DL’ : elongation (length change in mm) - ‘F’ : force in Newtons

  • operation (str) –

    • ‘PLOT’ : plots the elastic region of the true stress-strain curve

    • ’YOUNG_MODULUS’ : calculates and returns Young’s Modulus (E)

Returns:

  • None if operation=’PLOT’

  • float if operation=’YOUNG_MODULUS’
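Converting ‘DL’ and ‘F’ to a true stress–strain curve requires an initial gauge length L0 and cross-sectional area A0; both are assumptions below (the function's internal values are not documented here). A minimal sketch of the conversion and a slope-based Young's modulus on synthetic linear-elastic data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'DL': [0.0, 0.1, 0.2, 0.3],        # elongation, mm
                   'F':  [0.0, 200.0, 400.0, 600.0]})  # force, N

L0 = 50.0   # initial gauge length in mm (assumption, not a function argument)
A0 = 10.0   # initial cross-section in mm^2 (assumption)

eng_strain = df['DL'] / L0
eng_stress = df['F'] / A0                # MPa (N/mm^2)
true_strain = np.log(1 + eng_strain)     # true strain from engineering strain
true_stress = eng_stress * (1 + eng_strain)

# Young's modulus: slope of a linear fit over the elastic region
E = np.polyfit(true_strain, true_stress, 1)[0]  # MPa
```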

PyGamLab.Data_Analysis.read_msa(filename)[source]