Data_Analysis Module
Tools for data preprocessing, analysis, and visualization.
Features:
Read and preprocess data from files or DataFrames
Filtering, normalization, and feature extraction
Publication-ready plots: line, scatter, histogram, heatmap
68+ experimental analysis tools (NMR, XPS, XRD, UV-Vis, Raman)
Scientific constants, unit converters, and utilities
- PyGamLab.Data_Analysis.AerospaceAnalysis(dataframe, application)[source]
- Parameters:
dataframe (pandas.DataFrame) – Must contain two columns: ['Newton', 'Area']. Values should be in Newtons (N) and square meters (m²).
application (str) – One of ['plot', 'maxPressure']. 'plot' → plot Newton vs Area. 'maxPressure' → return the maximum pressure value.
- Returns:
float or None – Maximum pressure if application='maxPressure'; None if application='plot'.
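A minimal usage sketch (hypothetical force/area values; column names as documented above, assuming pressure = Newton / Area):
>>> import pandas as pd
>>> from PyGamLab.Data_Analysis import AerospaceAnalysis
>>> df = pd.DataFrame({'Newton': [1200.0, 1500.0, 900.0],
...                    'Area': [0.010, 0.012, 0.008]})
>>> AerospaceAnalysis(df, application='maxPressure')   # maximum of Newton/Area, here 125000.0 Pa
>>> AerospaceAnalysis(df, application='plot')          # displays Newton vs Area, returns None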
- PyGamLab.Data_Analysis.Auger_Electron_Spectroscopy_analysis(df, application=None, sensitivity_factors=None)[source]
Analyze and visualize Auger Electron Spectroscopy (AES) data.
This function provides options to plot AES spectra, detect peak positions, and estimate atomic percentages using sensitivity factors.
- Parameters:
df (pandas.DataFrame) – Input DataFrame containing AES data. Must include columns: - 'Energy (eV)' : electron energy values in eV - 'Intensity (Counts)' : corresponding measured intensity
application (str, optional) – Type of analysis to perform: - 'plot' : generates a professional plot of Intensity vs Energy. - 'peak_position' : detects peaks and returns their energy positions and intensities. - 'atomic' : calculates atomic percentages based on provided sensitivity factors.
sensitivity_factors (dict, optional) – Dictionary mapping element symbols to their sensitivity factors. Example: {'C': 0.25, 'O': 0.66, 'Fe': 2.5}. Required if application='atomic'.
- Returns:
If application='plot' : None (displays plot)
- If application='peak_position' : dict with keys:
'Peak Positions (eV)' : numpy array of peak energies
'Peak Intensities (Counts)' : numpy array of peak intensities
- If application='atomic' : list of dicts, one per element, e.g.:
[{'Element': 'C', 'Atomic %': 25.4}, {'Element': 'O', 'Atomic %': 74.6}]
- Return type:
dict or list or None
- Raises:
ValueError – If application='atomic' and sensitivity_factors is not provided.
Examples
# 1. Plot AES spectrum
>>> Auger_Electron_Spectroscopy_analysis(df, application='plot')
# 2. Detect peak positions
>>> peaks = Auger_Electron_Spectroscopy_analysis(df, application='peak_position')
>>> print(peaks)
{'Peak Positions (eV)': array([280, 530]), 'Peak Intensities (Counts)': array([150, 200])}
# 3. Estimate atomic composition
>>> sensitivity = {'C': 0.25, 'O': 0.66, 'Fe': 2.5}
>>> composition = Auger_Electron_Spectroscopy_analysis(df, application='atomic', sensitivity_factors=sensitivity)
>>> print(composition)
[{'Element': 'C', 'Atomic %': 30.5}, {'Element': 'O', 'Atomic %': 69.5}]
- PyGamLab.Data_Analysis.BET_Analysis(df, application, mass_of_sample=None, cross_sectional_area=None, T=None, Pa=None, total_surface_area=None, pore_volume=None)[source]
Perform BET (Brunauer–Emmett–Teller) analysis on adsorption data, including surface area determination, pore volume, and pore radius calculations.
- Parameters:
df (pd.DataFrame) – DataFrame containing adsorption data with columns: - 'Relative Pressure (P/P0)' : relative pressure of adsorbate - 'Adsorbed Volume (cm3/g STP)' : adsorbed gas volume
application (str) – Mode of operation. Options: - 'plot_isotherm' : plots the adsorption isotherm. - 'calculate_surface_area' : plots the BET plot and calculates the specific surface area. - 'pore_volume_calculation' : calculates the total pore volume. - 'pore_radius_calculations' : calculates the average pore radius.
mass_of_sample (float, optional) – Mass of the sample in grams. Required for 'calculate_surface_area'.
cross_sectional_area (float, optional) – Cross-sectional area of the adsorbate molecule (m^2). Required for 'calculate_surface_area'.
T (float, optional) – Ambient temperature in Kelvin. Required for 'pore_volume_calculation'.
Pa (float, optional) – Ambient pressure in Pa. Required for 'pore_volume_calculation'.
total_surface_area (float, optional) – Total surface area (St in m^2) for pore radius calculation. Required for 'pore_radius_calculations'.
pore_volume (float, optional) – Total pore volume (V_liq) in m^3/g. Can be used instead of recalculating from data.
- Returns:
Depending on the application, returns a dictionary with calculated values: - 'calculate_surface_area' : {'slope': m, 'intercept': b, 'v_m': vm, 'constant': c, 'sbet': SBET} - 'pore_volume_calculation' : {'pore_volume': V_liq} - 'pore_radius_calculations' : {'pore_radius_nm': r_p} Returns None for simple plots or if calculations fail.
- Return type:
dict or None
Examples
# 1. Plot adsorption isotherm
>>> BET_Analysis(df, application='plot_isotherm')
# 2. Calculate BET surface area
>>> BET_Analysis(df, application='calculate_surface_area', mass_of_sample=0.05, cross_sectional_area=0.162e-18)
--- BET Surface Area Calculation ---
Slope (m): 10.1234
Y-intercept (b): 2.3456
Monolayer Adsorbed Volume (vm): 0.1234 cm^3/g STP
BET Constant (c): 5.32
Specific Surface Area (SBET): 45.67 m^2/g
# 3. Calculate pore volume
>>> BET_Analysis(df, application='pore_volume_calculation', T=77, Pa=101325)
--- Pore Volume Calculation ---
Volume of gas adsorbed (V_ads): 150.0 cm^3/g STP
Total Pore Volume (V_liq): 0.000150 m^3/g
# 4. Calculate average pore radius
>>> BET_Analysis(df, application='pore_radius_calculations', total_surface_area=45.67, pore_volume=0.000150)
--- Pore Radius Calculation ---
Total Pore Volume (V_liq): 0.000150 m^3/g
Total Surface Area (S): 45.67 m^2
Average Pore Radius (r_p): 6.57 nm
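For context, the surface-area mode described above follows the standard BET linearisation; the lines below are a stand-alone arithmetic sketch of that relation on hypothetical data (N2 cross-section of 0.162 nm² assumed), not PyGamLab's internal code:
>>> import numpy as np
>>> # 1/(V*(P0/P - 1)) = (c-1)/(Vm*c) * (P/P0) + 1/(Vm*c)
>>> p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])     # P/P0
>>> v_ads = np.array([30.0, 34.0, 37.0, 40.0, 43.0])     # cm^3/g STP (hypothetical)
>>> y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
>>> slope, intercept = np.polyfit(p_rel, y, 1)
>>> v_m = 1.0 / (slope + intercept)                      # monolayer volume, cm^3/g STP
>>> c = 1.0 + slope / intercept                          # BET constant
>>> s_bet = v_m * 6.022e23 * 0.162e-18 / 22414.0         # specific surface area, m^2/g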
- PyGamLab.Data_Analysis.CV(data, application)[source]
Perform Cyclic Voltammetry (CV) data analysis for electrochemical characterization.
This function provides core analytical and visualization tools for cyclic voltammetry experiments, including voltammogram plotting, oxidation/reduction peak detection, and peak shape analysis for assessing reversibility of redox processes.
- Parameters:
data (list of tuples, list of lists, or pandas.DataFrame) –
Experimental CV dataset containing the following columns or structure: - 'E' : float
Applied potential (V vs. reference electrode)
- 'I' : float
Measured current (A)
Example:
>>> data = pd.DataFrame({
...     'E': [-0.5, -0.3, 0.0, 0.3, 0.5],
...     'I': [-0.0001, 0.0003, 0.0012, 0.0005, -0.0002]
... })
application (str) –
Defines the analysis type. Supported options include:
- 'plot' :
Display the cyclic voltammogram (current vs. potential).
- 'peaks' :
Detect and highlight oxidation and reduction peaks using scipy.signal.find_peaks with a default prominence of 0.001 A. The function identifies the most intense oxidation peak and up to two reduction peaks.
- 'shape' :
Analyze the shape and symmetry of oxidation/reduction peaks to determine the reversibility of the redox process. It computes:
E_pa : anodic (oxidation) peak potential (V)
E_pc : cathodic (reduction) peak potential (V)
ΔE_p : peak separation (V)
|I_pc/I_pa| : peak current ratio
Based on electrochemical theory: - Reversible systems exhibit ΔE_p ≈ 59 mV/n (for one-electron transfer)
and |I_pc/I_pa| ≈ 1.
Quasi-reversible systems show moderate deviations.
Irreversible systems display large separations and asymmetric peaks.
- Returns:
The function primarily displays visualizations and prints analysis results directly to the console.
- Return type:
None
- Raises:
TypeError – If the input data format is invalid.
ValueError – If the specified application is not supported.
Notes
Ensure that potentials (E) are in ascending or cyclic order for accurate peak detection.
Peak prominence and smoothing parameters can be tuned for noisy data.
The reversibility classification is heuristic and assumes one-electron transfer unless otherwise known.
Examples
>>> data = pd.DataFrame({
...     "E": np.linspace(-0.5, 0.5, 200),
...     "I": 0.001 * np.sin(4 * np.pi * np.linspace(-0.5, 0.5, 200))
... })
>>> CV(data, "plot")   # Displays the cyclic voltammogram.
>>> CV(data, "peaks")  # Detects and highlights oxidation/reduction peaks.
>>> CV(data, "shape")  # Computes ΔEp and |Ipc/Ipa| to infer redox reversibility.
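The reversibility criteria quoted above reduce to simple arithmetic on the detected peaks; a stand-alone illustration with hypothetical peak values (not the function's internal code):
>>> E_pa, E_pc = 0.23, 0.17            # anodic / cathodic peak potentials (V), hypothetical
>>> I_pa, I_pc = 1.05e-3, -1.00e-3     # peak currents (A), hypothetical
>>> dE_p = E_pa - E_pc                 # peak separation; about 0.059/n V for a reversible couple
>>> ratio = abs(I_pc / I_pa)           # about 1 for a reversible couple
>>> reversible = (dE_p < 0.07) and (0.9 < ratio < 1.1)   # heuristic one-electron check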
- PyGamLab.Data_Analysis.Compression_TestAnalysis(df, operator, sample_name, density=0)[source]
Analyze compression test data: plot stress-strain curve or calculate maximum strength.
- Parameters:
df (pandas.DataFrame) – Compression test data containing at least two columns: - 'e': strain - 'S (Mpa)': stress in MPa
operator (str) – Action to perform on data: - 'plot': plots the stress-strain diagram - 'S_max': returns the maximum stress - 'S_max/Density': returns the specific maximum stress (requires density != 0)
sample_name (str) – Name of the sample (used for the plot label)
density (float, optional) – Density of the sample (needed for 'S_max/Density'). Default is 0.
- Returns:
Maximum stress if operator is 'S_max'. Specific maximum stress if operator is 'S_max/Density'. None if operator is 'plot'.
- Return type:
float or None
Example
>>> df = pd.DataFrame({'e': [0, 0.01, 0.02], 'S (Mpa)': [10, 20, 15]})
>>> Compression_TestAnalysis(df, 'S_max', 'Sample1')
20
>>> Compression_TestAnalysis(df, 'plot', 'Sample1')
- PyGamLab.Data_Analysis.DMTA_TestAnalysis(df, operator, sample_name)[source]
Analyze DMTA test data: find maxima or plot storage modulus, loss modulus, and tanδ.
- Parameters:
df (pandas.DataFrame) – DMTA test data containing at least these columns: - 'Frequency (Hz)' - "E'-Storage Modulus (Mpa)" - column 13 (loss modulus), or specify the proper column - 'Tanδ'
operator (str) – Action to perform on data: - 'storage_max': returns maximum storage modulus - 'loss_max': returns maximum loss modulus - 'tan_max': returns maximum Tanδ - 'plot_storage', 'plot_loss', 'plot_tan': plots the corresponding data
sample_name (str) – Name of the sample (used for the plot label)
- Returns:
Maximum value for storage, loss, or Tanδ if requested. None if plotting.
- Return type:
float or None
Example
>>> df = pd.DataFrame({
...     'Frequency (Hz)': [1, 10, 100],
...     "E'-Storage Modulus (Mpa)": [100, 150, 200],
...     'Loss Modulus (Mpa)': [10, 20, 30],   # loss modulus (read from column index 13 in real data files)
...     'Tanδ': [0.1, 0.15, 0.2]})
>>> DMTA_TestAnalysis(df, 'storage_max', 'Sample1')
200
- PyGamLab.Data_Analysis.DSC(data, application='plot', prominence=0.5, distance=5, sample_mass=1.0, heating_rate=1.0, orientation=None)[source]
Perform Differential Scanning Calorimetry (DSC) data processing, analysis, and visualization.
This function allows for automated DSC curve plotting, peak detection, transition temperature determination (Tg, Tm, Tc), enthalpy (ΔH) estimation, and kinetic analysis from experimental DSC datasets. The analysis can be adapted for both exothermic-up and endothermic-up instrument conventions.
- Parameters:
data (pandas.DataFrame) – Input DataFrame containing DSC measurement data. It must include one of: - columns ['t', 'Value'] for time-based measurements, or - columns ['Temperature', 'Value'] for temperature-based measurements.
application (str, optional, default="plot") – The type of analysis or operation to perform. Supported options include: - 'plot' : Plot the raw DSC curve. - 'peak_detection' : Detect and label endothermic and exothermic peaks. - 'Tg' : Estimate the glass transition temperature (Tg). - 'Tm' : Determine the melting temperature (Tm). - 'Tc' : Determine the crystallization temperature (Tc). - 'dH' : Compute enthalpy changes (ΔH) for detected events. - 'kinetics' : Estimate reaction onset, peak, endset, and the corresponding ΔH.
prominence (float, optional, default=0.5) – Minimum prominence of peaks for detection. Higher values filter out smaller peaks. Passed to scipy.signal.find_peaks.
distance (int, optional, default=5) – Minimum number of data points between detected peaks. Helps to separate closely spaced transitions.
sample_mass (float, optional, default=1.0) – Sample mass in milligrams (mg). Used to normalize enthalpy (ΔH) values.
heating_rate (float, optional, default=1.0) – Heating or cooling rate in °C/min. Used to normalize ΔH for temperature-based data.
orientation (str or None, optional, default=None) – Defines the thermal orientation of the DSC instrument: - 'exo_up' : exothermic events produce positive peaks. - 'endo_up' : endothermic events produce positive peaks. If None, the user is prompted interactively to choose.
- Returns:
'plot' : None
'peak_detection' : dict containing coordinates of detected endothermic and exothermic peaks:
{'endothermic': [(x1, y1), (x2, y2), ...], 'exothermic': [(x1, y1), (x2, y2), ...]}
'Tg', 'Tm', 'Tc' : float
The estimated transition temperature value in the same units as the x-axis.
'dH' : list of tuples
Each tuple contains (Temperature, Signal, ΔH) for detected events.
'kinetics' : list of dict
Each dictionary contains:
{'Onset': float, 'Peak': float, 'End': float, 'ΔH (J/g)': float}
- Return type:
varies depending on application
- Raises:
ValueError – If the required data columns are missing or if application is not one of the supported analysis modes.
Notes
The function automatically handles both time-based (t) and temperature-based (Temperature) DSC data.
The orientation parameter affects the sign convention in peak detection and ΔH calculation. For example, exo_up instruments produce positive exothermic peaks, while endo_up instruments produce negative ones.
The area under peaks (ΔH) is numerically integrated using the trapezoidal rule.
Examples
>>> import pandas as pd
>>> data = pd.read_csv("sample_dsc.csv")
>>> DSC(data, application="plot")   # Displays the DSC curve.
>>> results = DSC(data, application="peak_detection", orientation="exo_up")
>>> results["exothermic"]
[(134.2, -0.023), (276.4, -0.018)]
>>> Tg = DSC(data, application="Tg", orientation="exo_up")
Estimated Glass Transition Temperature (Tg): 65.12 °C
>>> dH_values = DSC(data, application="dH", sample_mass=5.0, heating_rate=10.0, orientation="endo_up")
Enthalpy Changes (ΔH):
Peak at 135.50 °C, ΔH ≈ 25.432 J/g
- PyGamLab.Data_Analysis.Desulfurization_Rate(data, application)[source]
Analyze desulfurization rate with and without ultrasonic assistance.
- Parameters:
data (pandas.DataFrame) – A dataframe containing the following columns: - 'Time': measurement times - 'Desulfurization_With_Ultrasonic': removal efficiency with ultrasonic - 'Desulfurization_Without_Ultrasonic': removal efficiency without ultrasonic
application (str) – Choose one of the following options: - 'plot': plots the desulfurization with and without ultrasonic - 'Max_Removal_With_Ultrasonic': returns the maximum removal efficiency with ultrasonic - 'Max_Removal_Without_Ultrasonic': returns the maximum removal efficiency without ultrasonic
- Returns:
Returns the maximum value (float) if application is 'Max_Removal_With_Ultrasonic' or 'Max_Removal_Without_Ultrasonic'.
Returns None if application is 'plot'.
- Return type:
float or None
Examples
>>> import pandas as pd
>>> df = pd.DataFrame({
...     "Time": [0, 10, 20, 30],
...     "Desulfurization_With_Ultrasonic": [5, 20, 45, 60],
...     "Desulfurization_Without_Ultrasonic": [3, 15, 35, 50]
... })
>>> Desulfurization_Rate(df, "Max_Removal_With_Ultrasonic")
60
>>> Desulfurization_Rate(df, "plot")   # Displays plot
- PyGamLab.Data_Analysis.Dynamic_Light_Scattering_Analysis(df, application=None)[source]
Analyze and visualize Dynamic Light Scattering (DLS) data.
This function provides professional plotting of DLS data and extraction of key metrics such as the particle size corresponding to the maximum intensity.
- Parameters:
df (pandas.DataFrame) – Input DataFrame containing DLS data. Expected columns include: - 'Size (nm)' : particle size in nanometers - 'Intensity (%)' : corresponding intensity in percentage - 'Lag time (µs)' : lag time for autocorrelation measurements - 'Autocorrelation' : autocorrelation function values
application (str, optional) –
Type of analysis to perform: - 'plot' : Generate professional plots based on available columns.
If 'Size (nm)' and 'Intensity (%)' exist, plots Intensity vs Size.
If 'Lag time (µs)' and 'Autocorrelation' exist, plots Autocorrelation vs Lag time.
'max_intensity' : Returns the particle size corresponding to maximum intensity.
- Returns:
- If application='max_intensity':
Dictionary with keys: - 'Peak Size (nm)' : particle size at maximum intensity - 'Peak Intensity (%)' : intensity at that size
If application='plot' or None, returns None and displays plots.
- Return type:
dict or None
- Raises:
ValueError –
If required columns are missing for the selected application. - If application is invalid (not 'plot' or 'max_intensity').
Examples
# 1. Plot DLS Intensity vs Size
>>> Dynamic_Light_Scattering_Analysis(df, application='plot')
# 2. Plot Autocorrelation vs Lag time
>>> Dynamic_Light_Scattering_Analysis(df_with_autocorr, application='plot')
# 3. Get particle size at maximum intensity
>>> result = Dynamic_Light_Scattering_Analysis(df, application='max_intensity')
>>> print(result)
{'Peak Size (nm)': 120.5, 'Peak Intensity (%)': 85.2}
- PyGamLab.Data_Analysis.EDS_Analysis(file_path, application, elements=['C', 'O', 'Fe'])[source]
Perform analysis on Energy Dispersive X-ray Spectroscopy (EDS) data.
This function can plot the EDS spectrum, return raw data, quantify elemental composition, or detect peaks in the spectrum.
- Parameters:
file_path (str) – Path to the EDS data file in .msa format.
application (str) – Mode of operation: - 'plot' : Plot the EDS spectrum. - 'data' : Return raw energy and counts arrays. - 'quantify' : Estimate elemental weight and atomic percentages. - 'find_peak' : Detect peaks in the spectrum and plot them.
elements (list of str, optional) – List of elements to quantify when application='quantify'. Default is ['C', 'O', 'Fe'].
- Returns:
If application='data' : tuple (energy, counts) as numpy arrays.
- If application='quantify' : dict with keys:
'weight_percent' : dict of elements and their weight percentages
'atomic_percent' : dict of elements and their atomic percentages
If application='find_peak' : list of tuples [(energy_keV, counts), ...] for detected peaks.
If application='plot' : None (displays plot only).
- Return type:
varies
- Raises:
ValueError – If the 'application' argument is not one of 'plot', 'data', 'quantify', or 'find_peak'.
Examples
# 1. Plot the EDS spectrum
>>> EDS_Analysis('sample.msa', application='plot')
# 2. Get raw energy and counts data
>>> energy, counts = EDS_Analysis('sample.msa', application='data')
# 3. Quantify elemental composition
>>> results = EDS_Analysis('sample.msa', application='quantify', elements=['C', 'O', 'Fe'])
>>> results['weight_percent']
{'C': 12.3, 'O': 30.1, 'Fe': 57.6}
>>> results['atomic_percent']
{'C': 35.2, 'O': 40.8, 'Fe': 24.0}
# 4. Find and plot peaks
>>> peaks = EDS_Analysis('sample.msa', application='find_peak')
>>> peaks
[(0.28, 100), (0.53, 250), (6.40, 1200)]
- PyGamLab.Data_Analysis.EELS_Analysis(data, application)[source]
Perform quantitative and visual analysis of Electron Energy Loss Spectroscopy (EELS) data.
This function allows detailed inspection of EELS spectra across different energy-loss regions, including Zero-Loss Peak (ZLP), low-loss, and core-loss regions. It supports both raw plotting and automated analysis such as peak detection, band gap estimation, plasmon peak identification, and fine structure analysis (ELNES/EXELFS).
- Parameters:
data (list of tuples/lists or pandas.DataFrame) –
Input EELS data. - If a list, each element should be (energy_loss, intensity). - If a DataFrame, it must contain columns:
'energy_loss' : float – energy loss values in eV.
'Intensity' : float – measured intensity (arbitrary units).
application (str) –
Specifies the type of analysis to perform. Options include:
- 'plot' :
Simply plot the EELS spectrum for visual inspection.
- 'ZLP' :
Analyze the Zero-Loss Peak (ZLP) region near 0 eV. Automatically detects the main elastic scattering peak and estimates:
Peak position (energy in eV)
Peak height (intensity)
Full Width at Half Maximum (FWHM) if determinable.
The results are printed and visualized with the smoothed curve and annotations.
- 'low_loss' :
Analyze the Low-Loss Region (≈5 to 50 eV) including the pre-zero baseline. Performs:
Baseline smoothing and visualization
Detection of plasmon peaks (typically <25 eV)
Estimation of optical band gap (Eg) via derivative onset method.
Prints and plots plasmon peaks and band gap position.
- 'core_loss' :
Analyze the Core-Loss (High-Loss) Region (>50 eV). Performs:
Edge onset detection using signal derivative
Step height estimation at the absorption edge
- Identification of fine structure features:
ELNES (Energy-Loss Near Edge Structure) within ~30 eV above onset
EXELFS (Extended Energy-Loss Fine Structure) oscillations beyond onset
Results include detected edges, peaks, and oscillations with visualized spectrum.
- Returns:
The function primarily displays plots and prints analysis results to the console. Key detected parameters (peak positions, FWHM, etc.) are reported in the output text.
- Return type:
None
Notes
Smoothing is performed using a Savitzky-Golay filter (scipy.signal.savgol_filter) with a default window length of 11 and polynomial order of 3.
Peak detection uses scipy.signal.find_peaks with adaptive height thresholds.
- Energy regions are automatically segmented as:
ZLP: around 0 eV
Low-loss: ≈5–50 eV
Core-loss: >50 eV
The function assumes intensity units are arbitrary and energy loss is in electronvolts (eV).
Examples
>>> import pandas as pd
>>> data = pd.DataFrame({
...     "energy_loss": np.linspace(-10, 200, 500),
...     "Intensity": np.random.random(500) * np.exp(-np.linspace(-10, 200, 500)/100)
... })
>>> EELS_Analysis(data, "plot")   # Displays the EELS spectrum
>>> EELS_Analysis(data, "ZLP") # Detects and plots Zero-Loss Peak with FWHM estimation
>>> EELS_Analysis(data, "low_loss") # Identifies plasmon peaks and estimates band gap
>>> EELS_Analysis(data, "core_loss") # Detects absorption edge and ELNES/EXELFS features
- PyGamLab.Data_Analysis.EnergieAnalysis(dataframe, application)[source]
Parameters: - dataframe: pandas.DataFrame
Must contain motor energy data with columns ['Angle[°]', 'Energie', 'Power[mW]', 'Time for a Cycle'].
- application: str
One of ['draw', 'calculate']. 'draw' → plot energy vs angle. 'calculate' → calculate the total consumption energy in Ws.
Returns: - float or None
Energy consumption in Ws if application='calculate'. None if application='draw'.
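A hedged usage sketch with hypothetical motor data (column names as listed above):
>>> import pandas as pd
>>> df = pd.DataFrame({'Angle[°]': [0, 90, 180, 270],
...                    'Energie': [0.0, 0.5, 1.1, 1.6],
...                    'Power[mW]': [10, 12, 11, 9],
...                    'Time for a Cycle': [0.2, 0.2, 0.2, 0.2]})
>>> EnergieAnalysis(df, 'draw')        # plots energy vs angle, returns None
>>> EnergieAnalysis(df, 'calculate')   # total consumption energy in Ws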
- PyGamLab.Data_Analysis.FTIR(data1, application, prominence=0.5, distance=10, save=False)[source]
OLD Version V1.00
- PyGamLab.Data_Analysis.Fatigue_Test_Analysis(data, application)[source]
Analyze fatigue test data and provide multiple metrics and plots.
- Parameters:
data (pandas.DataFrame) – Must contain columns: - 'stress_amplitude' : stress amplitude in MPa - 'number_of_cycles' : number of cycles to failure (N)
application (str) – Determines the operation: - 'plot' : S-N plot (stress vs. number of cycles) - 'max stress amplitude' : maximum stress amplitude - 'fatigue strength' : mean stress amplitude - 'fatigue life' : mean number of cycles - 'stress in one cycle' : Basquin's equation for stress at N=1 - 'Sa' : stress amplitude at N=1 - 'fatigue limit' : cycle where stress becomes constant - 'std stress' : standard deviation of stress - 'std cycles' : standard deviation of cycles
- Returns:
value – Result depending on the chosen application.
- Return type:
float or array-like or None
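A hedged usage sketch on synthetic S-N data (column names as documented above):
>>> import pandas as pd
>>> df = pd.DataFrame({'stress_amplitude': [400, 350, 300, 250],
...                    'number_of_cycles': [1e4, 1e5, 1e6, 1e7]})
>>> Fatigue_Test_Analysis(df, 'plot')                   # S-N plot
>>> Fatigue_Test_Analysis(df, 'max stress amplitude')   # 400
>>> Fatigue_Test_Analysis(df, 'fatigue life')           # mean number of cycles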
- PyGamLab.Data_Analysis.Find_MaxVerticalVelocity(df)[source]
Find the maximum vertical flow velocity and its location from simulation data, and plot velocity versus position.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least two columns: - 'x(m)': position in meters - 'u(m/s)': vertical velocity in m/s
- Returns:
(maximum velocity, location of maximum velocity)
- Return type:
tuple
Example
>>> import pandas as pd
>>> df = pd.DataFrame({'x(m)': [0, 0.5, 1.0], 'u(m/s)': [0.1, 0.3, 0.2]})
>>> Find_MaxVerticalVelocity(df)
The maximum value of Flow Velocity for this problem is: 0.3
Also this maximum value occurs in this location: 0.5
(0.3, 0.5)
- PyGamLab.Data_Analysis.FtirAnalysis(dataframe, application, prominence=0.5, distance=10, save=False)[source]
Parameters: - dataframe: pandas.DataFrame
Raw FTIR data (expects one column with tab-separated values 'X Y').
- application: str
One of ['plot', 'peak']. 'plot' generates an FTIR plot. 'peak' detects and returns peak positions and properties.
- prominence: float, default=0.5
Required prominence of peaks (used in peak detection).
- distance: int, default=10
Minimum horizontal distance (in number of samples) between peaks.
- save: bool, default=False
If True, save the generated plot.
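A hedged usage sketch, assuming the raw export is read into a single-column DataFrame of tab-separated 'X Y' strings as described above (file name hypothetical):
>>> import pandas as pd
>>> raw = pd.read_csv('ftir_sample.txt', header=None)   # one column of tab-separated 'X Y' strings
>>> FtirAnalysis(raw, application='plot', save=False)
>>> peaks = FtirAnalysis(raw, application='peak', prominence=0.5, distance=10)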
- PyGamLab.Data_Analysis.Imerssion_Test(data, application)[source]
Analyze immersion test data for weight gain/loss over time.
- Parameters:
data (pandas.DataFrame) – A DataFrame containing immersion test results with the following columns: - 'time' - 'Mg' - 'Mg_H' - 'Mg_Pl' - 'Mg_HPl'
application (str) –
- The analysis to perform:
'plot' : Plot the change in weight (%) vs. time (days).
'More_Bioactive' : Return the sample with the highest weight gain (more bioactive).
'Less_Bioactive' : Return the sample with the greatest weight loss (less bioactive).
- Returns:
If application == 'plot', displays a plot and returns None.
If application == 'More_Bioactive', returns the name of the most bioactive sample.
If application == 'Less_Bioactive', returns the name of the least bioactive sample.
- Return type:
None or str
Examples
>>> import pandas as pd
>>> df = pd.DataFrame({
...     'time': [1, 2, 3],
...     'Mg': [0.1, 0.2, 0.3],
...     'Mg_H': [0.2, 0.3, 0.5],
...     'Mg_Pl': [0.15, 0.25, 0.35],
...     'Mg_HPl': [0.25, 0.35, 0.45]
... })
>>> Imerssion_Test(df, 'More_Bioactive')
'Mg_HPl'
>>> Imerssion_Test(df, 'Less_Bioactive')
'Mg'
>>> Imerssion_Test(df, 'plot')   # plots the data
- PyGamLab.Data_Analysis.Import_Data(File_Directory=None)[source]
- PyGamLab.Data_Analysis.LN_S_E(df, operation)[source]
This function analyzes the elastic part of a true stress-strain curve.
- Parameters:
df (pandas.DataFrame) – Must contain 2 columns: - 'DL' : elongation (length change in mm) - 'F' : force in Newtons
operation (str) –
'PLOT' : plots the elastic region of the true stress-strain curve
'YOUNG_MODULUS' : calculates and returns Young's Modulus (E)
- Returns:
None if operation='PLOT'
float if operation='YOUNG_MODULUS'
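A hedged usage sketch with hypothetical elongation/force data in the documented 'DL'/'F' columns:
>>> import pandas as pd
>>> df = pd.DataFrame({'DL': [0.0, 0.05, 0.10, 0.15],        # elongation, mm
...                    'F': [0.0, 500.0, 1000.0, 1500.0]})   # force, N
>>> LN_S_E(df, 'PLOT')               # plots the elastic region of the true stress-strain curve
>>> E = LN_S_E(df, 'YOUNG_MODULUS')  # returns Young's Modulus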
- PyGamLab.Data_Analysis.LoadPositionAnalysis(df, operation, area, length)[source]
Analyze Load-Position data: generate curves, calculate stress-strain, normalized stress-strain, or energy absorption density.
- Parameters:
df (pandas.DataFrame) – DataFrame containing two columns: - 'Load (kN)': load values - 'Position (mm)': position values
operation (str) – Operation to perform: - 'LPC' or 'Load-Position Curve' - 'SSCal' or 'Stress-Strain Calculation' - 'SSC' or 'Stress-Strain Curve' - 'NSSCal' or 'Normal Stress-Strain Calculation' - 'NSSC' or 'Normal Stress-Strain Curve' - 'EADCal' or 'EAD Calculation'
area (float) – Cross-sectional area (mm²) for stress calculation
length (float) – Gauge length (mm) for strain calculation
- Returns:
Depends on operation: - stress-strain arrays for 'SSCal' and 'NSSCal' - energy absorption density for 'EADCal' - None for plotting operations
- Return type:
np.ndarray or float or None
Example
>>> df = pd.DataFrame({'Load (kN)': [1, 2, 3], 'Position (mm)': [0, 1, 2]})
>>> LoadPositionAnalysis(df, 'LPC', 100, 50)     # Plots the load-position curve
>>> LoadPositionAnalysis(df, 'SSCal', 100, 50)   # Returns stress-strain arrays
>>> LoadPositionAnalysis(df, 'EADCal', 100, 50)  # Returns energy absorption density
- PyGamLab.Data_Analysis.NMR_Analysis(df, application, peak_regions=None, peak_info=None)[source]
Analyze and visualize ¹H NMR spectra for different applications.
This function provides multiple modes: 1. Plotting the raw NMR spectrum ('plot'). 2. Plotting the spectrum with integrated peak steps ('plot_with_integrals'). 3. Estimating mole fractions of compounds in a mixture ('mixture_composition'). 4. Calculating the percentage impurity of a compound ('calculate_impurity').
- Parameters:
df (pd.DataFrame) – DataFrame containing NMR data with columns: - 'ppm' : chemical shift values (x-axis) - 'Spectrum' : intensity values (y-axis)
application (str) – Mode of operation. Options: - 'plot' : generates a professional NMR spectrum plot. - 'plot_with_integrals' : generates a plot with integral steps (requires peak_regions). - 'mixture_composition' : calculates mole fractions of compounds (requires peak_info). - 'calculate_impurity' : calculates the impurity percentage (requires peak_info with main and impurity info).
peak_regions (dict, optional) – Dictionary specifying integration regions for peaks (required for 'plot_with_integrals'). Format: {region_name: (start_ppm, end_ppm)}
peak_info (dict, optional) –
Dictionary with compound information for mixture analysis or impurity calculation. For 'mixture_composition':
{compound_name: {'region': (start_ppm, end_ppm), 'protons': int}}
For 'calculate_impurity':
{'main_compound': {'region': (start, end), 'protons': int}, 'impurity': {'region': (start, end), 'protons': int}}
- Returns:
The function either displays plots or prints calculated results.
- Return type:
None
Examples
# 1. Simple plot of NMR spectrum
>>> NMR_Analysis(df, application='plot')
# 2. Plot spectrum with integrals
>>> peak_regions = {'peak1': (7.0, 7.5), 'peak2': (3.5, 4.0)}
>>> NMR_Analysis(df, application='plot_with_integrals', peak_regions=peak_regions)
# 3. Mixture composition analysis
>>> peak_info = {
...     'CompoundA': {'region': (7.0, 7.5), 'protons': 5},
...     'CompoundB': {'region': (3.5, 4.0), 'protons': 3}
... }
>>> NMR_Analysis(df, application='mixture_composition', peak_info=peak_info)
--- Mixture Composition ---
Mole Fraction of CompoundA: 0.62
Mole Fraction of CompoundB: 0.38
# 4. Impurity calculation
>>> peak_info = {
...     'main_compound': {'region': (7.0, 7.5), 'protons': 5},
...     'impurity': {'region': (3.5, 4.0), 'protons': 1}
... }
>>> NMR_Analysis(df, application='calculate_impurity', peak_info=peak_info)
--- Impurity Analysis ---
Main Compound Integral per Proton: 0.1234
Impurity Integral per Proton: 0.0123
Estimated Impurity: 9.09%
- PyGamLab.Data_Analysis.Oxygen_HeatCapacity_Analysis(df)[source]
Calculate enthalpy and entropy of oxygen from heat capacity data and plot Cp, enthalpy, and entropy versus temperature.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least two columns: - 'T': temperature values - 'Cp': heat capacity at constant pressure
- Returns:
pandas.DataFrame – Original DataFrame with added 'Enthalpy' and 'Entropy' columns.
Also shows plots of:
Heat capacity vs temperature
Enthalpy and entropy vs temperature
Example
>>> df = pd.DataFrame({'T': [100, 200, 300], 'Cp': [0.9, 1.1, 1.3]})
>>> Oxygen_HeatCapacity_Analysis(df)
- PyGamLab.Data_Analysis.ParticleSizeAnalysis(df, operation)[source]
Analyze particle size distribution: calculate average size or plot size distribution.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least two columns: - 'size': particle sizes (nm) - 'distribution': intensity (%) corresponding to each size
operation (str) – Action to perform: - 'calculate': calculate and return the average particle size - 'plot' : plot the particle size distribution curve
- Returns:
Average particle size if operation='calculate', None if plotting.
- Return type:
float or None
Example
>>> import pandas as pd
>>> df = pd.DataFrame({'size': [10, 20, 30], 'distribution': [30, 50, 20]})
>>> ParticleSizeAnalysis(df, 'calculate')
20
>>> ParticleSizeAnalysis(df, 'plot')   # Displays the plot
- PyGamLab.Data_Analysis.Photoluminescence_analysis(data_frame, application='plot')[source]
Perform photoluminescence (PL) data analysis and visualization.
This function analyzes a PL spectrum, identifies the main emission peak, calculates bandgap energy, estimates FWHM, and provides various plots.
- Parameters:
data_frame (pd.DataFrame) – DataFrame containing PL spectrum data with columns: - 'wavelength' : wavelength in nanometers (nm) - 'intensity' : emission intensity (arbitrary units)
application (str, optional) – Specifies the type of analysis or visualization (default='plot'): - 'plot' : Plot the full PL spectrum. - 'peak_position' : Identify and return the wavelength of the main peak. - 'peak_intensity' : Identify and return the intensity of the main peak. - 'bandgap_energy' : Calculate the bandgap energy (eV) from the peak wavelength. - 'fwhm' : Calculate and return the full width at half maximum (FWHM) in nm.
- Returns:
'plot' : None (displays a plot)
'peak_position' : float, wavelength of the main peak in nm
'peak_intensity' : float, intensity of the main peak
'bandgap_energy' : float, bandgap energy in eV
'fwhm' : float, full width at half maximum in nm
{} : empty dictionary if no peak is detected or the application is invalid
- Return type:
varies
- Raises:
ValueError –
If the DataFrame does not contain the required columns. - If an invalid application string is provided.
Notes
- Bandgap energy is calculated using Eg = h·c / (λ·e) (in eV), where:
h : Planck constant (J·s), c : speed of light (m/s), λ : peak wavelength (m), e : elementary charge (C)
FWHM is estimated using linear interpolation and root-finding.
Examples
# 1. Plot PL spectrum
>>> Photoluminescence_analysis(df, application='plot')
# 2. Get peak wavelength
>>> peak_wl = Photoluminescence_analysis(df, application='peak_position')
>>> print(f'Peak wavelength: {peak_wl:.2f} nm')
# 3. Get peak intensity
>>> peak_int = Photoluminescence_analysis(df, application='peak_intensity')
>>> print(f'Peak intensity: {peak_int:.3f}')
# 4. Calculate bandgap energy
>>> Eg = Photoluminescence_analysis(df, application='bandgap_energy')
>>> print(f'Bandgap: {Eg:.3f} eV')
# 5. Calculate FWHM
>>> fwhm = Photoluminescence_analysis(df, application='fwhm')
>>> print(f'FWHM: {fwhm:.2f} nm')
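For reference, the relation from the Notes above works out as in this stand-alone arithmetic sketch (hypothetical peak wavelength; not the function's internal code):
>>> h, c, e = 6.626e-34, 2.998e8, 1.602e-19
>>> peak_wl_nm = 520.0                        # hypothetical peak wavelength in nm
>>> Eg_eV = h * c / (peak_wl_nm * 1e-9) / e   # about 2.38 eV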
- PyGamLab.Data_Analysis.PolarizationAnalysis(df, work)[source]
Analyze polarization data: plot polarization curve or calculate corrosion potential.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least two columns: - 'Current density': current density in A/cm2 - 'Potential': potential in V vs Ag/AgCl
work (str) – Action to perform: - 'plot': plots the polarization curve (log(current) vs potential) - 'corrosion potential': returns the potential corresponding to the minimum current density
- Returns:
Corrosion potential in volts if work='corrosion potential', None if plotting.
- Return type:
float or None
Example
>>> import pandas as pd
>>> df = pd.DataFrame({'Current density': [1e-6, 1e-5, 1e-4], 'Potential': [0.1, 0.2, 0.3]})
>>> PolarizationAnalysis(df, 'plot')   # Displays the plot
>>> PolarizationAnalysis(df, 'corrosion potential')
0.1
- PyGamLab.Data_Analysis.Polarization_Control(data, application)[source]
Analyze polymerization process data and either visualize trends or return key values.
- Parameters:
data (pd.DataFrame) – A DataFrame containing the following required columns: - 'time' (float or int): time in seconds - 'temp' (float): temperature in °C - 'pressure' (float): pressure in Pa - 'percent' (float): reaction progress percentage (0–100)
application (str) – Selects the analysis/plotting mode. Options: - 'temp_time' : Plot Temperature vs Time - 'pressure_time' : Plot Pressure vs Time - 'percent_time' : Plot Reaction Percent vs Time - '100% reaction' : Return (temperature, pressure) when polymerization reaches 100% - 'Max_pressure' : Return maximum process pressure - 'Max_temp' : Return maximum process temperature
- Returns:
(temp, pressure) if application is '100% reaction'
max pressure (float) if application is 'Max_pressure'
max temperature (float) if application is 'Max_temp'
None if plotting is performed
- Return type:
tuple | float | None
Examples
>>> df = pd.DataFrame({
...     'time': [0, 10, 20, 30],
...     'temp': [25, 50, 75, 100],
...     'pressure': [1, 2, 3, 4],
...     'percent': [0, 30, 70, 100]
... })
>>> Polarization_Control(df, 'temp_time')   # Plots Temperature vs Time
>>> Polarization_Control(df, 'Max_temp')
100
>>> Polarization_Control(df, '100% reaction')
(100, 4)
- PyGamLab.Data_Analysis.Pore_Size(df, A, P, Vis=0.00089, Density=1)[source]
Calculate the pore size of membranes and plot a Pore Size Chart.
- Parameters:
df (pandas.DataFrame) – DataFrame containing columns: 'membrane', 'Ww', 'Wd', 'V', 'q', 'l', where Ww = weight of wet sample (g), Wd = weight of dry sample (g), V = sample volume (cm3), q = flow rate (m3/s), l = membrane thickness (m)
A (float) – Effective surface area of the membrane (m2)
P (float) – Operational pressure (Pa)
Vis (float, optional) – Water viscosity (Pa.s). Default is 8.9e-4
Density (float, optional) – Water density (g/cm3). Default is 1
- Returns:
Pore_Size – Array of pore size values in nm.
- Return type:
numpy.ndarray
Example
>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1],
...     'q': [1e-6, 1.2e-6, 0.9e-6],
...     'l': [1e-3, 1.1e-3, 0.9e-3]
... })
>>> pore_sizes = Pore_Size(df, A=0.01, P=2e5)
- PyGamLab.Data_Analysis.Porosity(df, Density=1)[source]
Calculate porosity of membranes and plot a Porosity Chart.
- Parameters:
df (pandas.DataFrame) – DataFrame containing columns: 'membrane', 'Ww', 'Wd', 'V', where Ww = weight of wet sample, Wd = weight of dry sample, V = sample volume.
Density (float, optional) – Water density (g/cm3). Default is 1.
- Returns:
Porosity – Array of porosity values for each membrane.
- Return type:
numpy.ndarray
Example
>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1]
... })
>>> porosity_values = Porosity(df)
- PyGamLab.Data_Analysis.PressureVolumeIdealGases(dataframe, application)[source]
Parameters: - dataframe: pandas.DataFrame
Must contain 'pressure' and 'volume' columns.
- application: str
- One of ['plot', 'min pressure', 'max pressure', 'min volume',
'max volume', 'average pressure', 'average volume', 'temperature'].
Returns: - float, pandas.Series, or None
Depending on the selected application.
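A hedged usage sketch with hypothetical isothermal P-V data (column names as documented above):
>>> import pandas as pd
>>> df = pd.DataFrame({'pressure': [100000, 120000, 150000],
...                    'volume': [0.030, 0.025, 0.020]})
>>> PressureVolumeIdealGases(df, 'plot')             # P-V plot, returns None
>>> PressureVolumeIdealGases(df, 'max pressure')     # 150000
>>> PressureVolumeIdealGases(df, 'average volume')   # 0.025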
- PyGamLab.Data_Analysis.Raman_Analysis(data, application)[source]
Perform quantitative and visual analysis of Raman spectroscopy data.
This function provides flexible tools for visualizing and analyzing Raman spectra. It supports basic spectrum plotting and automated peak detection for identifying characteristic Raman bands.
- Parameters:
data (list of tuples or list of lists) –
- Raman spectrum data, where each element corresponds to one measurement point:
(wavenumber, intensity)
- wavenumber : float
Raman shift in inverse centimeters (cm⁻¹)
- intensity : float
Measured Raman intensity in arbitrary units (a.u.)
Example:
>>> data = [(100, 0.1), (150, 0.5), (200, 1.2)]
application (str) –
Defines the type of analysis to perform. Supported options:
- 'plot' :
Plot the Raman spectrum with labeled axes and gridlines for quick visual inspection.
- 'peak_detect' :
Automatically detect and highlight prominent peaks in the Raman spectrum. Peak detection is performed using scipy.signal.find_peaks with:
Minimum peak height = 10% of maximum intensity
Minimum distance between peaks = 5 data points
The detected peaks are printed (wavenumber and intensity) and plotted with red markers.
- Raises:
ValueError – If the data format is invalid or the specified application is not supported.
- Returns:
The function generates plots and prints peak data to the console when applicable. No explicit return value.
- Return type:
None
Notes
The function assumes Raman shift values are given in cm⁻¹ and intensity in arbitrary units.
The x-axis is plotted as Raman shift (increasing rightward). Uncomment the invert_xaxis() line to follow the traditional Raman plotting convention (decreasing Raman shift).
Peak detection parameters (height and distance) can be fine-tuned based on spectral resolution.
Examples
>>> import numpy as np
>>> # Generate synthetic Raman data
>>> wavenumbers = np.linspace(100, 2000, 500)
>>> intensities = np.exp(-((wavenumbers - 1350)/40)**2) + 0.5*np.exp(-((wavenumbers - 1580)/30)**2)
>>> data = list(zip(wavenumbers, intensities))
>>> Raman_Analysis(data, "plot") # Displays the Raman spectrum
>>> Raman_Analysis(data, "peak_detect") # Detects and highlights Raman peaks in the spectrum
- PyGamLab.Data_Analysis.Reaction_Conversion_Analysis(data, app)[source]
Analyze and visualize conversion data from a chemical reaction experiment.
- Parameters:
data (pandas.DataFrame) – A DataFrame that must contain the following columns: - 'time' : time in seconds - 'temp' : temperature in Celsius - 'pressure' : pressure in bar - 'conv' : conversion percentage
app (str) – Determines the action: - 'PLOT_TEMP' → plots Temperature vs. Time - 'PLOT_PRESSURE' → plots Pressure vs. Time - 'PLOT_CONVERSION' → plots Conversion vs. Time - 'MAXIMUM_CONVERSION' → returns the index and values at maximum conversion
- Returns:
result –
If app='MAXIMUM_CONVERSION', returns the index of maximum conversion.
Otherwise, returns None (just shows plots).
- Return type:
int or None
- Raises:
TypeError – If app is not one of the accepted values.
Examples
>>> import pandas as pd
>>> df = pd.DataFrame({
...     "time": [0, 1, 2, 3],
...     "temp": [300, 310, 315, 320],
...     "pressure": [1, 1.2, 1.3, 1.4],
...     "conv": [10, 20, 30, 50]
... })
>>> Reaction_Conversion_Analysis(df, "PLOT_TEMP")   # plots Temperature vs Time
>>> Reaction_Conversion_Analysis(df, "MAXIMUM_CONVERSION")
maximum of temperature is 320
maximum of conversion is 50
The temperature in maximum conversion is 320 and the pressure is 1.4
3
- PyGamLab.Data_Analysis.SAXS_Analysis(data, application)[source]
Perform Small-Angle X-ray Scattering (SAXS) data analysis for nanostructural characterization.
This function provides key analytical tools to extract structural information from SAXS profiles, including visualization, peak position detection, intensity integration, peak width (FWHM) determination, and Guinier (radius of gyration) analysis.
- Parameters:
data (pandas.DataFrame) –
Experimental SAXS dataset containing: - 'q' : float
Scattering vector magnitude (1/nm)
- 'I' : float
Scattered intensity I(q)
Example:
>>> data = pd.DataFrame({
...     'q': [0.01, 0.02, 0.03, 0.04],
...     'I': [300, 800, 400, 200]
... })
application (str) –
Defines the type of analysis to perform. Supported options include:
- 'plot' :
Plot I(q) vs. q to visualize the SAXS curve and scattering profile.
- 'peak_position' :
Identify the q position of the main scattering peak and calculate the corresponding real-space characteristic spacing: d = 2π / q_peak
- 'peak_intensity' :
Quantify the intensity and integrated area under the most intense scattering peak using numerical integration (numpy.trapz).
- 'peak_width' :
Compute the full width at half maximum (FWHM) of the main scattering peak, which provides information about domain size and order distribution.
- 'rog' :
Perform Guinier analysis (low-q region) by linear fitting of ln I(q) vs. q² to estimate the radius of gyration (Rg) and I(0):
ln I(q) = ln I(0) − (Rg² * q²) / 3
- Returns:
Depending on the analysis: - 'peak_position' → (q_peak, d_spacing) - 'peak_intensity' → (I_peak, area) - 'peak_width' → FWHM - 'rog' → (Rg, I0) - 'plot' → None
- Return type:
tuple or None
- Raises:
TypeError – If the input data is not a pandas DataFrame or lacks required columns.
ValueError – If the specified application is not supported or no peaks are detected.
Notes
q is defined as q = (4π/λ) sin(θ), where θ is half the scattering angle.
The characteristic spacing (d) corresponds to periodicity or average interparticle distance.
FWHM can be used to estimate crystalline order (via Scherrer-like relations).
The Guinier approximation is valid only for q·Rg < 1.3.
Examples
>>> SAXS_Analysis(data, "plot") # Displays the SAXS intensity profile.
>>> SAXS_Analysis(data, "peak_position") # Prints and plots q_peak and corresponding d-spacing.
>>> SAXS_Analysis(data, "peak_intensity") # Calculates peak height and integrated scattering area.
>>> SAXS_Analysis(data, "peak_width") # Determines full width at half maximum (FWHM) in q-space.
>>> SAXS_Analysis(data, "rog") # Performs Guinier analysis to estimate radius of gyration (Rg).
- PyGamLab.Data_Analysis.SICalculation(f_loc, P, PC, Density=1)[source]
This function is used for Separation Index (SI) calculation.
P : pressure (bar)
Density : feed density (g/cm3)
PC : pollutant concentration in the feed (g/L)
Returns the Separation Index and plots Flux, Rejection, and SI charts.
- PyGamLab.Data_Analysis.SI_Calculation(df, P, PC, Density=1)[source]
Calculate Separation Index (SI) and plot Flux, Rejection, and SI charts.
- Parameters:
df (pandas.DataFrame) – DataFrame containing columns: 'Mem Code', 'Flux', 'Rejection'.
P (float) – Pressure (bar)
PC (float) – Pollutant concentration in the feed (g/L)
Density (float, optional) – Feed density (g/cm3), default is 1.
- Returns:
SI – Array of Separation Index values for each membrane.
- Return type:
numpy.ndarray
Example
>>> df = pd.DataFrame({
...     'Mem Code': ['M1', 'M2', 'M3'],
...     'Flux': [90, 150, 250],
...     'Rejection': [0.5, 0.7, 0.8]
... })
>>> SI = SI_Calculation(df, P=5, PC=50)
- PyGamLab.Data_Analysis.Signal_To_Noise_Ratio(data, application)[source]
Calculate and optionally plot signal, noise, or SNR from experimental data.
- Parameters:
data (DataFrame) –
- Experimental data with columns:
- 'location': measurement locations
- 'Signal Strength': signal power in dBm
- 'Noise Power': noise power in dBm
application (str) –
- One of the following:
'plot signal' - plots the signal column
'plot noise' - plots the noise column
'plot snr' - plots the signal-to-noise ratio
- Returns:
mx – Maximum signal-to-noise ratio in dB
- Return type:
float
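A hedged usage sketch with hypothetical dBm readings (with both quantities in dBm, the SNR in dB is presumably their difference):
>>> import pandas as pd
>>> df = pd.DataFrame({'location': ['A', 'B', 'C'],
...                    'Signal Strength': [-50, -55, -60],   # dBm
...                    'Noise Power': [-90, -92, -88]})      # dBm
>>> Signal_To_Noise_Ratio(df, 'plot snr')   # plots the SNR and returns the maximum (here 40 dB at location A)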
- PyGamLab.Data_Analysis.SolidificationStart(df, temp_sol)[source]
Determine if solidification has started based on temperature profile, and plot temperature along the centerline.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least two columns: - 'x(m)': position in meters - 'T(K)': temperature in Kelvin
temp_sol (float) – Solidus temperature of the material in Kelvin.
- Returns:
True if solidification has started (temperature <= solidus temperature), False otherwise.
- Return type:
bool
Example
>>> import pandas as pd
>>> df = pd.DataFrame({'x(m)': [0, 0.5, 1], 'T(K)': [1600, 1550, 1500]})
>>> SolidificationStart(df, 1520)
The solidification process has started.
True
- PyGamLab.Data_Analysis.StatisticalAnalysis(df, operation)[source]
Perform statistical analysis or plots on a DataFrame.
- Parameters:
df (pandas.DataFrame) – Input DataFrame with numeric features.
operation (str) – Operation to perform: - 'statistics' : prints min, max, median, quantiles, IQR, and z-score for each numeric feature - 'histogram' : plots histograms for numeric features - 'correlation': plots a correlation heatmap - 'pairplot' : plots a pairplot with regression lines
- Returns:
Prints statistics or displays plots.
- Return type:
None
Example
>>> import pandas as pd
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [4, 3, 2, 1]})
>>> StatisticalAnalysis(df, 'statistics')
>>> StatisticalAnalysis(df, 'histogram')
>>> StatisticalAnalysis(df, 'correlation')
>>> StatisticalAnalysis(df, 'pairplot')
- PyGamLab.Data_Analysis.Stress_Strain1(df, operation, L0=90, D0=9)[source]
This function takes data and an operation. It plots the Stress-Strain curve if the operation is 'plot' and otherwise finds the UTS value (the ultimate tensile strength).
- Parameters:
df (DataFrame) – Has 2 columns: 'DL' (length change in mm) and 'F' (force in N).
operation (str) – Tells the function whether to PLOT the curve or find the UTS value.
L0: initial length of the sample. D0: initial diameter of the sample.
- Return type:
None (the Stress-Strain plot) or float (the UTS value)
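A hedged usage sketch with hypothetical tensile data ('DL' in mm, 'F' in N; per the description above, any operation other than plotting returns the UTS):
>>> import pandas as pd
>>> df = pd.DataFrame({'DL': [0.0, 0.5, 1.0, 1.5],
...                    'F': [0.0, 2000.0, 3500.0, 3000.0]})
>>> Stress_Strain1(df, 'plot', L0=90, D0=9)   # draws the Stress-Strain curve
>>> Stress_Strain1(df, 'UTS', L0=90, D0=9)    # returns the ultimate tensile strength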
- PyGamLab.Data_Analysis.Stress_Strain2(input_file, which, count)[source]
This function claculates the stress and strain Parameters from load and elongation data âââ- input_file : .csv format
the file must be inserted in csv.
- whcihstr
please say which work we do ( plot or calculate?).
- count: int
please enter the yarn count in Tex
remember: gauge length has been set in 250 mm
- PyGamLab.Data_Analysis.Stress_Strain3(input_data, action)[source]
- PyGamLab.Data_Analysis.Stress_Strain4(file_path, D0, L0)[source]
This function uses a data file containing length and force, calculates the engineering, true, and yield stress and strain, and also draws a graph of these.
Parameters: D0 (mm): initial diameter used to calculate stress. L0 (mm): initial length used to calculate strain. F (N): the force applied to the object during the test. DL (mm): length changes.
Returns: Depending on the operation selected, it returns calculated values, plots, advanced analysis, or saves results.
- PyGamLab.Data_Analysis.Stress_Strain5(input_data, action)[source]
- PyGamLab.Data_Analysis.Stress_Strain6(data, application)[source]
This function converts F and dD to stress and strain using the specimen thickness (1.55 mm), width (3.2 mm), and parallel length (35 mm).
- Parameters:
data (DataFrame) – this DataFrame contains F(N) and dD(mm) received from the tensile test machine.
application (str) – determines the expected output of the Stress_Strain function.
- Returns:
The return may be the elongation at break, the strength, or a plot.
- Return type:
int, float, or None (plot)
- PyGamLab.Data_Analysis.TGA(data, application)[source]
Perform multi-mode Thermogravimetric Analysis (TGA) for material characterization.
This function enables comprehensive TGA data analysis for studying thermal stability, composition, surface modification, and reaction kinetics. It supports visualization, derivative thermogravimetry (DTG), decomposition step identification, and moisture or solvent content determination.
- Parameters:
data (pandas.DataFrame) –
Experimental TGA dataset with the following required columns: - 'Temp' : float
Temperature in degrees Celsius (°C)
- 'Mass' : float
Corresponding sample mass in percentage (%)
Example:
>>> data = pd.DataFrame({
...     'Temp': [25, 100, 200, 300],
...     'Mass': [100, 99.5, 80.2, 10.5]
... })
application (str) –
Defines the type of analysis to perform. Supported options include:
- 'plot' :
Plot the raw TGA curve (Mass vs. Temperature).
- 'peaks' :
Compute and display the derivative thermogravimetry (DTG) curve and identify key decomposition peaks using scipy.signal.find_peaks.
- 'stability' :
Estimate the onset temperature of thermal degradation by tangent extrapolation from the baseline region.
- 'moisture' :
Calculate moisture or solvent content based on mass loss before the first decomposition event (typically below 150 °C).
- 'functionalization' :
Identify surface functionalization or modification steps by detecting multiple degradation peaks above 150 °C.
- 'composition' :
Estimate polymer and filler content from the initial and final mass values (residue analysis).
- 'DTG' :
Compute and plot the first derivative of the TGA curve (dM/dT) for insight into reaction rate behavior.
- 'decomposition_steps' :
Identify and quantify major decomposition events (DTG peaks), returning their temperatures and mass values.
- 'kinetics' :
Evaluate relative reaction rates and identify the fastest decomposition step (maximum |dM/dT| above 150 °C).
- Raises:
TypeError – If the input is not a pandas DataFrame.
ValueError – If required columns ('Temp', 'Mass') are missing or if the specified application is not supported.
- Returns:
Depends on the application:
- 'plot' :
Displays the TGA curve; returns None.
- 'peaks' :
DataFrame containing detected DTG peak temperatures and intensities.
- 'stability' :
Dictionary with onset temperature and mass at onset.
- 'moisture' :
Dictionary with moisture content, cutoff temperature, and mass loss.
- 'functionalization' :
DataFrame listing detected modification steps.
- 'composition' :
Dictionary with polymer and filler content percentages.
- 'DTG' :
DataFrame of temperatures and corresponding dM/dT values.
- 'decomposition_steps' :
DataFrame of decomposition step information.
- 'kinetics' :
Dictionary with step-wise reaction rate data and the fastest decomposition step.
- Return type:
object
Notes
TGA data should be preprocessed to ensure monotonic temperature increase.
The function uses numerical differentiation (np.gradient) for DTG calculations.
Peak prominence thresholds can be adjusted to improve detection sensitivity.
Onset temperatures are approximate and depend on the slope estimation method.
Examples
>>> import pandas as pd, numpy as np
>>> T = np.linspace(25, 800, 300)
>>> M = 100 - 0.05*(T - 25) + 10*np.exp(-((T-400)/50)**2)
>>> data = pd.DataFrame({"Temp": T, "Mass": M})
>>> TGA(data, "plot")   # Displays the TGA curve.
>>> peaks_info = TGA(data, "peaks")
>>> print(peaks_info.head())
>>> stability = TGA(data, "stability")
>>> print(stability)
- PyGamLab.Data_Analysis.Tensile_Analysis(dataframe, gauge_length=1, width=1, thickness=1, application='plot-force', save=False)[source]
Parameters: - dataframe: raw data from Excel (Force vs Displacement) - gauge_length: initial length of the sample in mm - width: width of the sample in mm - thickness: thickness of the sample in mm - application: 'plot-force' or 'plot-stress' - save: True to save the plot - show_peaks: True to annotate peaks (e.g. UTS) - fname: filename to save if save=True
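A hedged usage sketch, assuming the raw Force-vs-Displacement table is loaded from Excel as the parameter list suggests (file name and sample dimensions hypothetical):
>>> import pandas as pd
>>> raw = pd.read_excel('tensile_sample.xlsx')   # Force vs Displacement columns
>>> Tensile_Analysis(raw, gauge_length=50, width=10, thickness=2, application='plot-force')
>>> Tensile_Analysis(raw, gauge_length=50, width=10, thickness=2, application='plot-stress', save=True)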
- PyGamLab.Data_Analysis.Tortuosity(df, Density=1)[source]
Calculate the pore tortuosity of membranes and plot a Tortuosity Chart.
- Parameters:
df (pandas.DataFrame) – DataFrame containing columns: 'membrane', 'Ww', 'Wd', 'V', where Ww = weight of wet sample, Wd = weight of dry sample, V = sample volume.
Density (float, optional) – Water density (g/cm3). Default is 1.
- Returns:
Tortuosity – Array of tortuosity values for each membrane.
- Return type:
numpy.ndarray
Example
>>> df = pd.DataFrame({
...     'membrane': ['M1', 'M2', 'M3'],
...     'Ww': [2.5, 3.0, 2.8],
...     'Wd': [2.0, 2.4, 2.3],
...     'V': [1.0, 1.2, 1.1]
... })
>>> tort_values = Tortuosity(df)
- PyGamLab.Data_Analysis.UV_Visible_Analysis(data, application, **kwargs)[source]
Perform multi-mode UV–Visible Spectroscopy analysis for optical and electronic characterization.
This function provides tools for analyzing UV–Vis absorbance spectra, including visualization, Beer–Lambert law concentration estimation, peak identification, Landau maximum detection, and Tauc plot-based band gap estimation.
- Parameters:
data (pandas.DataFrame or dict) –
Experimental UV–Vis dataset containing the columns: - 'Wavelength' : float
Wavelength values in nanometers (nm)
- 'Absorbance' : float
Measured absorbance at each wavelength
Example:
>>> data = pd.DataFrame({
...     'Wavelength': [200, 250, 300, 350],
...     'Absorbance': [0.2, 0.8, 1.1, 0.4]
... })
application (str) –
Defines the analysis mode. Supported applications:
- 'plot' :
Plot the UV–Vis spectrum (Absorbance vs. Wavelength).
- 'beer_lambert' :
Apply the Beer–Lambert law to calculate molar concentration: A = ε × l × c, where: ε = molar extinction coefficient, l = optical path length (cm), c = concentration (M).
Required keyword arguments: - molar_extinction_coefficient : float - path_length : float, optional (default=1.0)
- 'peak_detection' or 'identify_peaks' :
Detect spectral peaks using scipy.signal.find_peaks. Optional keyword arguments: - height : float, threshold for peak height. - distance : int, minimum number of points between peaks.
- 'band_gap' :
Generate a Tauc plot for optical band gap determination. Uses the relation (αhν)^n vs. hν, where n = 0.5 for direct and n = 2 for indirect transitions.
Keyword arguments: - n : float, exponent type (default=0.5)
- 'landau_max' :
Identify the wavelength corresponding to maximum absorbance (Landau maximum). If Beer–Lambert parameters are provided, the function estimates the sample concentration at that point.
Optional keyword arguments: - molar_extinction_coefficient : float - path_length : float, optional (default=1.0)
- Keyword Arguments:
molar_extinction_coefficient (float, optional) – Required for Beer–Lambert law or Landau max concentration estimation.
path_length (float, default=1.0) – Optical path length of the cuvette (in cm).
height (float, optional) – Minimum absorbance for peak detection.
distance (int, optional) – Minimum distance between adjacent detected peaks.
n (float, default=0.5) – Exponent in the Tauc plot for direct/indirect band gap transitions.
- Returns:
Depends on the analysis mode:
- 'plot' :
Displays spectrum; returns None.
- 'beer_lambert' :
pandas.DataFrame with calculated concentration values.
- 'peak_detection' / 'identify_peaks' :
pandas.DataFrame listing detected peak wavelengths and absorbances.
- 'band_gap' :
pandas.DataFrame with photon energy and Tauc Y-values.
- 'landau_max' :
dict with wavelength, absorbance, and (if applicable) concentration.
- Return type:
object
- Raises:
ValueError – If the input format or application type is invalid.
KeyError – If required columns ('Wavelength', 'Absorbance') are missing.
Notes
Band gap energy (Eg) is estimated by extrapolating the linear portion of the Tauc plot to the energy axis.
The Landau maximum provides insights into π→π* or n→π* transitions.
Beer-Lambert analysis assumes linearity in the absorbance–concentration range.
Wavelengths must be sorted in ascending order for accurate results.
Examples
>>> data = pd.DataFrame({
...     "Wavelength": np.linspace(200, 800, 300),
...     "Absorbance": np.exp(-((np.linspace(200, 800, 300) - 400) / 50)**2)
... })
>>> UV_Visible_Analysis(data, "plot")  # Displays the UV-Vis spectrum.
>>> UV_Visible_Analysis(data, "peak_detection", height=0.2)  # Detects and highlights spectral peaks.
>>> UV_Visible_Analysis(data, "beer_lambert",
...                     molar_extinction_coefficient=15000, path_length=1.0)  # Computes sample concentration using the Beer-Lambert law.
>>> UV_Visible_Analysis(data, "band_gap", n=0.5)  # Displays the Tauc plot for band gap estimation.
>>> UV_Visible_Analysis(data, "landau_max",
...                     molar_extinction_coefficient=20000, path_length=1.0)  # Identifies the Landau maximum and estimates concentration.
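A minimal, self-contained sketch of the Tauc extrapolation described above, assuming absorbance is used in place of the absorption coefficient and following the exponent convention stated in this docstring. The helper tauc_band_gap and its fit_window parameter are hypothetical names, not part of PyGamLab; the library's own fitting strategy may differ.

import numpy as np
import pandas as pd

def tauc_band_gap(data, n=0.5, fit_window=10):
    # Illustrative Tauc extrapolation: plot (A*hv)**n vs. hv and extend the
    # steepest (approximately linear) region to the energy axis.
    wl = np.asarray(data["Wavelength"], dtype=float)   # nm
    A = np.asarray(data["Absorbance"], dtype=float)
    hv = 1239.84 / wl                                  # photon energy in eV
    y = (A * hv) ** n                                  # Tauc ordinate

    order = np.argsort(hv)                             # sort by increasing energy
    hv, y = hv[order], y[order]
    slope = np.gradient(y, hv)
    i = int(np.argmax(slope))                          # steepest rising point
    lo, hi = max(0, i - fit_window), min(len(hv), i + fit_window)

    m, b = np.polyfit(hv[lo:hi], y[lo:hi], 1)          # linear fit of that region
    return -b / m                                      # x-intercept = band-gap estimate (eV)

df = pd.DataFrame({
    "Wavelength": np.linspace(200, 800, 300),
    "Absorbance": np.exp(-((np.linspace(200, 800, 300) - 400) / 50) ** 2),
})
print(round(tauc_band_gap(df, n=0.5), 2))  # rough Eg estimate in eV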
- PyGamLab.Data_Analysis.WAXS_Analysis(data, application, **kwargs)[source]
Perform Wide-Angle X-ray Scattering (WAXS) data analysis for crystallographic and nanostructural characterization.
This function analyzes WAXS diffraction patterns to determine structural information such as peak positions, d-spacings, peak widths, crystallite size, degree of crystallinity, and peak shape classification.
- Parameters:
data (pandas.DataFrame or array-like) –
Experimental WAXS dataset containing two columns:
- 'q' : float
    Scattering vector (Å⁻¹) or 2θ values (degrees)
- 'I' : float
    Scattering intensity (a.u.)
Example:
>>> data = pd.DataFrame({
...     'q': [0.5, 1.0, 1.5, 2.0],
...     'I': [200, 600, 300, 100]
... })
application (str) –
Defines the type of analysis to perform. Supported options include:
- 'plot' :
Plot the WAXS pattern (Intensity vs q or 2θ).
- 'peak_position' :
Detect the most intense diffraction peaks and compute their corresponding d-spacings using:
d = 2π / q
Returns a table of q values, d-spacings, and intensities.
- 'peak_intensity' :
Determine the intensity and integrated area under the strongest diffraction peaks, useful for semi-quantitative crystallinity assessment.
- 'peak_width' :
Compute the full width at half maximum (FWHM) of the main peaks and estimate crystallite size using the Scherrer equation:
L = Kλ / (β cos θ)
Also estimates overall percent crystallinity from integrated peak areas.
- 'peak_shape' :
Classify peak sharpness based on FWHM(2θ) and estimate the crystallinity percentage. Sharp peaks imply high crystallinity; broad peaks indicate amorphous domains.
Optional Keyword Arguments
--------------------------
threshold (float, optional) – Minimum relative intensity (fraction of max) required to detect peaks. Default = 0.1 (10% of maximum intensity).
top_n (int, optional) – Number of top peaks to consider. Default = 3.
wavelength (float, optional) – X-ray wavelength in Ångströms (required for 'peak_width' and 'peak_shape').
K (float, optional) – Scherrer constant, typically between 0.89 and 0.94. Default = 0.9.
width_threshold (float, optional) – Threshold in degrees for classifying peak shapes. Default = 2.0° (2θ).
- Returns:
Depending on the analysis type:
- 'peak_position' : DataFrame of q, d-spacing, and intensity
- 'peak_intensity' : DataFrame of peak positions and intensities
- 'peak_width' : (DataFrame of peak properties, crystallinity_percent)
- 'peak_shape' : (DataFrame of peak classification, crystallinity_percent)
- 'plot' : None
- Return type:
pandas.DataFrame or tuple
- Raises:
ValueError – If an unsupported application is specified or if wavelength is missing for analyses that require it.
TypeError – If the input data format is invalid.
Notes
q and 2θ are related by: q = (4π / λ) sin(θ)
d-spacing provides interplanar distances according to Bragg's law.
Crystallite size estimation assumes negligible strain and instrumental broadening.
The degree of crystallinity is estimated from the ratio of crystalline (peak) area to total scattered intensity.
Examples
>>> WAXS_Analysis(data, "plot") # Displays the WAXS pattern.
>>> WAXS_Analysis(data, "peak_position") # Returns major peaks and corresponding d-spacings.
>>> WAXS_Analysis(data, "peak_intensity") # Calculates integrated areas of main peaks.
>>> WAXS_Analysis(data, "peak_width", wavelength=1.54) # Estimates FWHM, crystallite size, and crystallinity.
>>> WAXS_Analysis(data, "peak_shape", wavelength=1.54) # Classifies peaks as sharp/broad and returns crystallinity percent.
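The two relations quoted in this entry (d = 2π/q and q = (4π/λ)·sin θ) can be checked independently with a few lines of NumPy. The helper names below are hypothetical, not part of PyGamLab; the Cu Kα wavelength of 1.54 Å is only an example value.

import numpy as np
import pandas as pd

def q_from_two_theta(two_theta_deg, wavelength=1.54):
    # q = (4*pi / lambda) * sin(theta), with theta = 2theta/2 in radians
    theta = np.radians(np.asarray(two_theta_deg, dtype=float) / 2.0)
    return 4 * np.pi / wavelength * np.sin(theta)

def d_spacing_from_q(q):
    # Interplanar spacing in Å from q (Å⁻¹): d = 2*pi / q
    return 2 * np.pi / np.asarray(q, dtype=float)

data = pd.DataFrame({'q': [0.5, 1.0, 1.5, 2.0],
                     'I': [200, 600, 300, 100]})
print(d_spacing_from_q(data['q']))      # d-spacings for each q value
print(q_from_two_theta([20.0, 30.0]))   # q values for two 2θ positions (Cu Kα assumed)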
- PyGamLab.Data_Analysis.Water_Hardness(df)[source]
Evaluate water hardness based on metal content and pyrogenic compounds, filter out unsuitable water, calculate hardness (ppm), and plot results.
- Parameters:
df (pandas.DataFrame) – DataFrame containing at least the following columns: - 'name' : sample name - 'Cu', 'Ni', 'Zn', 'pyro', 'Cya', 'Mg', 'Ca'
- Returns:
Filtered DataFrame with suitable water samples
List of DataFrames containing names of unavailable water samples
Displays a bar plot of water hardness (ppm) vs sample names
- Return type:
tuple
Example
>>> import pandas as pd
>>> df = pd.DataFrame({
...     'name': ['W1', 'W2', 'W3'],
...     'Cu': [10, 25, 5], 'Ni': [5, 3, 15], 'Zn': [5, 8, 12],
...     'pyro': [50, 120, 90], 'Cya': [1, 3, 0.5], 'Mg': [10, 15, 5], 'Ca': [20, 25, 15]})
>>> Water_Hardness(df)
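Water hardness is conventionally reported as ppm CaCO3 equivalents computed from the Ca and Mg concentrations; a common textbook conversion is hardness ≈ 2.497·[Ca] + 4.118·[Mg] (both in mg/L). The sketch below uses that relation as an illustration; the exact formula and the suitability thresholds applied by Water_Hardness to the other columns may differ.

import pandas as pd

def hardness_ppm(df):
    # Total hardness as ppm CaCO3 equivalents from Ca and Mg (mg/L).
    # Conversion factors 2.497 (Ca) and 4.118 (Mg) are the standard
    # textbook values, not necessarily what Water_Hardness uses internally.
    return 2.497 * df['Ca'] + 4.118 * df['Mg']

df = pd.DataFrame({'name': ['W1', 'W2', 'W3'],
                   'Mg': [10, 15, 5], 'Ca': [20, 25, 15]})
print(hardness_ppm(df).round(1))  # ppm CaCO3 per sample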
- PyGamLab.Data_Analysis.WearBar_Plot(df_list, S=300, F=5, work='bar')[source]
Calculate wear rate for multiple samples and plot as a bar chart.
- Parameters:
df_list (list of pandas.DataFrame) – Each DataFrame must contain the columns 'weight before test' and 'weight after test'.
S (float, optional) – Sliding distance in meters (default 300).
F (float, optional) – Normal force in Newtons (default 5).
work (str, optional) – Currently only 'bar' is supported (default 'bar').
- Returns:
Displays a bar plot of wear rates for the samples.
- Return type:
None
Example
>>> df1 = pd.DataFrame({'weight before test': [5.0], 'weight after test': [4.9]})
>>> df2 = pd.DataFrame({'weight before test': [4.8], 'weight after test': [4.7]})
>>> WearBar_Plot([df1, df2])
- PyGamLab.Data_Analysis.WearRate_Calculation(df, S, F, work='wear rate')[source]
Calculate wear rate of samples based on weight loss during a wear test.
- Parameters:
df (pandas.DataFrame) – DataFrame containing two columns: 'weight before test' (sample weight before the test) and 'weight after test' (sample weight after the test).
S (float) – Sliding distance during the test (in meters).
F (float) – Normal force applied during the test (in Newtons).
work (str, optional) – Type of calculation; default is 'wear rate'.
- Returns:
Wear rate (WR) in units of mass/(force*distance)
- Return type:
float
Example
>>> import pandas as pd
>>> df = pd.DataFrame({
...     'weight before test': [5.0, 4.8, 5.2],
...     'weight after test': [4.9, 4.7, 5.1]})
>>> WearRate_Calculation(df, S=100, F=50)
0.002
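The wear rate follows directly from the stated units, WR = Δm / (F·S). The sketch below averages the per-sample weight loss, which is one reasonable reading of the multi-row example; note that it yields 2e-5 for the data above rather than the 0.002 shown in the docstring, so the library presumably applies a different unit or aggregation convention.

import pandas as pd

def wear_rate(df, S, F):
    # WR = mean(weight_before - weight_after) / (F * S)
    # The aggregation choice (mean over rows) is an assumption,
    # not taken from the PyGamLab source.
    mass_loss = df['weight before test'] - df['weight after test']
    return mass_loss.mean() / (F * S)

df = pd.DataFrame({'weight before test': [5.0, 4.8, 5.2],
                   'weight after test':  [4.9, 4.7, 5.1]})
print(wear_rate(df, S=100, F=50))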
- PyGamLab.Data_Analysis.XPS_Analysis(df, application='plot', sensitivity_factors=None, tolerance=1.5, peak_prominence=None, peak_distance=None, smoothing_window=11, smoothing_poly=3)[source]
Perform X-ray Photoelectron Spectroscopy (XPS) data analysis.
This function allows for plotting the XPS spectrum, returning raw data, performing surface composition analysis based on sensitivity factors, and detecting peaks with optional smoothing.
- Parameters:
df (pd.DataFrame) – XPS data containing columns 'eV' (binding energy) and 'Counts / s' (intensity).
application (str, optional) – Mode of operation (default='plot'): - 'plot' : Plot the XPS spectrum. - 'data' : Return raw energy and counts arrays. - 'composition' : Estimate atomic composition using peak areas and sensitivity factors. - 'peak_detection' : Detect peaks, optionally smooth the spectrum, and plot.
sensitivity_factors (dict, optional) – Element-specific sensitivity factors required for the 'composition' application. Example: {'C': 1.0, 'O': 2.93, 'Fe': 3.5}
tolerance (float, optional) – Binding energy tolerance in eV for peak assignment (default=1.5 eV).
peak_prominence (float, optional) – Minimum prominence of peaks for detection (used in 'composition' and 'peak_detection').
peak_distance (int, optional) – Minimum distance between peaks in number of points (used in 'composition' and 'peak_detection').
smoothing_window (int, optional) – Window length for Savitzky-Golay smoothing (must be odd, default=11).
smoothing_poly (int, optional) – Polynomial order for Savitzky-Golay smoothing (default=3).
- Returns:
If application='plot' : None (displays plot only)
If application='data' : tuple (energy, counts) as numpy arrays
If application='composition' : dict of atomic percentages {element: atomic %}
If application='peak_detection' : list of dicts with peak information, e.g.
[{'energy': eV, 'counts': intensity, 'smoothed_counts': value, 'width': FWHM, 'start_energy': eV_start, 'end_energy': eV_end}, ...]
- Return type:
varies
- Raises:
ValueError –
If 'df' does not contain the required columns, if 'application' is invalid, or if sensitivity_factors are not provided for 'composition'.
Examples
# 1. Plot XPS spectrum
>>> XPS_Analysis(df, application='plot')

# 2. Get raw data
>>> energy, counts = XPS_Analysis(df, application='data')

# 3. Compute atomic composition
>>> sensitivity_factors = {'C': 1.0, 'O': 2.93, 'Fe': 3.5}
>>> composition = XPS_Analysis(df, application='composition', sensitivity_factors=sensitivity_factors)
>>> composition
{'C': 45.3, 'O': 32.1, 'Fe': 22.6}

# 4. Detect peaks and plot
>>> peaks_info = XPS_Analysis(df, application='peak_detection', peak_prominence=50, smoothing_window=11)
>>> peaks_info[0]
{'energy': 284.8, 'counts': 1200, 'smoothed_counts': 1185, 'width': 1.2, 'start_energy': 284.0, 'end_energy': 285.6}
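For the 'composition' mode, the standard relative-sensitivity-factor formula is at%_i = (A_i / S_i) / Σ_j(A_j / S_j) × 100, where A_i is an integrated peak area and S_i the corresponding sensitivity factor. A minimal sketch of that step only, assuming the peak areas have already been integrated (XPS_Analysis performs the detection and integration itself); the areas below are illustrative values, not real data.

def atomic_composition(peak_areas, sensitivity_factors):
    # at%_i = (A_i / S_i) / sum_j(A_j / S_j) * 100
    # Peak detection and integration are assumed to be done already.
    corrected = {el: peak_areas[el] / sensitivity_factors[el] for el in peak_areas}
    total = sum(corrected.values())
    return {el: 100.0 * val / total for el, val in corrected.items()}

areas = {'C': 12000.0, 'O': 25000.0, 'Fe': 9000.0}   # illustrative peak areas only
sens = {'C': 1.0, 'O': 2.93, 'Fe': 3.5}
print({el: round(pct, 1) for el, pct in atomic_composition(areas, sens).items()})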
- PyGamLab.Data_Analysis.XRD_Analysis(file, which, peak=0)[source]
- Parameters:
file (str) – Path to the .csv file containing the XRD data.
which (str) – Operation to perform on the file.
peak (float, optional) – 2θ of the peak you want to analyse. The default is 0.
- Returns:
fwhm – Value of the FWHM for the specified peak.
- Return type:
float
- PyGamLab.Data_Analysis.XRD_ZnO(XRD, application)[source]
- Parameters:
XRD (DataFrame) – DataFrame containing the XRD data.
application (str) – One of 'plot', 'FWHM', 'Scherrer'. 'plot' : draw the figure. 'FWHM' : calculate the Full Width at Half Maximum. 'Scherrer' : calculate the crystallite size.
Returns
-------
FWHM (float) if application='FWHM'; crystallite size (float) if application='Scherrer'; None if application='plot'.
- PyGamLab.Data_Analysis.XrdAnalysis(df, which, peak=0)[source]
Perform XRD (X-ray Diffraction) analysis on a given DataFrame containing 'angle' and 'intensity'.
- Parameters:
df (pd.DataFrame) – A pandas DataFrame with at least two columns: 'angle' and 'intensity'.
which (str) – Operation to perform on the DataFrame. Options: - 'plot' : Plots the XRD pattern. - 'fwhm' : Calculates the Full Width at Half Maximum (FWHM) for a given peak.
peak (float, optional) – The 2θ angle of the peak to analyze. Default is 0.
- Returns:
fwhm –
If which == 'fwhm', returns the FWHM value of the specified peak.
If which == âplotâ, returns None.
- Return type:
float or None
Example
>>> data = pd.DataFrame({'angle': [20, 21, 22, 23, 24],
...                      'intensity': [5, 20, 50, 20, 5]})
>>> XrdAnalysis(data, which='plot')  # Plots the XRD pattern
>>> XrdAnalysis(data, which='fwhm', peak=22)
0.5
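For the 'fwhm' option, a common approach is to find where the intensity crosses half of the peak maximum on either side of the selected peak and interpolate linearly between the bracketing points. The sketch below illustrates that idea only; it is not the PyGamLab routine, which may smooth or fit the peak and therefore return a different value (e.g. the 0.5 shown above).

import numpy as np
import pandas as pd

def estimate_fwhm(df, peak):
    # Linear-interpolation FWHM (degrees 2θ) of the peak nearest to `peak`.
    # Background is assumed negligible; illustrative only.
    angle = np.asarray(df['angle'], dtype=float)
    inten = np.asarray(df['intensity'], dtype=float)

    i_peak = int(np.argmin(np.abs(angle - peak)))   # index closest to requested 2θ
    half = inten[i_peak] / 2.0

    # Walk outward until intensity drops below half-maximum on each side.
    left = i_peak
    while left > 0 and inten[left] > half:
        left -= 1
    right = i_peak
    while right < len(inten) - 1 and inten[right] > half:
        right += 1

    # Interpolate the two half-maximum crossing angles.
    x_left = np.interp(half, [inten[left], inten[left + 1]], [angle[left], angle[left + 1]])
    x_right = np.interp(half, [inten[right], inten[right - 1]], [angle[right], angle[right - 1]])
    return x_right - x_left

df = pd.DataFrame({'angle': [20, 21, 22, 23, 24],
                   'intensity': [5, 20, 50, 20, 5]})
print(estimate_fwhm(df, peak=22))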
- PyGamLab.Data_Analysis.XrdZno(dataframe, application)[source]
Parameters: - dataframe: pandas.DataFrame
DataFrame containing the XRD data. Expected columns: ['Angle', 'Det1Disc1'].
- application: str
One of ['plot', 'FWHM', 'Scherrer']. 'plot' – Draw the XRD pattern. 'FWHM' – Calculate the Full Width at Half Maximum. 'Scherrer' – Calculate the crystallite size using the Scherrer equation.
Returns: - float or None
Returns the FWHM (float) if application='FWHM'. Returns the crystallite size (float) if application='Scherrer'. Returns None if application='plot'.
- PyGamLab.Data_Analysis.old_LN_S_E(df, operation)[source]
This function analyzes the elastic part of a true stress-strain curve.
- Parameters:
df (pandas.DataFrame) – Must contain 2 columns: - 'DL' : elongation (length change in mm) - 'F' : force in Newtons
operation (str) –
'PLOT' : plots the elastic region of the true stress-strain curve
'YOUNG_MODULUS' : calculates and returns Young's Modulus (E)
- Returns:
None if operation='PLOT'
float if operation='YOUNG_MODULUS'
- PyGamLab.Data_Analysis.read_msa(filename)[source]