The aim of this tutorial is to show how to query NDLab, extract data, and plug them into Python tools and custom code.
There are some examples of analysing and plotting data, but it is up to the user
to search the web for tutorials on data analysis, pattern recognition, machine learning, ...
To install NDLab:
go to the GitHub page and read the How to install section
In some examples this Notebook makes use of two external packages: pandas and plotly.
You can of course use NDLab without them; they just make plotting easier.
To install plotly:
$ pip install plotly
$ pip install ipywidgets
$ jupyter labextension install jupyterlab-plotly
To install pandas:
$ pip install pandas
(You will have to restart Jupyter Lab.)
Let's start by importing NDLab:
# NuclearDataLab
import ndlab as nl
# this is needed only for using autocompletion when typing
from ndlaborm import *
Just type the properties you want. For example, if you need the nuclide charge radius, type NUCLIDE.CHARGE_RADIUS
NUCLIDE.ABUNDANCE, CUM_FY.FAST_YIELD_UNC, DR_ALPHA.HINDRANCE, DR_BETAM.PARENT_LEVEL.JP
(<ndlaborm.Column at 0x7f9cd0037e20>, <ndlaborm.Column at 0x7f9d3094ff10>, <ndlaborm.Column at 0x7f9d00b9bbe0>, <ndlaborm.Column at 0x7f9cd003e3d0>)
Here is the list of entities one can query:
NUCLIDE, LEVEL, GAMMA, L_DECAY, CUM_FY, IND_FY
plus the set of decay radiations:
DR_ALPHA, DR_GAMMA, DR_BETAM, DR_BETAP, DR_ANTI_NU, DR_NU, DR_X, DR_ANNHIL, DR_AUGER, DR_CONV_EL, DR_DELAYED, DR_PHOTON_TOTAL
In the picture below, the link-fields are shown in green (link to a nuclide) or red (link to a level).
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
plt.subplots(figsize=(24, 8))
plt.imshow(mpimg.imread("../docs/_static/ndlab_rel.png"))
<matplotlib.image.AxesImage at 0x7f9d111c79d0>
print(description(NUCLIDE))
NUCLIDE: properties of the nuclide in its ground state

 ABUNDANCE      *(q)* natural abundance, in mole fraction
 ATOMIC_MASS    *(Q)* atomic mass, in micro AMU
 BETA_DECAY_EN  *(Q)* energy available for beta decay [keV]
 BINDING_EN     *(Q)* binding energy per nucleon [keV]
 CHARGE_RADIUS  *(Q)* Root-mean-square of the nuclear charge radius, expressed in fm.
 ELEM_SYMBOL    *(S)* symbol of the element
 MASS_EXCESS    *(Q)* mass excess [keV]
 N                    number of neutrons
 NUC_ID         *(S)* identifier, e.g. 135XE
 QA             *(Q)* alpha decay Q energy [keV]
 QBMN           *(Q)* beta- + n decay energy [keV]
 QEC            *(Q)* electron capture Q-value [keV]
 S2N            *(Q)* 2-neutron separation energy [keV]
 S2P            *(Q)* 2-proton separation energy [keV]
 SN             *(Q)* neutron separation energy [keV]
 SP             *(Q)* proton separation energy [keV]
 Z                    number of protons
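The same helper can presumably be used to document any of the entities listed above (an assumption worth a quick check), for example:

# assumption: description() accepts any ORM entity, not only NUCLIDE
print(description(GAMMA))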
The simplest way to retrieve data is to call the nl.csv_data(fields, filter) or nl.json_data(fields, filter) function.
fields = "GAMMA.ENERGY"
filter = "GAMMA.START_LEVEL.JP = '2+' "
# print only the beginning
print(nl.csv_data(fields, filter)[0:100])
energy
1004.1
535.61
369.1
528.248
1063.78
327.0
400.17
768.77
928.34
822.7
1750.8
1358.3
1897.4
665
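The same query can also be returned as JSON with nl.json_data, using the same fields and filter arguments; a minimal sketch, assuming the result is a plain string like the CSV one:

# same query as above, but as JSON; str() + slicing just keeps the printout short
print(str(nl.json_data(fields, filter))[0:200])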
plt.subplots(figsize=(28, 10))
plt.imshow(mpimg.imread("../docs/_static/krane.png"))
<matplotlib.image.AxesImage at 0x7f9d11bf2100>
This is a summary of the conditions:
Nuclide: Z even, N even

-------------------  Start level: Jp = 2+ (2nd occurrence)
         ↓
         ↓            Gamma: E2 + M1 multipolarity
         ↓
-------------------  End level: Jp = 2+ (1st occurrence)

The corresponding filter criteria and fields are built in the cell below.
fields = "GAMMA.NUC.Z as z, GAMMA.NUC.N as n , abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000) as m"
filter = "( GAMMA.NUC.Z % 2 = 0 ) and ( GAMMA.NUC.N % 2 = 0 ) " # gamma from even-even nuclides
filter += " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "# starts from Jp = 2+ , 2nd occurrence
filter += " and GAMMA.END_LEVEL.JP = '2+' and GAMMA.END_LEVEL.JP_ORDER = 1 " # ends at Jp = 2+ , 1st occurrence
filter += " and ( GAMMA.MULTIPOLARITY = 'E2+M1' or GAMMA.MULTIPOLARITY = 'M1+E2' )" # E2 + M1 multipolarity
print(nl.csv_data(fields, filter)[0:250])
z,n,m
42,58,9.975354526366994
44,56,5.386445353834941
46,56,3.429496832798434
46,64,12.527246761706712
48,64,6.8939591200633314
48,68,2.568086302491941
50,68,3.446048067657999
52,66,37.3482562318861
58,74,21.697049165151792
60,76,
56,84,0.79115097632
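Since nl.csv_data returns plain text, the result can be written to a file with no extra packages (the file name below is just an example):

# dump the raw CSV string to disk - no external dependencies needed
with open("e2_m1_mixing.csv", "w") as f:
    f.write(nl.csv_data(fields, filter))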
So far only the NDLab package has been used; this is enough to retrieve data and export them as CSV or JSON.
What follows shows how to embed NDLab in pandas and Plotly.
Run the cell below, which imports the packages used in the rest of the tutorial.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Plotly
import plotly.graph_objects as go
import plotly.express as px
The Y axis of Krane's picture above needs log(Δ).
Using pandas, numpy, and matplotlib, the data are prepared and plotted.
With the fields and filter of the practice above, the following cell loads the data and shows part of the dataframe.
fields = " GAMMA.NUC.Z as z, GAMMA.NUC.N as n, abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000 ) as y " # z, n, mixing / energy
filter = "(GAMMA.NUC.Z % 2 = 0 ) AND (GAMMA.NUC.N % 2 = 0 ) " # even-even nuclides
filter += " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "# starts from Jp = 2+ , 2nd occurrence
filter += " and GAMMA.END_LEVEL.JP = '2+' and GAMMA.END_LEVEL.JP_ORDER = 1 " # ends at Jp = 2+ , 1st occurrence
filter += " and ( GAMMA.MULTIPOLARITY = 'E2+M1' or GAMMA.MULTIPOLARITY = 'M1+E2' )" # E2 + M1 multipolarity
# load the data into a dataframe
df = nl.pandas_df(fields, filter, pd)
df
  | z | n | y |
---|---|---|---|
0 | 42 | 58 | 9.975355 |
1 | 44 | 56 | 5.386445 |
2 | 46 | 56 | 3.429497 |
3 | 46 | 64 | 12.527247 |
4 | 48 | 64 | 6.893959 |
... | ... | ... | ... |
133 | 40 | 50 | 0.266848 |
134 | 42 | 52 | 2.411730 |
135 | 42 | 54 | 0.732317 |
136 | 38 | 58 | 3.461286 |
137 | 40 | 56 | 0.226757 |
138 rows × 3 columns
Now the data are plotted, calculating Z + N for the x axis and log(Δ) for the y axis.
plt.scatter( df["z"] + df["n"], np.log(df["y"])*10, color='red' )
<matplotlib.collections.PathCollection at 0x7f9d123b46a0>
It is worth superimposing the new data on the old ones.
plt.subplots(figsize=(28, 10))
plt.scatter( df["z"] + df["n"], np.log(df["y"])*10, color='red')
plt.imshow(mpimg.imread("../docs/_static/krane_2.png"), extent=[38, 205, -63, 79])
<matplotlib.image.AxesImage at 0x7f9d1240ff10>
The paper claims that there are minima at closed shells.
On a 2D plot it is difficult to see where the closed shells in N or Z are placed.
The cell below uses Plotly to draw a 3D picture where nuclides with closed shells are in red.
Plotly graphs are interactive: they can be rotated and zoomed.
# condition for closed-shell
filter2 = filter + " and ( GAMMA.NUC.Z in (2,8,20,28,50,82,126) or GAMMA.NUC.N in (2,8,20,28,50,82,126) )"
# load data
df2 = nl.pandas_df(fields, filter2, pd)
# plot 3D
fig = go.Figure(data=[
# all nuclides
go.Scatter3d(
x=df["z"], y=df["n"], z=np.log10(df.y)*10,
mode='markers', marker=dict(size=8, color='blue')
)
,
# the magic ones in red
go.Scatter3d(
x=df2["z"], y=df2["n"], z=np.log10(df2.y)*10,
mode='markers',marker=dict(size=8,color="red")
)
])
# set layout properties
fig.update_layout(margin=dict(l=0, r=0, b=30, t=0) ,width=800, height=800,
scene = dict(xaxis_title='Z',yaxis_title='N',zaxis_title='Delta')
).show()
fields = "GAMMA.ENERGY as energy"
condition = "GAMMA.ENERGY > 0 AND GAMMA.ENERGY < 2000"
dfg = nl.pandas_df(fields,condition,pd)
px.histogram( dfg, x="energy", nbins=200).show()
The picture above relies on your brain to find patterns and outliers, whilst
the cell below shows how to automate the process.
The assumption is that the curve should be smooth, meaning that the first derivative should not have sudden jumps.
Comments in the cell explain the steps.
fields = "GAMMA.ENERGY as energy"
condition = "GAMMA.ENERGY BETWEEN 0 AND 2000"
# load a pandas dataframe using the nl.pandas_df convenience function
a = nl.pandas_df(fields,condition,pd)
# bin the data (consult the web for further examples and explanations)
dft = pd.cut(a[a.columns[0]], bins=300).apply(lambda x: x.mid).value_counts(sort=False).to_frame()
dft = dft.rename(columns={"energy": "counts"})
# get the derivative
dft["delta"] = abs(dft.diff(periods = 1).counts )
# visualise the outlier (consult the web to check Plotly tools for data processing)
fig = px.box(dft, y=dft.delta).show()
# print the energy where the outlier occurs
print("Outlier energy ", dft.index[dft.index.get_loc(dft.delta.idxmax()) - 1 ])
Outlier energy 510.00550000000004
The next cell plots the log ft distribution for each beta transition type.
In the plot the transition type is not decoded: 1NU stands for 1st non-unique, etc.
(a sketch after the cell shows how to relabel the legend).
fields = "DR_BETA.TRANS_TYPE as t , DR_BETA.LOGFT as l"
filter = " DR_BETA.LOGFT != 0" # this is just to avoid empty values
df = nl.pandas_df(fields, filter,pd)
# group by transition type
ans = [pd.DataFrame(y) for x, y in df.groupby(df.columns[0], as_index=False)]
# plot log_ft for each transition type
fig = go.Figure()
for a in ans:
tst = pd.cut(a['l'], bins=125).apply(lambda x: x.mid).value_counts(sort=False)
fig.add_trace(go.Scatter(x=tst.index.values, y = tst, mode="markers",name=a.iloc[0, 0]))
fig.update_layout(width=800, height=400, xaxis_range=[3,11] ).show()
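If you want the legend to spell the transition types out, the trace names can be mapped through a small dictionary. This is only a sketch: apart from '1NU' (1st non-unique, stated above), the labels below are assumptions to be adjusted to the codes actually present in df['t'].

# relabel the legend with human-readable transition types
# NOTE: only '1NU' = 1st non-unique is documented above; the other labels are assumptions
labels = {"A": "allowed", "1NU": "1st non-unique", "1U": "1st unique"}
fig = go.Figure()
for a in ans:
    code = a.iloc[0, 0]
    tst = pd.cut(a['l'], bins=125).apply(lambda x: x.mid).value_counts(sort=False)
    fig.add_trace(go.Scatter(x=tst.index.values, y=tst, mode="markers",
                             name=labels.get(code, code)))
fig.update_layout(width=800, height=400, xaxis_range=[3, 11]).show()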
fields = " DR_BETA.LOGFT as log_ft"
filter = "( DR_BETA.PARENT_LEVEL.JP = '0+') and DR_BETA.TRANS_TYPE ='A'"
df = nl.pandas_df(fields, filter, pd)
px.histogram(df, x=df.log_ft, nbins=80).update_layout(width=350, height=350).show()
With pandas it is easy to dump the data in almost any format; in this way one is free to use them as input for other tools.
Check the web for how to use the various writing and reading methods listed below; a short round-trip example follows the output table.
filter = ("LEVEL.NUC.N % 2 = 0 AND LEVEL.NUC.Z % 2 = 0 AND LEVEL.SEQNO = 1 AND LEVEL.JP != '2+' AND LEVEL.JP_METHOD = JP_STRONG ")
df = nl.pandas_df_nl(nl.levels(filter), pd)
df
# .to_csv()
# .to_json('', orient='records', lines=True) or f.to_json(orient='records')
# .to_hdf('', 'df')
# .to_excel()
# .to_sql()
# .read_json()
# .read_sql()
# .read_csv()
# .read_excel()
# .read_hdf()
# .read_table()
# .read_html()
  | z | n | nucid | l_seqno | energy | energy_unc | energy_limit | half_life | half_life_unc | half_life_limit | ... | jp_str | quadrupole_em | quadrupole_em_unc | quadrupole_em_limit | dipole_mm | dipole_mm_unc | dipole_mm_limit | questionable | configuration | isospin |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 36 | 64 | 100KR | 1 | 309.00 | 10.0000 | = | 0.00 | 0.00 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
1 | 38 | 62 | 100SR | 1 | 129.18 | 0.0009 | = | 3.91 | 0.16 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
2 | 38 | 64 | 102SR | 1 | 126.00 | 0.2000 | = | 3.00 | 1.20 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
3 | 40 | 64 | 104ZR | 1 | 139.30 | 0.0300 | = | 2.00 | 0.30 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
4 | 52 | 54 | 106TE | 1 | 664.80 | 0.0300 | = | 0.00 | 0.00 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | Y | NaN | NaN |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
155 | 40 | 56 | 96ZR | 1 | 1581.64 | 0.0006 | = | 38.00 | 0.70 | = | ... | 0+ | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
156 | 48 | 50 | 98CD | 1 | 1395.10 | 0.0200 | = | 0.00 | 0.00 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
157 | 36 | 62 | 98KR | 1 | 329.00 | 7.0000 | = | 0.00 | 0.00 | = | ... | (2+) | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
158 | 42 | 56 | 98MO | 1 | 734.75 | 0.0004 | = | 21.80 | 0.90 | = | ... | 0+ | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
159 | 40 | 58 | 98ZR | 1 | 854.06 | 0.0006 | = | 64.00 | 7.00 | = | ... | 0+ | 0 | 0 | = | 0 | 0 | = | NaN | NaN | NaN |
160 rows × 28 columns
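As a minimal sketch (the file name is just an example), the dataframe above can be written to disk and read back:

# round trip: dump the selection to CSV and reload it
# any of the formats listed above works similarly
df.to_csv("even_even_levels.csv", index=False)
df_back = pd.read_csv("even_even_levels.csv")
df_back.head()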
Often one wants to write ad hoc processing code that goes beyond filtering, grouping, binning, or other manipulations: for example, calculating the energy balance of a decay.
NDLab provides a set of Python classes and functions that push the data extraction to the background. The functions accept a filter parameter (default: no filter) to restrict the selection.
delayed_n = nl.dr_delayeds( "DR_DELAYED.TYPE = DELAY_N ")
parent_nucs = [ ld.parent for ld in delayed_n]
parent_nucs = nl.remove_doublers(parent_nucs)
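For example, the delayed-neutron precursors just selected can be listed by their identifiers (assuming, as in the examples further below, that each nuclide exposes a nucid attribute):

# identifiers of the delayed-neutron precursors found above
print([n.nucid for n in parent_nucs])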
The following is a list of hands-on examples. To see the full list of classes and functions, please consult the GUIDE.
With NDLab's classes and functions, you do not need to take care of the data retrieval. See how you can calculate the energy balance of a decay; the code below is commented step by step.
# Decay energy balance: quality assessment of the data
filter = ""
# load a nuclide using its identifier
nuc = nl.nuclide("135XE")
# decays holds the list of all the decays of the nuclide; take the first one
decay = nuc.decays[0]
# all possible radiation types for the chosen decay
rads = [
decay.xs(), # X-ray
decay.gammas(), # Gamma
decay.convels(), # Conversion electron
decay.augers(), # Auger
decay.alphas(), # Alpha
decay.betas_m(), # B-
decay.betas_p(), # B+
decay.anti_nus(), # anti-neutrino
decay.nus(), # neutrino
decay.annihil() # annihilation
]
# measured energy for each radiation type
rads_en = [(sum([r[i].energy * r[i].intensity for i in range(len(r))])) for r in rads]
# total measured energy
tot_en = sum(rads_en) / 100
# intensities are per 100 decay of the parent, need to renormalise Q with branching ratio
q_br = decay.q_togs * decay.perc/100
# difference with deposited energy
delta = ((q_br - tot_en)/q_br)*100
print("Branching Ratio: ", decay.perc/100 )
print("Energy accounted for [keV]" , tot_en )
print("Qb- * B.R. [keV] " , q_br)
print("Delta [%] " , int(delta.value * 100) /100)
print("\nNotice the built-in uncertainty propagation")
Branching Ratio:  1.0+/-0
Energy accounted for [keV]  1164+/-28
Qb- * B.R. [keV]  1169+/-4
Delta [%]  0.39

Notice the built-in uncertainty propagation
nuc = nl.nuclide("135CS")
print('Daughters and H-l [s]')
[print(d.nucid, d.gs.half_life_sec) for d in nuc.daughters]
print('\nParents and H-l [s]')
[print(d.nucid, d.gs.half_life_sec) for d in nuc.parents]
Daughters and H-l [s]
135BA 0.0+/-0
135CS (7.3+/-0.9)e+13

Parents and H-l [s]
135XE (3.290+/-0.007)e+04
135XE (3.290+/-0.007)e+04
135CS (7.3+/-0.9)e+13
[None, None, None]
Let's say that level #15 of Xe-135 is populated by some reaction.
The following code gives the list of the levels populated by the gamma cascade, with their energy and population.
def cascade( my_level):
global todo
global levels
global done
for g in my_level.gammas() :
if(g.end_level.l_seqno != 0 and not (g.end_level.l_seqno in todo)):
todo[g.end_level.l_seqno] = g.end_level
if not g.end_level.l_seqno in result:
result[g.end_level.l_seqno] = result[my_level.l_seqno] * g.rel_photon_intens/100
else:
result[g.end_level.l_seqno] += result[my_level.l_seqno] * g.rel_photon_intens/100
if not my_level.l_seqno in done:
done.append(my_level.l_seqno)
todo = dict(sorted(todo.items(), reverse=True))
myrun = todo.copy()
for r in myrun:
if( not r in done):
cascade(levels[r])
return
nuc_id = "135XE"
start_level = 15
levels = nl.nuclide(nuc_id).levels()
result = {start_level : 1.0}
todo = {start_level : levels[start_level] }
done = []
cascade(levels[start_level])
# just printing
line = '|'.join(str(x).ljust(24) for x in [ "energy [keV] ", " population %", "level #"])
print(line +'\n')
for k in sorted(result):
line = '|'.join(str(x).ljust(24) for x in [str(levels[k].energy ), str(result[k]*100.0),levels[k].pk])
print(line)
energy [keV]            | population %           |level #

0.0+/-0                 | 100+/-8                |135XE-0
288.455000+/-0.000015   | 0.0041+/-0.0007        |135XE-1
526.551000+/-0.000013   | 34+/-5                 |135XE-2
1131.512000+/-0.000011  | 7+/-5                  |135XE-3
1260.416000+/-0.000013  | 0.137+/-0.024          |135XE-4
1565.288000+/-0.000016  | 37+/-5                 |135XE-8
1927.294000+/-0.000023  | 100.0                  |135XE-15
Nuclides often have many decay modes, either because of metastable states or because a single energy state has several branchings.
In gamma spectroscopy it is useful to have the total intensity of an energy line in the decay of a parent, regardless of the decay mode.
The cell below selects Eu-152 and calls the dr_photon_tot
function, which performs this task. Note that the function is called with a filter which cuts intensities less than 2%.
df = nl.pandas_df_nl(nl.nuclide("152EU").decays[0].dr_photon_tot("DR_PHOTON_TOTAL.INTENSITY > 2"), pd)
go.Figure().add_trace(go.Scatter(x=df['energy'], y = df['intensity'], mode = "markers")).update_layout(xaxis_title='Energy [keV]',
yaxis_title='Intensity [%]').show()
df
  | parent_nucid | parent_l_seqno | energy | energy_unc | energy_limit | intensity | intensity_unc | intensity_limit | type | count |
---|---|---|---|---|---|---|---|---|---|---|
0 | 152EU | 0 | 6.3540 | 0.0000 | = | 14.000 | 0.500 | = | X | 1.0 |
1 | 152EU | 0 | 39.5220 | 0.0000 | = | 20.870 | 0.200 | = | X | 1.0 |
2 | 152EU | 0 | 40.1170 | 0.0000 | = | 37.800 | 0.300 | = | X | 1.0 |
3 | 152EU | 0 | 45.5230 | 0.0000 | = | 11.810 | 0.160 | = | X | 1.0 |
4 | 152EU | 0 | 45.9980 | 0.0000 | = | 14.850 | 0.200 | = | X | 1.0 |
5 | 152EU | 0 | 46.5750 | 0.0000 | = | 3.050 | 0.070 | = | X | 1.0 |
6 | 152EU | 0 | 121.7817 | 0.0003 | = | 28.530 | 0.160 | = | G | 1.0 |
7 | 152EU | 0 | 244.6974 | 0.0008 | = | 7.550 | 0.040 | = | G | 1.0 |
8 | 152EU | 0 | 344.2785 | 0.0012 | = | 26.590 | 0.200 | = | G | 1.0 |
9 | 152EU | 0 | 411.1165 | 0.0012 | = | 2.237 | 0.013 | = | G | 1.0 |
10 | 152EU | 0 | 443.9606 | 0.0016 | = | 2.827 | 0.014 | = | G | 1.0 |
11 | 152EU | 0 | 778.9045 | 0.0024 | = | 12.930 | 0.080 | = | G | 1.0 |
12 | 152EU | 0 | 867.3800 | 0.0300 | = | 4.230 | 0.030 | = | G | 1.0 |
13 | 152EU | 0 | 964.0570 | 0.0050 | = | 14.510 | 0.070 | = | G | 1.0 |
14 | 152EU | 0 | 1085.8370 | 0.0100 | = | 10.110 | 0.050 | = | G | 1.0 |
15 | 152EU | 0 | 1112.0760 | 0.0030 | = | 13.670 | 0.080 | = | G | 1.0 |
16 | 152EU | 0 | 1408.0130 | 0.0030 | = | 20.870 | 0.090 | = | G | 1.0 |
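Since the result is an ordinary dataframe, pandas can be applied directly, for instance to list the strongest photon lines first:

# strongest lines of the 152EU decay, regardless of radiation type (X or G)
df.sort_values("intensity", ascending=False).head(10)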
Note that one can refer to a dataframe column by its position; in this way one has a generic template to plot in 3D.
# convenience function to plot 3D
def plot3d(fields, condition):
df = nl.pandas_df(fields,condition, pd)
fig = go.Figure(data=[
go.Scatter3d(
x=df[df.columns[0]], y = df[df.columns[1]], z = df[df.columns[2]],
mode='markers', marker=dict(size=8, color='blue') )
])
fig.update_layout(height=1000 ,scene=dict(zaxis=dict( type='log'))).show()
return df
fields = "LEVEL.NUC.Z , LEVEL.NUC.N , LEVEL.HALF_LIFE_SEC"
condition = "LEVEL.ENERGY = 0"
plot3d(fields, condition)
  | z | n | half_life_sec |
---|---|---|---|
0 | 47 | 53 | 1.206000e+02 |
1 | 48 | 52 | 4.910000e+01 |
2 | 49 | 51 | 5.650000e+00 |
3 | 36 | 64 | 7.000000e-03 |
4 | 42 | 58 | 2.212141e+26 |
... | ... | ... | ... |
3349 | 92 | 122 | 5.200000e-04 |
3350 | 78 | 87 | 2.600000e-04 |
3351 | 80 | 90 | 8.000000e-05 |
3352 | 9 | 4 | 4.517205e-22 |
3353 | 17 | 35 | NaN |
3354 rows × 3 columns
fields = "DR_DELAYED.PARENT.Z , DR_DELAYED.PARENT.N , DR_DELAYED.PARENT_LEVEL.HALF_LIFE_SEC"
condition = "DR_DELAYED.TYPE = 'DN' and DR_DELAYED.PARENT.Z > 10 and DR_DELAYED.PARENT.Z < 100"
plot3d(fields, condition)
  | z | n | half_life_sec |
---|---|---|---|
0 | 37 | 63 | 0.052 |
1 | 37 | 63 | 0.052 |
2 | 37 | 63 | 0.052 |
3 | 37 | 63 | 0.052 |
4 | 37 | 63 | 0.052 |
... | ... | ... | ... |
387 | 37 | 62 | 0.054 |
388 | 37 | 62 | 0.054 |
389 | 37 | 62 | 0.054 |
390 | 37 | 62 | 0.054 |
391 | 37 | 62 | 0.054 |
392 rows × 3 columns
For each nuclide, the properties daughters_chain
and parents_chain
contain the offspring and the ancestors, respectively.
The example shows the seamless interplay between sets of NDLab classes and pandas dataframes
# get the offsprings as NDLab class instances
dau = [n.gs for n in nl.nuclide("241AM").daughters_chain]
# dump in a dataframe
df = nl.pandas_df_nl(dau,pd)
# Plot. Hover on a point to see more info
px.scatter(df,
x= df.index,
y= df.half_life_sec,
hover_data=["nucid"],
labels={ "index": "Daughter # ", "y":"Half-life [s]"},
log_y=True).show()
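The parents_chain property can be used in exactly the same way; a sketch, assuming the ancestors also expose a ground state (gs) with a half-life, as in the examples above:

# ancestors of 135CS (its direct parents were listed earlier), plotted the same way
anc = [n.gs for n in nl.nuclide("135CS").parents_chain]
df_anc = nl.pandas_df_nl(anc, pd)
px.scatter(df_anc,
           x=df_anc.index,
           y=df_anc.half_life_sec,
           hover_data=["nucid"],
           labels={"index": "Ancestor # ", "y": "Half-life [s]"},
           log_y=True).show()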
# practice 1
DR_BETA.PARENT_LEVEL.JP
# practice 2
fields = " GAMMA.ENERGY "
filter = " GAMMA.START_LEVEL.JP = '2+' "
#print(nl.csv_data(fields, filter))
#print(nl.json_data(fields, filter))
# practice 3
fields = "GAMMA.NUC.Z as z, GAMMA.NUC.N as n , abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000) as b"
filter = "( GAMMA.NUC.Z % 2 = 0 ) and ( GAMMA.NUC.N % 2 = 0) " # even-even nuclides
filter += " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 " # starts from Jp=2+ ,2nd occurrence
filter += " and GAMMA.END_LEVEL.JP = '2+' and GAMMA.END_LEVEL.JP_ORDER = 1 " # ends at Jp = 2+ ,1st occurrence
filter += " and (GAMMA.MULTIPOLARITY = 'E2+M1'or GAMMA.MULTIPOLARITY = 'M1+E2') " # E2 + M1 multipolarity
#print(nl.csv_data(fields, filter))