NDLab
Version 0


The aim of this tutorial is to show how to query and extract data, and plug them into Python tools and custom code.
There are some examples of analysing and plotting data, but it is up to the user to search the web for tutorials on data analysis, pattern recognition, machine learning, ...

To install NDLab:
go to the GitHub page and read the How to install section

In some examples this notebook makes use of two external packages: pandas and plotly. You can of course use NDLab without them; they just make plotting easier.
To install plotly:

    $ pip install plotly
    $ pip install ipywidgets
    $ jupyter labextension install jupyterlab-plotly

To install pandas:

    $ pip install pandas

(You will have to restart Jupyter Lab)

This tutorial has two parts:

  • 1st - Work with data
    deals with extraction, analysis, plotting, and exporting of the data
  • 2nd - Write your own code
    deals with functions for writing code and building algorithms, like decay energy balance, decay chains, etc...

Let's start by importing NDLab:

In [1]:
# NuclearDataLab
import ndlab as nl

# this is needed only for using autocompletion when typing
from ndlaborm import * 
1st - Work with data

Just type the properties you want. For example, if you need the nuclide charge radius, type NUCLIDE.CHARGE_RADIUS

Practice
Use the autocompletion to see what is available: start typing 'NU' and hit the tab key to load NUCLIDE, then add '.' to see all the fields.
For the moment no code will be run; this is just to test the input method
In [2]:
NUCLIDE.ABUNDANCE, CUM_FY.FAST_YIELD_UNC, DR_ALPHA.HINDRANCE, DR_BETAM.PARENT_LEVEL.JP
Out[2]:
(<ndlaborm.Column at 0x7f9cd0037e20>,
 <ndlaborm.Column at 0x7f9d3094ff10>,
 <ndlaborm.Column at 0x7f9d00b9bbe0>,
 <ndlaborm.Column at 0x7f9cd003e3d0>)

Here is the list of entities one can query:
NUCLIDE, LEVEL, GAMMA, L_DECAY, CUM_FY, IND_FY
plus the set of decay radiations:
DR_ALPHA, DR_GAMMA, DR_BETAM, DR_BETAP, DR_ANTI_NU, DR_NU, DR_X, DR_ANNHIL, DR_AUGER, DR_CONV_EL, DR_DELAYED, DR_PHOTON_TOTAL

Practice
Try the autocompletion with some of the entities to see what they provide.
For the moment no code will be run; this is just to test the input method
Entities can have links to other entities.
For example `GAMMA.START_LEVEL` gives access to the start level of the gamma, and if you type `GAMMA.START_LEVEL.ENERGY` you will refer to its energy.
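As a quick illustration (no data will be retrieved yet; these are just query strings composed with fields from the entities above):

    # a link field is chained with a dot, exactly like a plain field
    start_level_energy = "GAMMA.START_LEVEL.ENERGY"   # energy of the level where the gamma starts
    nuclide_z = "GAMMA.NUC.Z"                         # proton number of the nuclide the gamma belongs to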

In the picture below the link fields are in green (to nuclide) or red (to level).

In [3]:
import matplotlib.pyplot as plt 
import matplotlib.image as mpimg
plt.subplots(figsize=(24, 8))
plt.imshow(mpimg.imread("../docs/_static/ndlab_rel.png"))
Out[3]:
<matplotlib.image.AxesImage at 0x7f9d111c79d0>
Practice 1
Try to reach the angular momentum J of a level that decays emitting beta radiation (DR_BETA).

To see the solution, go to the cell at the very bottom.
In [ ]:
 
Practice
Use print(description(NUCLIDE)), or any of the other entity names, to get the details of the available fields and links.

Note:
- (Q) means that the _UNC and _LIMIT fields are also available. For example SP, SP_UNC, SP_LIMIT
- (q) means that the _UNC field is also available. For example ABUNDANCE, ABUNDANCE_UNC
- (S) means that the field is a string. For example NUCID, or JP
In [4]:
print(description(NUCLIDE))
NUCLIDE: properties of the nuclide in its ground state

ABUNDANCE 	*(q)* natural abundance, in mole fraction
ATOMIC_MASS 	*(Q)* atomic mass, in micro AMU
BETA_DECAY_EN 	*(Q)* energy available for beta decay [keV]
BINDING_EN 	*(Q)* binding energy per nucleon [keV]
CHARGE_RADIUS 	*(Q)* Root-mean-square of the nuclear charge radius, expressed in fm. 
ELEM_SYMBOL 	*(S)* symbol of the element
MASS_EXCESS 	*(Q)* mass excess [keV]
N 		number of neutrons
NUC_ID 		*(S)* identifier, e.g. 135XE
QA 		*(Q)* alpha decay Q energy [keV]
QBMN 		*(Q)* beta- + n decay energy [keV]
QEC 		*(Q)* electron capture Q-value [keV]
S2N 		*(Q)* 2-neutron separation energy [keV]
S2P 		*(Q)* 2-proton separation energy [keV]
SN 		*(Q)* neutron separation energy [keV]
SP 		*(Q)* proton separation energy [keV]
Z 		number of protons

1.1   How to build a data request ...
You will be using the grammar above to get the data.
You need to specify what fields to retrieve and what filter, if any, applies.
For example, to get all the energies of the γ-rays that start from a level with Jπ = 2+:
`fields = " GAMMA.ENERGY "`
`filter = " GAMMA.START_LEVEL.JP = '2+' "`
The filter can be empty, but in some cases that can lead to a lengthy retrieval.
Attention:
Enclose the fields and the filter between double quotes " ".

To take advantage of the autocompletion, first write everything without quotes, then place them when you are ready.
This applies to notebooks; on the command line and in other development environments the autocompletion also works within quotes.
... and get the data

The simplest way to retrieve data is to call the nl.csv_data(fields, filter) function, or the nl.json_data(fields, filter) function

Practice 2
Use the fields and filter variables above to call the csv_data function (wrap it in a print for a nicer display), then try the json_data function.

To see the solution, go to the bottom cell.
In [5]:
fields = "GAMMA.ENERGY"
filter = "GAMMA.START_LEVEL.JP = '2+' "
# print only the beginning
print(nl.csv_data(fields, filter)[0:100])
energy
1004.1
535.61
369.1
528.248
1063.78
327.0
400.17
768.77
928.34
822.7
1750.8
1358.3
1897.4
665
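The same request can also be retrieved as JSON with json_data; a minimal sketch, reusing the fields and filter above and assuming json_data returns a string that can be sliced like csv_data does:

    # same query, JSON output instead of CSV; print only the beginning
    print(nl.json_data(fields, filter)[0:100])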
Some rules:

1) Only one entity
In a list of fields, you must use the same entity:
fields = "NUCLIDE.Z, GAMMA.ENERGY" is not allowed, use instead
fields = "GAMMA.NUC.Z, GAMMA.ENERGY"

2) Two dots max
Do not use more than 2 dots:
DR_BETA.DAUGHTER_FED_LEVEL.NUC.ABUNDANCE is not allowed: 3 dots.
Often you can do the same with 2 dots, in this case
DR_BETA.DAUGHTER.ABUNDANCE

3) Multiple fields
must be separated by commas

4) Multiple conditions
Use the logical operators and, or to join conditions in the filter. Mathematical operators are = > < %
In general, one can use the SQL operators allowed by SQLite

Tips:
You can build functions with the fields:
fields = "GAMMA.MIXING_RATIO / GAMMA.ENERGY"

You can give an alias for convenience:
fields = "GAMMA.MIXING_RATIO / GAMMA.ENERGY as m"
A complex example
Let's consider this picture from K. S. Krane Phys. Rev. C 10, 1197 (1974)
In [6]:
plt.subplots(figsize=(28, 10))
plt.imshow(mpimg.imread("../docs/_static/krane.png"))
Out[6]:
<matplotlib.image.AxesImage at 0x7f9d11bf2100>

This is a summary of the conditions:

Nuclide : Z even, N even

------------------- Start level: Jp = 2+     (2nd occurrence)
   ↓
   ↓        Gamma:   E2 + M1 multipolarity
   ↓
------------------- End level: Jp = 2+     (1st occurrence)


Filter criteria

  • gammas from even-even nuclides
  • starting from the second Jp = 2+ level
  • ending at first Jp = 2+ level
  • having E2+M1 multipolarity

Fields

  • Δ = | mixing ratio / energy | * 835, and A

Practice 3
Retrieve the data to reproduce the picture.
The "fields" variable, and the first condition of the "filter", are already given.

- Remember the and to join conditions
- Use the autocompletion and then add " at the beginning and at the end

To see the solution, go to the cell at the very bottom.

How to specify conditions
Conditions on text fields must be placed within single quotes ':
GAMMA.START_LEVEL.JP = '2+'

Conditions on numeric fields do not need quotes:
GAMMA.ENERGY > 2000

In case of doubt, use print(description(GAMMA)) to see the field type

In [7]:
fields = "GAMMA.NUC.Z as z, GAMMA.NUC.N as n , abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000) as m"

filter  = "( GAMMA.NUC.Z % 2 = 0 ) and ( GAMMA.NUC.N % 2 = 0 ) " # gamma from even-even nuclides
filter += " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "# starts from Jp = 2+ , 2nd occurrence
filter += " and GAMMA.END_LEVEL.JP = '2+' and GAMMA.END_LEVEL.JP_ORDER = 1 "   # ends at     Jp = 2+ , 1st occurrence
filter += " and ( GAMMA.MULTIPOLARITY = 'E2+M1' or  GAMMA.MULTIPOLARITY = 'M1+E2' )" # E2 + M1 multipolarity


print(nl.csv_data(fields, filter)[0:250])
z,n,m
42,58,9.975354526366994
44,56,5.386445353834941
46,56,3.429496832798434
46,64,12.527246761706712
48,64,6.8939591200633314
48,68,2.568086302491941
50,68,3.446048067657999
52,66,37.3482562318861
58,74,21.697049165151792
60,76,
56,84,0.79115097632
1.2   Plotting

So far only the NDLab package has been used; this is enough to retrieve data and export them in CSV or JSON.
What follows shows how to embed NDLab in pandas and Plotly.

Run the cell below, which loads the packages.

In [8]:
import numpy as np  
import pandas as pd 
import matplotlib.pyplot as plt 
import matplotlib.image as mpimg

# Plotly
import plotly.graph_objects as go
import plotly.express as px
   


The Y axis of Krane's picture above needs log(Δ). Using pandas, numpy, and matplotlib, the data are prepared and plotted.

With the fields and filter of the practice above, the following cell loads the data and shows part of the dataframe.

In [9]:
fields = " GAMMA.NUC.Z as z, GAMMA.NUC.N as n, abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000 ) as y "   # z, n, mixing / energy

filter  = "(GAMMA.NUC.Z % 2 = 0 ) AND (GAMMA.NUC.N % 2 = 0 ) "   # even-even nuclides
filter += " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "# starts from Jp = 2+ , 2nd occurrence
filter += " and GAMMA.END_LEVEL.JP = '2+' and GAMMA.END_LEVEL.JP_ORDER = 1 "   # ends at     Jp = 2+ , 1st occurrence
filter += " and ( GAMMA.MULTIPOLARITY = 'E2+M1' or  GAMMA.MULTIPOLARITY = 'M1+E2' )" # E2 + M1 multipolarity

# load the data into a dataframe
df = nl.pandas_df(fields, filter, pd)
df
Out[9]:
z n y
0 42 58 9.975355
1 44 56 5.386445
2 46 56 3.429497
3 46 64 12.527247
4 48 64 6.893959
... ... ... ...
133 40 50 0.266848
134 42 52 2.411730
135 42 54 0.732317
136 38 58 3.461286
137 40 56 0.226757

138 rows × 3 columns

Now the data are plotted, calculating Z + N for the x axis and log(Δ) for the y axis.

In [10]:
plt.scatter( df["z"] + df["n"],  np.log(df["y"])*10, color='red' )
Out[10]:
<matplotlib.collections.PathCollection at 0x7f9d123b46a0>

It is worth superimposing the new data on the old ones.

In [11]:
plt.subplots(figsize=(28, 10))
plt.scatter(   df["z"] + df["n"],  np.log(df["y"])*10, color='red')
plt.imshow(mpimg.imread("../docs/_static/krane_2.png"), extent=[38, 205, -63, 79])
Out[11]:
<matplotlib.image.AxesImage at 0x7f9d1240ff10>

The paper claims that there are minima at closed shells.
On a 2D plot it is difficult to see where the closed shells in N or Z are placed.
The cell below uses Plotly to draw a 3D picture where nuclides with closed shells are in red:

  • A second dataset is loaded, containing only the closed shells, by adding a condition to the filter
  • The first and second datasets are plotted together

Plotly graphs are interactive, and can be rotated and zoomed.

In [12]:
# condition for closed-shell 
filter2 = filter + " and ( GAMMA.NUC.Z  in (2,8,20,28,50,82,126) or GAMMA.NUC.N in (2,8,20,28,50,82,126) )"

# load data
df2 = nl.pandas_df(fields, filter2, pd)

# plot 3D
fig = go.Figure(data=[
    # all nuclides
    go.Scatter3d(
        x=df["z"], y=df["n"], z=np.log10(df.y)*10, 
        mode='markers', marker=dict(size=8, color='blue')
    )
 ,   
    # the magic ones in red
    go.Scatter3d(
        x=df2["z"], y=df2["n"], z=np.log10(df2.y)*10, 
        mode='markers',marker=dict(size=8,color="red")
    )
  
])

# set layout properties
fig.update_layout(margin=dict(l=0, r=0, b=30, t=0) ,width=800, height=800,    
                  scene = dict(xaxis_title='Z',yaxis_title='N',zaxis_title='Delta') 
                 ).show()

Performance test

Get the energy distribution of the entire set of gamma transitions:
The following steps are performed with a couple of lines of code:

  • 260 000 gamma lines in the db are taken,
  • a condition on each line is applied,
  • the results are ordered,
  • the results are binned,
  • an interactive plot is shown
In [13]:
fields =    "GAMMA.ENERGY as energy"
condition = "GAMMA.ENERGY > 0 AND GAMMA.ENERGY < 2000"

dfg =  nl.pandas_df(fields,condition,pd)

px.histogram( dfg,  x="energy", nbins=200).show()
Look at the binning above: is there anything relevant for clinical imaging of PET nuclides?
1.3   Data analysis

The picture above relies on your brain to find patterns and outliers, whilst the cell below shows how to automatise the process.
The assumption is that the curve should be smooth, meaning that the first derivative should not have sudden jumps.
Comments in the cell explain the steps.

In [14]:
fields =    "GAMMA.ENERGY as energy"
condition = "GAMMA.ENERGY BETWEEN 0 AND 2000"

# load the data into a pandas dataframe
a =  nl.pandas_df(fields,condition,pd)

# bin the data (consult the web for further examples and explanations)
dft = pd.cut(a[a.columns[0]], bins=300).apply(lambda x: x.mid).value_counts(sort=False).to_frame()
dft = dft.rename(columns={"energy": "counts"})

# get the derivative
dft["delta"] = abs(dft.diff(periods = 1).counts )

# visualise the outlier (consult the web to check Plotly tools for data processing)
fig = px.box(dft, y=dft.delta).show()

# print the energy where the outlier occurs
print("Outlier energy ", dft.index[dft.index.get_loc(dft.delta.idxmax()) - 1 ])
Outlier energy  510.00550000000004
A few examples of data retrieval with filter to conclude part one of the tutorial
1.3.1   Plot log_ft values grouped by transition type

In the plot the transition type is not decoded: 1NU stands for 1st non-unique, etc...

In [15]:
fields = "DR_BETA.TRANS_TYPE as t , DR_BETA.LOGFT as l"
filter = " DR_BETA.LOGFT != 0" # this is just to avoid empty values

df =  nl.pandas_df(fields, filter,pd)

# group by transition type
ans = [pd.DataFrame(y) for x, y in df.groupby(df.columns[0], as_index=False)]


# plot log_ft for each transition type 
fig = go.Figure()
for a in ans:
    tst = pd.cut(a['l'], bins=125).apply(lambda x: x.mid).value_counts(sort=False)
    fig.add_trace(go.Scatter(x=tst.index.values, y = tst, mode="markers",name=a.iloc[0, 0]))

fig.update_layout(width=800, height=400, xaxis_range=[3,11] ).show()
1.3.2   Plot log ft for allowed transitions from a parent level with Jp = '0+'
In [16]:
fields =  " DR_BETA.LOGFT as log_ft"
filter =  "( DR_BETA.PARENT_LEVEL.JP = '0+') and DR_BETA.TRANS_TYPE ='A'"

df = nl.pandas_df(fields, filter, pd)

px.histogram(df, x=df.log_ft, nbins=80).update_layout(width=350, height=350).show()
1.3.3   Extract level information for all even-even nuclei whose first excited state is NOT 2+.

With pandas it is easy to dump the data in almost any format. In this way one is free to use them as input for other tools; an export example is shown after the dataframe below.
Check the web for how to use the various writing and reading methods listed below.

In [17]:
filter = ("LEVEL.NUC.N % 2 = 0 AND LEVEL.NUC.Z % 2 = 0 AND LEVEL.SEQNO = 1 AND LEVEL.JP != '2+' AND LEVEL.JP_METHOD = JP_STRONG ")
 
df = nl.pandas_df_nl(nl.levels(filter), pd)

df

#  .to_csv()
#  .to_json('', orient='records', lines=True) or  f.to_json(orient='records')
#  .to_hdf('', 'df')
#  .to_excel()  
#  .to_sql()

#  .read_json()
#  .read_sql()
#  .read_csv()
#  .read_excel()
#  .read_hdf()
#  .read_table
#  .read_html
Out[17]:
z n nucid l_seqno energy energy_unc energy_limit half_life half_life_unc half_life_limit ... jp_str quadrupole_em quadrupole_em_unc quadrupole_em_limit dipole_mm dipole_mm_unc dipole_mm_limit questionable configuration isospin
0 36 64 100KR 1 309.00 10.0000 = 0.00 0.00 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
1 38 62 100SR 1 129.18 0.0009 = 3.91 0.16 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
2 38 64 102SR 1 126.00 0.2000 = 3.00 1.20 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
3 40 64 104ZR 1 139.30 0.0300 = 2.00 0.30 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
4 52 54 106TE 1 664.80 0.0300 = 0.00 0.00 = ... (2+) 0 0 = 0 0 = Y NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
155 40 56 96ZR 1 1581.64 0.0006 = 38.00 0.70 = ... 0+ 0 0 = 0 0 = NaN NaN NaN
156 48 50 98CD 1 1395.10 0.0200 = 0.00 0.00 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
157 36 62 98KR 1 329.00 7.0000 = 0.00 0.00 = ... (2+) 0 0 = 0 0 = NaN NaN NaN
158 42 56 98MO 1 734.75 0.0004 = 21.80 0.90 = ... 0+ 0 0 = 0 0 = NaN NaN NaN
159 40 58 98ZR 1 854.06 0.0006 = 64.00 7.00 = ... 0+ 0 0 = 0 0 = NaN NaN NaN

160 rows × 28 columns
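For instance, the selection above can be written to a CSV file with a single call (a minimal sketch; the file name is just illustrative):

    # export the dataframe for use in other tools
    df.to_csv("even_even_levels.csv", index=False)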

Note: do not focus on the format; the format does not matter. Focus instead on the data model; the data model matters.
2nd - Functions and programming

Often one wants to write ad hoc processing code that goes beyond filtering, grouping, binning, or any other manipulation: for example, calculating the energy balance of a decay.

NDLab provides a set of Python classes and functions that push the data extraction to the background. The functions accept a filter parameter (default: no filter) to restrict the selection.
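For instance, the nl.levels() function, used later in this tutorial, accepts such a filter; a minimal sketch (the energy cut is just illustrative):

    # all levels above 2 MeV; an empty string would select everything
    high_levels = nl.levels("LEVEL.ENERGY > 2000")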

Note: Only the NDLab package is needed in the following. One needs pandas or Plotly just when using a dataframe or a plot.

Warning: the filter can refer only to the entity being retrieved: dr_gammas(" DR_GAMMA.ENERGY > 2000 ")

Tip: use the remove_doublers function in case the selection returns double entries:

    delayed_n = nl.dr_delayeds("DR_DELAYED.TYPE = DELAY_N")
    parent_nucs = [ld.parent for ld in delayed_n]
    parent_nucs = nl.remove_doublers(parent_nucs)

Without this call, a parent emitting many neutrons will appear multiple times.

The following is a list of hands-on examples. To see the full list of classes and functions, please consult the GUIDE.

2.1   Decay energy balance

With NDLab's set of classes and functions, you do not need to take care of the data retrieval. See how you can calculate the energy balance of a decay.

In [18]:
# Decay energy balance: quality assessment of the data
filter = ""

# load a nuclide using its identifier
nuc = nl.nuclide("135XE") 


# decays holds the list of all the decays of the nuclide; take the first one
decay = nuc.decays[0]

# all possible radiation types for the chosen decay
rads = [
        decay.xs(),          # X-ray
        decay.gammas(),      # Gamma
        decay.convels(),     # Conversion electron
        decay.augers(),      # Auger
        decay.alphas(),      # Alpha
        decay.betas_m(),     # B-
        decay.betas_p(),     # B+
        decay.anti_nus(),    # anti-neutrino
        decay.nus(),         # neutrino
        decay.annihil()     # annihilation
       ]

# measured energy for each radiation type
rads_en = [(sum([r[i].energy * r[i].intensity for i in range(len(r))])) for r in rads]

# total measured energy
tot_en = sum(rads_en) / 100

# intensities are per 100 decay of the parent, need to renormalise Q with branching ratio
q_br = decay.q_togs * decay.perc/100

# difference with deposited energy
delta = ((q_br - tot_en)/q_br)*100

print("Branching Ratio:          ", decay.perc/100 )
print("Energy accounted for [keV]" , tot_en )
print("Qb- * B.R. [keV]          " , q_br)
print("Delta [%]                  " , int(delta.value * 100) /100)
print("\nNotice the built-in uncertainty propagation")
Branching Ratio:            1.0+/-0
Energy accounted for [keV]  1164+/-28
Qb- * B.R. [keV]            1169+/-4
Delta [%]                   0.39

Notice the built-in uncertainty propagation


2.2  Ancestors - offspring chain
In [19]:
nuc = nl.nuclide("135CS") 

print('Daughters and H-l [s]')
[print(d.nucid, d.gs.half_life_sec) for d in nuc.daughters]
    

print('\nParents and H-l [s]')
[print(d.nucid, d.gs.half_life_sec) for d in nuc.parents]
    
Daughters and H-l [s]
135BA  0.0+/-0
135CS  (7.3+/-0.9)e+13

Parents and H-l [s]
135XE  (3.290+/-0.007)e+04
135XE  (3.290+/-0.007)e+04
135CS  (7.3+/-0.9)e+13
Out[19]:
[None, None, None]
2.3   Level population fraction after a gamma cascade

Let's say that the level #15 of Xe-135 is populated by some reaction.

The following code gives the list of the levels populated by the gamma cascade, with energy and intensity

In [20]:
def cascade( my_level):
        global todo
        global levels
        global done
        
         
        for g in my_level.gammas() : 
            
            if(g.end_level.l_seqno != 0 and not (g.end_level.l_seqno in todo)):
                todo[g.end_level.l_seqno] = g.end_level
            
            if not g.end_level.l_seqno in result: 
                result[g.end_level.l_seqno] = result[my_level.l_seqno] * g.rel_photon_intens/100      
            else:
                result[g.end_level.l_seqno] += result[my_level.l_seqno] * g.rel_photon_intens/100         
        
        if not my_level.l_seqno in done: 
            done.append(my_level.l_seqno)
            
        todo = dict(sorted(todo.items(), reverse=True))  
        
        myrun = todo.copy()
       
        for r in myrun:
            if( not r in done):
                cascade(levels[r])

        return 



nuc_id = "135XE"
start_level = 15

levels = nl.nuclide(nuc_id).levels() 
result = {start_level : 1.0}
todo = {start_level : levels[start_level] }
done = []

cascade(levels[start_level])

# just printing
line = '|'.join(str(x).ljust(24) for x in [ "energy [keV] ", " population %", "level #"])
print(line +'\n')
for k in sorted(result):
    line = '|'.join(str(x).ljust(24) for x in [str(levels[k].energy ), str(result[k]*100.0),levels[k].pk])
    print(line)
  
energy [keV]            | population %           |level #                 

 0.0+/-0                | 100+/-8                |135XE-0                 
 288.455000+/-0.000015  | 0.0041+/-0.0007        |135XE-1                 
 526.551000+/-0.000013  | 34+/-5                 |135XE-2                 
 1131.512000+/-0.000011 | 7+/-5                  |135XE-3                 
 1260.416000+/-0.000013 | 0.137+/-0.024          |135XE-4                 
 1565.288000+/-0.000016 | 37+/-5                 |135XE-8                 
 1927.294000+/-0.000023 |100.0                   |135XE-15                
2.4  Photon total intensities

Nuclides often have many decay modes, either because of metastable states or because a single energy state has several branchings.

In gamma spectroscopy it is useful to have the total intensity of an energy line in the decay of a parent, regardless of the decay mode.

The cell below selects Eu-152 and calls the dr_photon_tot function, which performs the task. Note that the function is called with a filter which cuts intensities less than 2%.

In [21]:
df = nl.pandas_df_nl(nl.nuclide("152EU").decays[0].dr_photon_tot("DR_PHOTON_TOTAL.INTENSITY > 2"), pd)

go.Figure().add_trace(go.Scatter(x=df['energy'], y = df['intensity'], mode = "markers")).update_layout(xaxis_title='Energy [keV]',
yaxis_title='Intensity [%]').show()

df
Out[21]:
parent_nucid parent_l_seqno energy energy_unc energy_limit intensity intensity_unc intensity_limit type count
0 152EU 0 6.3540 0.0000 = 14.000 0.500 = X 1.0
1 152EU 0 39.5220 0.0000 = 20.870 0.200 = X 1.0
2 152EU 0 40.1170 0.0000 = 37.800 0.300 = X 1.0
3 152EU 0 45.5230 0.0000 = 11.810 0.160 = X 1.0
4 152EU 0 45.9980 0.0000 = 14.850 0.200 = X 1.0
5 152EU 0 46.5750 0.0000 = 3.050 0.070 = X 1.0
6 152EU 0 121.7817 0.0003 = 28.530 0.160 = G 1.0
7 152EU 0 244.6974 0.0008 = 7.550 0.040 = G 1.0
8 152EU 0 344.2785 0.0012 = 26.590 0.200 = G 1.0
9 152EU 0 411.1165 0.0012 = 2.237 0.013 = G 1.0
10 152EU 0 443.9606 0.0016 = 2.827 0.014 = G 1.0
11 152EU 0 778.9045 0.0024 = 12.930 0.080 = G 1.0
12 152EU 0 867.3800 0.0300 = 4.230 0.030 = G 1.0
13 152EU 0 964.0570 0.0050 = 14.510 0.070 = G 1.0
14 152EU 0 1085.8370 0.0100 = 10.110 0.050 = G 1.0
15 152EU 0 1112.0760 0.0030 = 13.670 0.080 = G 1.0
16 152EU 0 1408.0130 0.0030 = 20.870 0.090 = G 1.0
2.5   Ground-state half-lives as a function of N and Z

Note that one can refer to a column in a dataframe by its position; in this way one has a template to plot in 3D.

In [22]:
# convenience function to plot 3D
def plot3d(fields, condition):
    df =  nl.pandas_df(fields,condition, pd)

    fig = go.Figure(data=[
    
    go.Scatter3d(
        x=df[df.columns[0]], y = df[df.columns[1]], z = df[df.columns[2]],
        mode='markers', marker=dict(size=8, color='blue') )

    ])

    fig.update_layout(height=1000 ,scene=dict(zaxis=dict( type='log'))).show()
    return df

fields =    "LEVEL.NUC.Z , LEVEL.NUC.N , LEVEL.HALF_LIFE_SEC"
condition = "LEVEL.ENERGY = 0"

plot3d(fields, condition)    
Out[22]:
z n half_life_sec
0 47 53 1.206000e+02
1 48 52 4.910000e+01
2 49 51 5.650000e+00
3 36 64 7.000000e-03
4 42 58 2.212141e+26
... ... ... ...
3349 92 122 5.200000e-04
3350 78 87 2.600000e-04
3351 80 90 8.000000e-05
3352 9 4 4.517205e-22
3353 17 35 NaN

3354 rows × 3 columns

2.6   Half-life of nuclides with 10 < Z < 100, emitting delayed neutrons after beta- decay
In [23]:
fields = "DR_DELAYED.PARENT.Z , DR_DELAYED.PARENT.N , DR_DELAYED.PARENT_LEVEL.HALF_LIFE_SEC"
condition = "DR_DELAYED.TYPE = 'DN' and DR_DELAYED.PARENT.Z > 10 and DR_DELAYED.PARENT.Z < 100"

plot3d(fields, condition)
Out[23]:
z n half_life_sec
0 37 63 0.052
1 37 63 0.052
2 37 63 0.052
3 37 63 0.052
4 37 63 0.052
... ... ... ...
387 37 62 0.054
388 37 62 0.054
389 37 62 0.054
390 37 62 0.054
391 37 62 0.054

392 rows × 3 columns

2.7   Half-life of nuclides in the decay chain of Am-241

For each nuclide, the properties daughters_chain and parents_chain contain the offspring and the ancestors, respectively (a sketch for parents_chain follows the example below).

The example shows the seamless interplay between sets of NDLab classes and pandas dataframes

In [24]:
# get the offsprings as NDLab class instances
dau = [n.gs for n in nl.nuclide("241AM").daughters_chain]

# dump in a dataframe
df = nl.pandas_df_nl(dau,pd)

# Plot. Hover on a point to see more info
px.scatter(df, 
           x= df.index, 
           y= df.half_life_sec,
           hover_data=["nucid"],
           labels={ "index": "Daughter # ", "y":"Half-life [s]"},
           log_y=True).show()
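The ancestors can be listed the same way through parents_chain; a minimal sketch, assuming parents_chain returns nuclide instances just like daughters_chain does:

    # ancestors of Am-241 as ground-state levels, dumped into a dataframe
    anc = [n.gs for n in nl.nuclide("241AM").parents_chain]
    nl.pandas_df_nl(anc, pd)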
3rd - Solutions to the exercises
In [25]:
# practice 1
DR_BETA.PARENT_LEVEL.JP

# practice 2
fields = " GAMMA.ENERGY "

filter = " GAMMA.START_LEVEL.JP = '2+' "
#print(nl.csv_data(fields, filter))
#print(nl.json_data(fields, filter))

# practice 3
fields = "GAMMA.NUC.Z as z, GAMMA.NUC.N as n , abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY / 0.835 * 1000) as b"

filter  = "( GAMMA.NUC.Z % 2 = 0 ) and ( GAMMA.NUC.N % 2 = 0) "                     # even-even nuclides
filter +=  " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "   # starts from Jp=2+ ,2nd occurrence
filter +=  " and GAMMA.END_LEVEL.JP = '2+'   and GAMMA.END_LEVEL.JP_ORDER = 1 "     # ends at Jp = 2+ ,1st occurrence
filter +=  " and (GAMMA.MULTIPOLARITY = 'E2+M1'or GAMMA.MULTIPOLARITY = 'M1+E2') "  # E2 + M1 multipolarity

#print(nl.csv_data(fields, filter))
In [ ]: