This section describes how to query the database and how to plug the results into data analysis packages like pandas, or into plotting tools like plotly.

A Notebook tutorial is available in ndlab-tutorial.ipynb, together with a static HTML version

Data can also be processed by your own code. For this purpose, a set of classes and functions to start with is described in the Programming section

0. Cheat sheet

Functions for data retrieval; the detailed explanation follows later:

import ndlab as nl
# optional
import pandas as pd


fields = "NUCLIDE.Z, NUCLIDE.ATOMIC_MASS"   # what to retrieve
filter = "NUCLIDE.NUC_ID = '135XE'"         # condition(s) to apply

csv = nl.csv_data(fields, filter)     # csv
json = nl.json_data(fields, filter)   # json
df = nl.pandas_df(fields, filter, pd) # dataframe

nucs = nl.nuclides('NUCLIDE.Z = 2')  # list of nl objects

csv =  nl.csv_nl(nucs)         # csv
df =  nl.pandas_df_nl(nucs,pd) # dataframe
json = df.to_json()         # json (or csv, excel, ...) from dataframe

1. Retrieval functions

You just need to specify what to retrieve and what filter, if any, to apply. The data model allows you to express these two parameters in a way that is close to the physics of the problem.

Let’s start with some examples :

import ndlab as nl

# what to retrieve
fields = "NUCLIDE.Z, NUCLIDE.ATOMIC_MASS"

# condition(s) to apply
filter = "NUCLIDE.NUC_ID = '135XE'"

# get data as csv
csv = nl.csv_data(fields, filter)

print(csv)

>>z,atomic_mass
>>54,134907231.441

now json …

json = nl.json_data(fields, filter)
print(json)

>>[ {'z':'54'  ,'atomic_mass':'134907231.441'}]

… and using pandas

import pandas as pd

df = nl.pandas_df(fields, filter, pd)
print(df)

>>    z   atomic_mass
>>0  54  1.349072e+08

We have just seen three functions:

1. ndlab.csv_data() : data in csv
2. ndlab.json_data() : data in json
3. ndlab.pandas_df() : pandas dataframe (the pandas module must already be imported)

These functions take two parameters, which are detailed in sections 2 and 3 :

  • fields specifies which data to retrieve

  • filter specifies which conditions, if any, should be applied to filter the data

A powerful way to handle data is to use the third function and create a pandas dataframe. Besides allowing data manipulation and analysis, it offers the option to save the data in various formats.

For example, to save into an Excel file, just do

import pandas as pd

df = nl.pandas_df(fields, filter, pd)

df.to_excel('excel_data.xlsx')

If you prefer to use SQL directly, refer to section 8

The following sections describe the grammar for constructing the fields and filter parameters using the nuclear data model on which NDLab is built

2. The fields parameter

It is a comma-separated, uppercase list of the quantities you want to retrieve. For example fields = "NUCLIDE.ABUNDANCE , NUCLIDE.ATOMIC_MASS"
or
fields = "GAMMA.ENERGY , GAMMA.MULTIPOLARITY , GAMMA.MIXING_RATIO".

The simplest pattern for a field is entity dot quantity

Entities are the basic building blocks of the model: NUCLIDE, GAMMA, LEVEL, DR_ALPHA … The ndm reference section provides the full list
Quantities store the information available for each entity.
A field can also look like GAMMA.START_LEVEL.ENERGY or DR_ALPHA.DAUGHTER_FED_LEVEL.ENERGY.
This pattern is entity dot link dot quantity.
Links allow you to jump from one entity to a related one, in this case from a gamma to its start level.
In this way NDLab translates nuclear physics language into the data model. These link fields are marked with (L) in the ndm reference
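
As a minimal sketch combining the two patterns (the Z = 5 condition is purely illustrative):

import ndlab as nl

# gamma energy together with the energy of its start level (a link field)
fields = "GAMMA.ENERGY , GAMMA.START_LEVEL.ENERGY"
filter = "GAMMA.NUC.Z = 5"   # only gammas of Z = 5 nuclides

csv = nl.csv_data(fields, filter)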

Often a property is described by a set of three fields; take for example the gamma energy:

  • GAMMA.ENERGY : the value

  • GAMMA.ENERGY_UNC : the uncertainty

  • GAMMA.ENERGY_LIMIT : the limit (>, <, etc.)

Then for a full specification of the gamma energy you should use fields = "GAMMA.ENERGY , GAMMA.ENERGY_UNC , GAMMA.ENERGY_LIMIT".

In the ndm reference these fields are flagged with (Q). Quantities lacking the _LIMIT field are flagged with (q)
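
As a sketch, the full specification of the gamma energy could then be retrieved like this (the filter is purely illustrative):

import ndlab as nl

# value, uncertainty and limit of the gamma energy
fields = "GAMMA.ENERGY , GAMMA.ENERGY_UNC , GAMMA.ENERGY_LIMIT"
filter = "GAMMA.NUC.Z = 5"

csv = nl.csv_data(fields, filter)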

Tip

To inspect the available quantities, use the autocomplete feature available in any development tool; for example, type “GAMMA.” to see all the gamma-related fields.

If you want to use this feature, you must import the orm module in this way:

from ndlaborm import *

When coding in a Jupyter Notebook the autocompletion will not work within quotes. First write the fields without quotes: fields = GAMMA.ENERGY , GAMMA.MIXING_RATIO, then add the quotes: fields = "GAMMA.ENERGY , GAMMA.MIXING_RATIO"

Attention

Rules for the fields parameter

  1. use uppercase

  2. maximum two dots : entity.quantity or entity.link.quantity

  3. a list of fields must have comma-separated items

  4. a list must refer to only one entity: fields = "GAMMA.ENERGY , NUCLIDE.Z" is not allowed. Try to use the links instead: "GAMMA.ENERGY , GAMMA.NUC.Z"

  5. simple functions are allowed, e.g. NUCLIDE.Z + NUCLIDE.N

  6. use as to create an alias for convenience in further references, e.g. NUCLIDE.Z + NUCLIDE.N as a

  7. use .ALL to retrieve all the available quantities, e.g. NUCLIDE.ALL (rules 5-7 are combined in the sketch below)

If you do not want to use the autocompletion, you do not need to import the ndlaborm module.
But if ndlaborm was imported using an alias, e.g. import ndlaborm as no, pay attention not to include the alias in the fields variable: fields = "no.GAMMA.ENERGY" is not valid, remove the ‘no.’.
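
As a sketch combining rules 5-7 (the Z = 2 condition is purely illustrative):

import ndlab as nl

# rules 5 and 6: a simple expression with an alias
fields = "NUCLIDE.Z + NUCLIDE.N as a , NUCLIDE.ATOMIC_MASS"
csv = nl.csv_data(fields, "NUCLIDE.Z = 2")

# rule 7: all the available NUCLIDE quantities
csv_all = nl.csv_data("NUCLIDE.ALL", "NUCLIDE.Z = 2")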

Tip

In case of doubt, or unexpected errors, one can use the following two functions:

# returns false if there are issues
nl.is_query_ok(fields, filter)

# returns a list with the errors found
errors = nl.query_check(fields, filter)

3. The filter parameter

Often one needs to apply some conditions to filter the data; for example, filter = "GAMMA.NUC.Z = 5" will select only the gamma transitions of Z = 5 nuclides. The rules to follow when writing a filter are listed below.

Attention

Rules for the filter parameter

  1. use uppercase

  2. maximum two dots : entity.quantity or entity.link.quantity

  3. conditions can be joined using the and , or logical operators

  4. conditions must refer to only one entity: filter = "GAMMA.ENERGY > 2000 and NUCLIDE.Z = 5" is not allowed. Try to use the links instead: "GAMMA.ENERGY > 2000 and GAMMA.NUC.Z = 5"

  5. simple functions are allowed, e.g. NUCLIDE.Z + NUCLIDE.N = 20

  6. text conditions must be enclosed in single quotes, e.g. GAMMA.MULTIPOLARITY = 'E2+M1'. In the ndm reference, these quantities are flagged with (S)

Allowed comparison operators are: = > < != >= <= , for example GAMMA.ENERGY >= 1000

Further operators

  • use in to filter a set of values, e.g. NUCLIDE.NUC_ID IN ( '235U', '135XE' ) (notice the single quotes), or NUCLIDE.Z IN (5, 10)

  • use like to filter the content of a text quantity, e.g. LEVEL.JP like '%2+%' will select the Jp values containing the string “2+”. See jp values for more
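
As a sketch, the in and like operators can be combined in a single filter, using fields already shown elsewhere in this guide:

import ndlab as nl

# levels of 235U and 135XE whose Jp string contains "2+"
fields = "LEVEL.NUC.NUC_ID , LEVEL.ENERGY , LEVEL.JP"
filter = "LEVEL.NUC.NUC_ID IN ('235U', '135XE') and LEVEL.JP LIKE '%2+%'"

csv = nl.csv_data(fields, filter)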

Tip

In case of doubt, or unexpected errors, one can use the following two functions:

# returns false if there are issues
nl.is_query_ok(fields, filter)

# returns a list with the errors found
errors = nl.query_check(fields, filter)

4. Examples

This set of examples provides a summary of the rules above, and can be used as a template for building other retrievals

import ndlab as nl
from ndlaborm import * # to use the autocompletion feature

# Z , N and Half-life of all ground states
fields = "LEVEL.NUC.Z , LEVEL.NUC.N , LEVEL.HALF_LIFE_SEC"
filter = "LEVEL.ENERGY = 0" # or also "LEVEL.SEQNO = 0"

# Z , N and Half-life of nuclides with 10 < Z < 100 that decay by beta-delayed neutron emission
fields = "DR_DELAYED.PARENT.Z , DR_DELAYED.PARENT.N , DR_DELAYED.PARENT_LEVEL.HALF_LIFE_SEC"
filter = "DR_DELAYED.TYPE = DELAY_N and DR_DELAYED.PARENT.Z > 10 and DR_DELAYED.PARENT.Z < 100"

# Z as a magic number
filter = "NUCLIDE.Z  in (2,8,20,28,50,82,126)"

# Z, N, and  mixing-ratio divided by energy, with a quite complex filter
fields = "GAMMA.NUC.Z as z, GAMMA.NUC.N as n , abs( GAMMA.MIXING_RATIO / GAMMA.ENERGY ) as r"

filter  =  " ( GAMMA.NUC.Z % 2 = 0 ) and ( GAMMA.NUC.N % 2 = 0 ) "                  # even-even nuclides
filter +=  " and GAMMA.START_LEVEL.JP = '2+' and GAMMA.START_LEVEL.JP_ORDER = 2 "   # starts from Jp=2+ ,2nd occurrence
filter +=  " and GAMMA.END_LEVEL.JP = '2+'   and GAMMA.END_LEVEL.JP_ORDER = 1 "     # ends at Jp = 2+ ,1st occurrence
filter +=  " and (GAMMA.MULTIPOLARITY = 'E2+M1'or GAMMA.MULTIPOLARITY = 'M1+E2') "  # E2 + M1 multipolarity

Warning

The ground-state properties (half-life, Jp, etc.) are in the LEVEL entity, with SEQNO = 0. For example, the ground state of Xe-135 is LEVEL.NUC.NUC_ID = '135XE' and LEVEL.SEQNO = 0
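
As a sketch, the ground-state half-life and Jp of Xe-135 can then be retrieved like this:

import ndlab as nl

# ground state: LEVEL with SEQNO = 0
fields = "LEVEL.HALF_LIFE_SEC , LEVEL.JP"
filter = "LEVEL.NUC.NUC_ID = '135XE' and LEVEL.SEQNO = 0"

csv = nl.csv_data(fields, filter)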

5. J p values

This section details the handling of J p. Please read it at least once; afterwards you can just consult the wrap-up

ENSDF Jp assignment

The angular momentum handling requires some further explanation. In many cases there is a unique J p, given as, e.g., 2+; but there are cases where the evaluated value is (2+), meaning that the assignment is based on weak arguments, or even, for example, 0+,(1,2), where there are many possible values, mixing strong and weak arguments. Please consult the ENSDF guide, page 101, for more about weak and strong criteria

The condition LEVEL.JP = '2+' will include only the cases in which J p has a single value, with a strong assignment

If you want to relax the criteria, you can proceed as follows:

LEVEL.JP LIKE '%2+%': this will include any J p assignment in which 2+ is present.

Just for completeness, there is the field LEVEL.JP_REASON, which can be LEVEL.JP_REASON = JP_STRONG or LEVEL.JP_REASON = JP_WEAK. Using LEVEL.JP_REASON = JP_STRONG will restrict to strong assignments, but you might still get multiple values, like 0+,1+

There is a further caveat: there are cases where the value is, say, (0+),1,2,3,4(+). If one is interested in knowing whether such an assignment is compatible with a given value, say 2+, NDLab does not at the moment offer a solution, and the user should develop custom code to further process the results.
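
A minimal sketch of such custom post-processing, assuming the dataframe columns are lowercased (nuc_id, energy, jp) as in the output of section 1, and using a deliberately naive tokenisation of the Jp string:

import ndlab as nl
import pandas as pd

# all levels whose Jp string contains "2+", to be post-processed in Python
df = nl.pandas_df("LEVEL.NUC.NUC_ID , LEVEL.ENERGY , LEVEL.JP", "LEVEL.JP LIKE '%2+%'", pd)

def has_2plus(jp):
    # naive check: drop parentheses, split on commas, look for an exact "2+"
    tokens = [t.strip() for t in str(jp).replace("(", "").replace(")", "").split(",")]
    return "2+" in tokens

# column name "jp" assumed lowercase, as in the section 1 examples
compatible = df[df["jp"].apply(has_2plus)]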

In the following you see how to address the problem with the RIPL fields

RIPL Jp assignment

For the ambiguous J p cases, and in some cases even when J p was not assigned, NDLab reports a unique J p value based on the solution devised by RIPL (Reference Input Parameter Library), where the single value is chosen using a continuous statistical spin distribution.

The distribution is constructed in the following way:

  • Using gamma transitions having final levels of known spin, or

  • Using 10 known spins in the level scheme

When the original evaluation assigns multiple values, the statistical distribution is constrained to one of the admissible values

The fields containing RIPL assignments are LEVEL.J and LEVEL.P

Attention

the field LEVEL.JP_METHOD tells how the RIPL assignment was chosen (see the constants)

  • RIPL_J_UNIQUE - the original assignment is already unique

  • RIPL_J_DISTRIBUTION_GAMMA - spin distribution generated using gamma transitions

  • RIPL_J_DISTRIBUTION_CONSTRAIN - spin distribution was constrained to a set of evaluated options (e.g. 0,1,2,3)

  • RIPL_J_DISTRIBUTION - no experimental information, value chosen with spin distribution generated using 10 known spin values

  • RIPL_J_UNKNOWN - nothing is possible, the spin is unknown and J = -1 (the default value for unknown spin)

Refer to the IAEA Tecdoc 1506, Section 3 for the complete discussion

Tip

the filter LEVEL.JP = '2+' is equivalent to LEVEL.J = 2 and LEVEL.P = RIPL_P_PLUS and LEVEL.JP_METHOD = RIPL_J_UNIQUE

Further details on the RIPL database relevant for NDLab can be found here

Wrap-up : combining ENSDF and RIPL

The situation is getting involved, so let’s summarise the main options:

  • LEVEL.JP = '2+' : J p exactly 2+, no doubts

  • LEVEL.JP LIKE '%2+%' and LEVEL.JP_METHOD = RIPL_J_UNIQUE : J p is 2+ or (2+)

  • LEVEL.JP LIKE '%2+%' and LEVEL.JP_REASON = JP_STRONG : J p can be 2+, but also 2+, 3+, …

  • LEVEL.JP LIKE '%2+%' : J p as above, but also (2+), 3+, …

For example, let’s say you want to get the log_ft of beta decays, but only from levels having a definite Jp, because later you might want to group the data by Jp or transition type. Then do this:

fields =  " DR_BETA.LOGFT, DR_BETA.PARENT_LEVEL.JP, DR_BETA.TRANS_TYPE "
filter =  " DR_BETA.PARENT_LEVEL.JP_REASON = JP_STRONG and DR_BETA.PARENT_LEVEL.JP_METHOD = RIPL_J_UNIQUE "

If you need a single value, and do not care about weak or strong assignments:

fields =  " DR_BETA.LOGFT, DR_BETA.PARENT_LEVEL.JP, DR_BETA.TRANS_TYPE "
filter =  " DR_BETA.PARENT_LEVEL.JP_METHOD = RIPL_J_UNIQUE "

If you need a unique value, even when the ENSDF evaluation is not unique, then use the RIPL assignment, keeping in mind that it might or might not be accurate:

fields =  " DR_BETA.LOGFT, DR_BETA.PARENT_LEVEL.J,  DR_BETA.PARENT_LEVEL.P, DR_BETA.TRANS_TYPE "
filter =  " DR_BETA.PARENT_LEVEL.JP_METHOD != RIPL_J_UNKNOWN "

6. Decay modes

The various decay radiations (DR_GAMMA, DR_BETAM, etc.) and L_DECAY have the MODE field, which indicates the type of decay. Use the set of constants DECAY_* to specify a decay mode, for example L_DECAY.MODE = DECAY_Bp

The constants section provides the list of decays
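
As a sketch, a decay-mode condition can be combined with the decay radiation fields used in section 7, assuming DR_GAMMA exposes the MODE field mentioned above and the PARENT link; the DECAY_Bp and Z = 9 choices are purely illustrative:

import ndlab as nl

# gammas emitted in beta+ decays of Z = 9 nuclides
fields = "DR_GAMMA.ENERGY , DR_GAMMA.INTENSITY"
filter = "DR_GAMMA.MODE = DECAY_Bp and DR_GAMMA.PARENT.Z = 9"

csv = nl.csv_data(fields, filter)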

7. Three lines of code

Three lines of code to extract intensity vs energy for the gammas emitted by the decay of Am-241, save the data to Excel, and then generate a plot. Refer to the Notebook ndlab-tutorial.ipynb for more

import ndlab as nl
from ndlaborm import *
import pandas as pd
import plotly.express as px

df = nl.pandas_df("DR_GAMMA.ENERGY , DR_GAMMA.INTENSITY ", "DR_GAMMA.PARENT.NUC_ID = '241AM'", pd)

df.to_excel('am241.xlsx')

px.scatter(df, x="energy", y="intensity", log_y=True).show()

8. SQL through Python

The underlying database is SQLite, see the database description. It is possible to write SQL directly to extract data, using these functions. For example:

pd.read_sql(nl.query_build(fields, filter), nl.query_con())

One can proceed in two ways:

  1. using again the function pd.read_sql( {your sql}, nl.query_con()), but instead of relying on NDLab to build the query, you write your own

  2. using the Python sqlite3 module, as in the example below, which does not involve pandas:

SQL = "select * from nuclides"
result_set  = query_con().execute()
keys = self._result_keys(result)

field_name = []
for k in keys:
   # k is a string with the field name

# this could be the header of a csv
csv_title = [','.join(keys)]

i_range = range(len(keys))
for r in result_set:
    # r[n] has the value of the n-th field of the fetched row
    # do something with r[n], like build a csv string
    csv = ",".join(( str(r[i]) if  str(r[i]) != 'None' else '')for i in i_range)

Tip

Use the ndlab.query_build() function to get an initial SQL statement that can then be refined. For example, try printing the SQL statements produced by the examples above:
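
A minimal sketch, reusing the fields and filter of section 1:

import ndlab as nl

fields = "NUCLIDE.Z, NUCLIDE.ATOMIC_MASS"
filter = "NUCLIDE.NUC_ID = '135XE'"

# print the SQL statement that NDLab generates for this fields/filter pair
print(nl.query_build(fields, filter))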

9. SQL direct connection

Using DB Browser for SQLite it is possible to manage the local database, see the database description