Working with Pandas DataFrames#
Overview
Questions:
How can I use pandas to examine data?
Objectives:
Load a csv file into a pandas dataframe.
Learn the different ways to access information in a dataframe.
Learn how to perform calculations on a column of data.
Learn about built-in plotting in a dataframe.
Pandas is a Python library used for data analysis and manipulation. Within the world of data science, it is a ubiquitous and widely used library. If you are learning how to analyze data in Python, it will be almost impossible to avoid pandas.
For this workshop, we will be using two types of data structures: NumPy arrays and pandas DataFrames. Pandas dataframes are designed for data which is two dimensional (rows and columns). NumPy arrays, while similar in some ways, are meant to work with higher dimensional data. We’ll see this in a future lesson.
Pandas is very powerful. In this session, we’ll be learning how to access information in pandas dataframes and how to do some basic manipulation and analysis. The first step in using pandas is importing it. Typically when pandas is imported, it is shortened to pd.
import pandas as pd
The first data set we are looking at in this workshop is solubility data.
This is a data set which gives molecular descriptors along with solubility calculated using the ESOL method.
The data was obtained from moleculenet and you can read more about the method here.
The dataset has been downloaded and placed in your data folder.
Later in the workshop, we’ll be seeing how we can fit this model ourselves using Python, but for now we are just going to examine the data using pandas. The data is saved in a csv file. CSV stands for “comma separated values”: a plain-text format in which commas separate the values in each row. You can also typically read files like this into Excel. In fact, initially in this session you may not yet see what advantages pandas offers over Excel. However, the functionality pandas provides will become more useful once we start cleaning and fitting data.
df = pd.read_csv("data/delaney-processed.csv")
Examining the data#
Now that we have used the pandas read_csv function, our data is in a variable called df. This variable is a pandas dataframe, which means it has lots of special functionality thanks to pandas. For example, you can preview the first 5 rows using the .head function.
df.head()
Compound ID | ESOL predicted log solubility in mols per litre | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured log solubility in mols per litre | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
0 | Amigdalin | -0.974 | 1 | 457.432 | 7 | 3 | 7 | 202.32 | -0.77 | OCC3OC(OCC2OC(OC(C#N)c1ccccc1)C(O)C(O)C2O)C(O)... |
1 | Fenfuram | -2.885 | 1 | 201.225 | 1 | 2 | 2 | 42.24 | -3.30 | Cc1occc1C(=O)Nc2ccccc2 |
2 | citral | -2.579 | 1 | 152.237 | 0 | 0 | 4 | 17.07 | -2.06 | CC(C)=CCCC(C)=CC(=O) |
3 | Picene | -6.618 | 2 | 278.354 | 0 | 5 | 0 | 0.00 | -7.87 | c1ccc2c(c1)ccc3c2ccc4c5ccccc5ccc43 |
4 | Thiophene | -2.232 | 2 | 84.143 | 0 | 1 | 0 | 0.00 | -1.33 | c1ccsc1 |
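The head function also accepts an optional number of rows, and the companion tail function previews the end of the dataframe instead. A quick sketch, using a small stand-in frame in place of the full dataset:

```python
import pandas as pd

# Small stand-in for the solubility dataframe used in this lesson.
df = pd.DataFrame({
    "Compound ID": ["Amigdalin", "Fenfuram", "citral", "Picene", "Thiophene"],
    "Molecular Weight": [457.432, 201.225, 152.237, 278.354, 84.143],
})

first_three = df.head(3)  # first 3 rows instead of the default 5
last_two = df.tail(2)     # last 2 rows
print(first_three)
print(last_two)
```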
The .info function will give information about the columns and the data types of those columns. The data types will become very important later as we work with more data.
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1128 entries, 0 to 1127
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Compound ID 1128 non-null object
1 ESOL predicted log solubility in mols per litre 1128 non-null float64
2 Minimum Degree 1128 non-null int64
3 Molecular Weight 1128 non-null float64
4 Number of H-Bond Donors 1128 non-null int64
5 Number of Rings 1128 non-null int64
6 Number of Rotatable Bonds 1128 non-null int64
7 Polar Surface Area 1128 non-null float64
8 measured log solubility in mols per litre 1128 non-null float64
9 smiles 1128 non-null object
dtypes: float64(4), int64(4), object(2)
memory usage: 88.2+ KB
Pandas assigns data types to columns, and will do its best to infer the data type of each column based on what it contains.
You will see that this dataframe has 1128 values in each column.
We see that the column Minimum Degree has the data type int64. Here, int means integer and 64 means 64 bit. The 64 bit refers to the amount of computer memory the variable can occupy; it won’t really be important for us. Similarly, float64 means 64 bit floating point. These are decimal numbers.
The columns whose data type reads object are not numeric. They might be strings or they might be something else; we’ll discuss this more later.
The describe function can be used on a dataframe to quickly see statistics about columns with numerical data. If you compare the columns that statistics are computed for with the data types shown by info, you will see that we only get statistics for columns with int64 or float64 data types.
df.describe()
ESOL predicted log solubility in mols per litre | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured log solubility in mols per litre | |
---|---|---|---|---|---|---|---|---|
count | 1128.000000 | 1128.000000 | 1128.000000 | 1128.000000 | 1128.000000 | 1128.000000 | 1128.000000 | 1128.000000 |
mean | -2.988192 | 1.058511 | 203.937074 | 0.701241 | 1.390957 | 2.177305 | 34.872881 | -3.050102 |
std | 1.683220 | 0.238560 | 102.738077 | 1.089727 | 1.318286 | 2.640974 | 35.383593 | 2.096441 |
min | -9.702000 | 0.000000 | 16.043000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | -11.600000 |
25% | -3.948250 | 1.000000 | 121.183000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | -4.317500 |
50% | -2.870000 | 1.000000 | 182.179000 | 0.000000 | 1.000000 | 1.000000 | 26.300000 | -2.860000 |
75% | -1.843750 | 1.000000 | 270.372000 | 1.000000 | 2.000000 | 3.000000 | 55.440000 | -1.600000 |
max | 1.091000 | 2.000000 | 780.949000 | 11.000000 | 8.000000 | 23.000000 | 268.680000 | 1.580000 |
Accessing Data#
One great thing about pandas dataframes is how easy it is to access information. Pandas allows you to access information in a dataframe using both data indexes and names.
Pandas dataframes have rows and columns; you can see how many of each using .shape. This will return the shape as (num_rows, num_columns).
df.shape
(1128, 10)
Accessing with rows and column numbers#
The .iloc function#
Access a particular row and column using .iloc followed by square brackets containing the row and column numbers you want to access. If you only put one number, it is assumed to be the row number you want.
The following gets row number 35.
df.iloc[35]
Compound ID Valeraldehyde
ESOL predicted log solubility in mols per litre -1.103
Minimum Degree 1
Molecular Weight 86.134
Number of H-Bond Donors 0
Number of Rings 0
Number of Rotatable Bonds 3
Polar Surface Area 17.07
measured log solubility in mols per litre -0.85
smiles CCCCC=O
Name: 35, dtype: object
We can also select a subset of the columns by passing a slice as the second argument.
df.iloc[35, :3]
Compound ID Valeraldehyde
ESOL predicted log solubility in mols per litre -1.103
Minimum Degree 1
Name: 35, dtype: object
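.iloc also accepts slices for both the rows and the columns at once. A small sketch, with a stand-in frame in place of the solubility data:

```python
import pandas as pd

# Small stand-in for the solubility dataframe.
df = pd.DataFrame({
    "Compound ID": ["Amigdalin", "Fenfuram", "citral", "Picene"],
    "Molecular Weight": [457.432, 201.225, 152.237, 278.354],
    "Number of Rings": [3, 2, 0, 5],
})

# Rows 1-2 (the end of an .iloc slice is exclusive), first two columns.
subset = df.iloc[1:3, 0:2]
print(subset)
```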
Accessing with names#
Each dataframe has an index (row labels) and columns (column labels). For the columns, these are indicated in .head by the bold row at the top of the dataframe; the row names are similarly listed in bold on the left. The index can be named, but by default it is usually just numbered.
You can see information about the index or the columns using df.index or df.columns.
df.index
RangeIndex(start=0, stop=1128, step=1)
df.columns
Index(['Compound ID', 'ESOL predicted log solubility in mols per litre',
'Minimum Degree', 'Molecular Weight', 'Number of H-Bond Donors',
'Number of Rings', 'Number of Rotatable Bonds', 'Polar Surface Area',
'measured log solubility in mols per litre', 'smiles'],
dtype='object')
Some of these column names are very long, and we might not want to type them when we want to reference data. We will rename some of these columns to make accessing the data more convenient.
df.rename(columns={
    "ESOL predicted log solubility in mols per litre": "ESOL solubility (mol/L)",
    "measured log solubility in mols per litre": "measured solubility (mol/L)"
}, inplace=True)
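After the rename, you can confirm the new names by inspecting df.columns. A sketch with a minimal stand-in frame:

```python
import pandas as pd

# Stand-in frame with the original long column names.
df = pd.DataFrame({
    "ESOL predicted log solubility in mols per litre": [-0.974],
    "measured log solubility in mols per litre": [-0.77],
})

df.rename(columns={
    "ESOL predicted log solubility in mols per litre": "ESOL solubility (mol/L)",
    "measured log solubility in mols per litre": "measured solubility (mol/L)",
}, inplace=True)

print(list(df.columns))
```

If you prefer not to modify the dataframe in place, df = df.rename(columns=...) returns a renamed copy instead.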
To access part of a dataframe using the index or column names, you use the .loc function.
Because our index labels are just numbers, this doesn’t look very different from iloc when we are accessing rows.
The .loc function#
df.loc[35]
Compound ID Valeraldehyde
ESOL solubility (mol/L) -1.103
Minimum Degree 1
Molecular Weight 86.134
Number of H-Bond Donors 0
Number of Rings 0
Number of Rotatable Bonds 3
Polar Surface Area 17.07
measured solubility (mol/L) -0.85
smiles CCCCC=O
Name: 35, dtype: object
However, we can now use column names:
df.loc[35, "smiles"]
'CCCCC=O'
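.loc also accepts lists of column names and slices of index labels. Note that, unlike .iloc, label slices with .loc include both endpoints. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame (real compounds from the dataset, short SMILES).
df = pd.DataFrame({
    "Compound ID": ["Thiophene", "Ethane", "Methane"],
    "smiles": ["c1ccsc1", "CC", "C"],
})

# Label slices with .loc include both endpoints: rows 0 AND 1.
subset = df.loc[0:1, ["Compound ID", "smiles"]]
print(subset)
```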
Using column names#
You can pull a column or several columns of information using the syntax
df[column_name]
To pull several columns, use
df[[colname1, colname2]]
For example
df["Compound ID"]
0 Amigdalin
1 Fenfuram
2 citral
3 Picene
4 Thiophene
...
1123 halothane
1124 Oxamyl
1125 Thiometon
1126 2-Methylbutane
1127 Stirofos
Name: Compound ID, Length: 1128, dtype: object
df[["Compound ID", "smiles"]]
Compound ID | smiles | |
---|---|---|
0 | Amigdalin | OCC3OC(OCC2OC(OC(C#N)c1ccccc1)C(O)C(O)C2O)C(O)... |
1 | Fenfuram | Cc1occc1C(=O)Nc2ccccc2 |
2 | citral | CC(C)=CCCC(C)=CC(=O) |
3 | Picene | c1ccc2c(c1)ccc3c2ccc4c5ccccc5ccc43 |
4 | Thiophene | c1ccsc1 |
... | ... | ... |
1123 | halothane | FC(F)(F)C(Cl)Br |
1124 | Oxamyl | CNC(=O)ON=C(SC)C(=O)N(C)C |
1125 | Thiometon | CCSCCSP(=S)(OC)OC |
1126 | 2-Methylbutane | CCC(C)C |
1127 | Stirofos | COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl |
1128 rows × 2 columns
Filtering Data#
There are two ways filtering is typically done in dataframes. Both are shown here for completeness, and because you will see both when reading code others have written.
In the first method, the strategy is that you first establish where something is true within a dataframe, then use that to filter the dataframe.
df["Polar Surface Area"] > 17
0 True
1 True
2 True
3 False
4 False
...
1123 False
1124 True
1125 True
1126 False
1127 True
Name: Polar Surface Area, Length: 1128, dtype: bool
This gives us a Series of True or False values that can be used to index into the dataframe.
df[df["Polar Surface Area"] > 17]
Compound ID | ESOL solubility (mol/L) | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured solubility (mol/L) | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
0 | Amigdalin | -0.974 | 1 | 457.432 | 7 | 3 | 7 | 202.32 | -0.770 | OCC3OC(OCC2OC(OC(C#N)c1ccccc1)C(O)C(O)C2O)C(O)... |
1 | Fenfuram | -2.885 | 1 | 201.225 | 1 | 2 | 2 | 42.24 | -3.300 | Cc1occc1C(=O)Nc2ccccc2 |
2 | citral | -2.579 | 1 | 152.237 | 0 | 0 | 4 | 17.07 | -2.060 | CC(C)=CCCC(C)=CC(=O) |
7 | Estradiol | -4.138 | 1 | 272.388 | 2 | 4 | 0 | 40.46 | -5.030 | CC12CCC3C(CCc4cc(O)ccc34)C2CCC1O |
9 | Rotenone | -5.246 | 1 | 394.423 | 0 | 5 | 3 | 63.22 | -4.420 | COc5cc4OCC3Oc2c1CC(Oc1ccc2C(=O)C3c4cc5OC)C(C)=C |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
1121 | Phthalonitrile | -1.717 | 1 | 128.134 | 0 | 1 | 0 | 47.58 | -2.380 | N#Cc1ccccc1C#N |
1122 | m-Nitrotoluene | -2.640 | 1 | 137.138 | 0 | 1 | 1 | 43.14 | -2.440 | Cc1cccc(c1)N(=O)=O |
1124 | Oxamyl | -0.908 | 1 | 219.266 | 1 | 0 | 1 | 71.00 | 0.106 | CNC(=O)ON=C(SC)C(=O)N(C)C |
1125 | Thiometon | -3.323 | 1 | 246.359 | 0 | 0 | 7 | 18.46 | -3.091 | CCSCCSP(=S)(OC)OC |
1127 | Stirofos | -4.320 | 1 | 365.964 | 0 | 1 | 5 | 44.76 | -4.522 | COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl |
758 rows × 10 columns
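You can combine several conditions in one mask using & (and) and | (or); each condition must be wrapped in parentheses. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({
    "Polar Surface Area": [202.32, 42.24, 0.00, 71.00],
    "Number of Rings": [3, 2, 5, 0],
})

# Parentheses around each condition are required; & means "and", | means "or".
filtered = df[(df["Polar Surface Area"] > 17) & (df["Number of Rings"] >= 2)]
print(filtered)
```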
This can also be accomplished using the pandas query function. When using query, you write your query as a string. If a column name contains a space, you must surround it with backticks.
df.query("`Polar Surface Area` > 17")
Compound ID | ESOL solubility (mol/L) | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured solubility (mol/L) | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
0 | Amigdalin | -0.974 | 1 | 457.432 | 7 | 3 | 7 | 202.32 | -0.770 | OCC3OC(OCC2OC(OC(C#N)c1ccccc1)C(O)C(O)C2O)C(O)... |
1 | Fenfuram | -2.885 | 1 | 201.225 | 1 | 2 | 2 | 42.24 | -3.300 | Cc1occc1C(=O)Nc2ccccc2 |
2 | citral | -2.579 | 1 | 152.237 | 0 | 0 | 4 | 17.07 | -2.060 | CC(C)=CCCC(C)=CC(=O) |
7 | Estradiol | -4.138 | 1 | 272.388 | 2 | 4 | 0 | 40.46 | -5.030 | CC12CCC3C(CCc4cc(O)ccc34)C2CCC1O |
9 | Rotenone | -5.246 | 1 | 394.423 | 0 | 5 | 3 | 63.22 | -4.420 | COc5cc4OCC3Oc2c1CC(Oc1ccc2C(=O)C3c4cc5OC)C(C)=C |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
1121 | Phthalonitrile | -1.717 | 1 | 128.134 | 0 | 1 | 0 | 47.58 | -2.380 | N#Cc1ccccc1C#N |
1122 | m-Nitrotoluene | -2.640 | 1 | 137.138 | 0 | 1 | 1 | 43.14 | -2.440 | Cc1cccc(c1)N(=O)=O |
1124 | Oxamyl | -0.908 | 1 | 219.266 | 1 | 0 | 1 | 71.00 | 0.106 | CNC(=O)ON=C(SC)C(=O)N(C)C |
1125 | Thiometon | -3.323 | 1 | 246.359 | 0 | 0 | 7 | 18.46 | -3.091 | CCSCCSP(=S)(OC)OC |
1127 | Stirofos | -4.320 | 1 | 365.964 | 0 | 1 | 5 | 44.76 | -4.522 | COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl |
758 rows × 10 columns
Check Your Understanding
Rewrite the following query to use the mask syntax for filtering data.
df.query("`Number of H-Bond Donors`>3")
Solution
df[df["Number of H-Bond Donors"] > 3]
The query function can also use string methods. For example, to find all molecules whose SMILES string contains a double bond (=):
df.query("smiles.str.contains('=')")
Compound ID | ESOL solubility (mol/L) | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured solubility (mol/L) | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
1 | Fenfuram | -2.885 | 1 | 201.225 | 1 | 2 | 2 | 42.24 | -3.300 | Cc1occc1C(=O)Nc2ccccc2 |
2 | citral | -2.579 | 1 | 152.237 | 0 | 0 | 4 | 17.07 | -2.060 | CC(C)=CCCC(C)=CC(=O) |
8 | Dieldrin | -4.533 | 1 | 380.913 | 0 | 5 | 0 | 12.53 | -6.290 | ClC4=C(Cl)C5(Cl)C3C1CC(C2OC12)C3C4(Cl)C5(Cl)Cl |
9 | Rotenone | -5.246 | 1 | 394.423 | 0 | 5 | 3 | 63.22 | -4.420 | COc5cc4OCC3Oc2c1CC(Oc1ccc2C(=O)C3c4cc5OC)C(C)=C |
10 | 2-pyrrolidone | 0.243 | 1 | 85.106 | 1 | 1 | 0 | 29.10 | 1.070 | O=C1CCCN1 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
1118 | Isobutyl acetate | -1.463 | 1 | 116.160 | 0 | 0 | 2 | 26.30 | -1.210 | CC(C)COC(=O)C |
1122 | m-Nitrotoluene | -2.640 | 1 | 137.138 | 0 | 1 | 1 | 43.14 | -2.440 | Cc1cccc(c1)N(=O)=O |
1124 | Oxamyl | -0.908 | 1 | 219.266 | 1 | 0 | 1 | 71.00 | 0.106 | CNC(=O)ON=C(SC)C(=O)N(C)C |
1125 | Thiometon | -3.323 | 1 | 246.359 | 0 | 0 | 7 | 18.46 | -3.091 | CCSCCSP(=S)(OC)OC |
1127 | Stirofos | -4.320 | 1 | 365.964 | 0 | 1 | 5 | 44.76 | -4.522 | COP(=O)(OC)OC(=CCl)c1cc(Cl)c(Cl)cc1Cl |
595 rows × 10 columns
Sorting Data#
Pandas allows you to easily sort data using the sort_values method. Inside the function call, you list the column you would like to sort by. By default, the values will be sorted from lowest to highest (ascending). This method returns a sorted dataframe.
df.sort_values("Number of H-Bond Donors")
Compound ID | ESOL solubility (mol/L) | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured solubility (mol/L) | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
563 | Dimethyl phthalate | -2.347 | 1 | 194.186 | 0 | 1 | 2 | 52.60 | -1.660 | COC(=O)c1ccccc1C(=O)OC |
632 | 1,5-Dimethlnapthalene | -4.147 | 1 | 156.228 | 0 | 2 | 0 | 0.00 | -4.679 | Cc1cccc2c(C)cccc12 |
634 | Acenapthylene | -3.682 | 2 | 152.196 | 0 | 3 | 0 | 0.00 | -3.960 | C1=Cc2cccc3cccc1c23 |
635 | Ethyl butyrate | -2.254 | 1 | 144.214 | 0 | 0 | 5 | 26.30 | -1.280 | CCCCCOC(=O)CC |
637 | Benzo(a)pyrene | -6.007 | 2 | 252.316 | 0 | 5 | 0 | 0.00 | -8.699 | c1ccc2c(c1)cc3ccc4cccc5ccc2c3c45 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
276 | Sorbitol | 0.647 | 1 | 182.172 | 6 | 0 | 5 | 121.38 | 1.090 | OCC(O)C(O)C(O)C(O)CO |
0 | Amigdalin | -0.974 | 1 | 457.432 | 7 | 3 | 7 | 202.32 | -0.770 | OCC3OC(OCC2OC(OC(C#N)c1ccccc1)C(O)C(O)C2O)C(O)... |
441 | Lactose | 1.071 | 1 | 342.297 | 8 | 2 | 4 | 189.53 | -0.244 | OCC1OC(OC2C(O)C(O)C(O)OC2CO)C(O)C(O)C1O |
1025 | Sucrose | 0.310 | 1 | 342.297 | 8 | 2 | 5 | 189.53 | 0.790 | OCC2OC(OC1(CO)OC(CO)C(O)C1O)C(O)C(O)C2O |
298 | Raffinose | 0.496 | 1 | 504.438 | 11 | 3 | 8 | 268.68 | -0.410 | OCC1OC(CO)(OC2OC(COC3OC(CO)C(O)C(O)C3O)C(O)C(O... |
1128 rows × 10 columns
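To sort from highest to lowest instead, pass ascending=False. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({
    "Compound ID": ["Methane", "Raffinose", "Lactose"],
    "Number of H-Bond Donors": [0, 11, 8],
})

# ascending=False reverses the default low-to-high order.
df_desc = df.sort_values("Number of H-Bond Donors", ascending=False)
print(df_desc["Compound ID"].tolist())
```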
If you include more than one column name, the dataframe will be sorted by multiple columns: first by the first column indicated, then, within ties in that column, by the second. Consider the following example. We first sort by Number of H-Bond Donors. Within that sort, we sort by Molecular Weight.
df_sorted = df.sort_values(["Number of H-Bond Donors", "Molecular Weight"])
df_sorted.head()
Compound ID | ESOL solubility (mol/L) | Minimum Degree | Molecular Weight | Number of H-Bond Donors | Number of Rings | Number of Rotatable Bonds | Polar Surface Area | measured solubility (mol/L) | smiles | |
---|---|---|---|---|---|---|---|---|---|---|
934 | Methane | -0.636 | 0 | 16.043 | 0 | 0 | 0 | 0.0 | -0.90 | C |
953 | Ethyne | -0.252 | 1 | 26.038 | 0 | 0 | 0 | 0.0 | 0.29 | C#C |
689 | Ethylene | -0.815 | 1 | 28.054 | 0 | 0 | 0 | 0.0 | -0.40 | C=C |
600 | Ethane | -1.132 | 1 | 30.070 | 0 | 0 | 0 | 0.0 | -1.36 | CC |
202 | Propyne | -0.672 | 1 | 40.065 | 0 | 0 | 0 | 0.0 | -0.41 | CC#C |
Note the bolded indices to the left of the dataframe. After we perform our sort, these are no longer in order. In the sorted dataframe, loc and iloc do not return the same row for the same number.
df_sorted.loc[934]
Compound ID Methane
ESOL solubility (mol/L) -0.636
Minimum Degree 0
Molecular Weight 16.043
Number of H-Bond Donors 0
Number of Rings 0
Number of Rotatable Bonds 0
Polar Surface Area 0.0
measured solubility (mol/L) -0.9
smiles C
Name: 934, dtype: object
df_sorted.iloc[934]
Compound ID Bensulide
ESOL solubility (mol/L) -4.99
Minimum Degree 1
Molecular Weight 397.524
Number of H-Bond Donors 1
Number of Rings 1
Number of Rotatable Bonds 10
Polar Surface Area 64.63
measured solubility (mol/L) -4.2
smiles CC(C)OP(=S)(OC(C)C)SCCNS(=O)(=O)c1ccccc1
Name: 330, dtype: object
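If you want the row numbering to match the new order after a sort, you can renumber the index with reset_index. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({"Molecular Weight": [457.432, 16.043, 86.134]})

df_sorted = df.sort_values("Molecular Weight")

# drop=True discards the old index instead of keeping it as a new column.
df_sorted = df_sorted.reset_index(drop=True)
print(df_sorted)
```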
Grouping Data#
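The core tool for grouping in pandas is the groupby method, which splits the rows by the values in a column and applies an aggregation to each group. A minimal sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({
    "Number of Rings": [0, 0, 1, 1, 1],
    "Molecular Weight": [16.0, 30.1, 84.1, 78.1, 92.1],
})

# Mean molecular weight within each ring-count group.
means = df.groupby("Number of Rings")["Molecular Weight"].mean()
print(means)
```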
Performing Operations on Multiple Cells#
Broadcasting#
Pandas dataframes have a convenient feature called broadcasting. This means that if you are doing something like subtracting a number, multiplying, etc. on a column or dataframe, it can be done all at once instead of with a for loop. Consider if we wanted to express the polar surface area in \(nm^2\) instead of Å\(^2\). To do this, we would need to divide each value by 100.
Instead of writing a for loop that does this, we can just write the following code. This will return a pandas Series (a one-dimensional counterpart of a dataframe).
df["Polar Surface Area"] / 100
0 2.0232
1 0.4224
2 0.1707
3 0.0000
4 0.0000
...
1123 0.0000
1124 0.7100
1125 0.1846
1126 0.0000
1127 0.4476
Name: Polar Surface Area, Length: 1128, dtype: float64
To save this result, we need to capture the output in a new column. You can create a new column in a dataframe using the following syntax.
df["Polar Surface Area (nm^2)"] = df["Polar Surface Area"] / 100
You can also add, subtract, or multiply two columns with one another. For example, we might want to calculate the difference between the predicted and observed values for this solubility model.
df["ESOL solubility (mol/L)"] - df["measured solubility (mol/L)"]
0 -0.204
1 0.415
2 -0.519
3 1.252
4 -0.902
...
1123 -0.898
1124 -1.014
1125 -0.232
1126 0.935
1127 0.202
Length: 1128, dtype: float64
Check Your Understanding
Save the difference between the predicted and measured solubility in a column named “difference”.
Solution
df["difference"] = df["ESOL solubility (mol/L)"] - df["measured solubility (mol/L)"]
The .applymap method#
Sometimes the operations you want to do on a dataframe can’t be achieved with broadcasting. One might think to use a for loop in times like these; however, many modern Python libraries are designed so that you will rarely need to write a loop yourself.
In pandas, if you would like to perform an operation on every cell of a dataframe, you can use the applymap method. For example, if we wanted to know the number of letters in a compound name, we would use the built-in Python function len. Since len works on strings but not on numbers, let’s see how we would use it on the two columns which contain strings.
df[["smiles", "Compound ID"]].applymap(len)
smiles | Compound ID | |
---|---|---|
0 | 54 | 9 |
1 | 22 | 8 |
2 | 20 | 6 |
3 | 34 | 6 |
4 | 7 | 9 |
... | ... | ... |
1123 | 16 | 9 |
1124 | 25 | 6 |
1125 | 17 | 9 |
1126 | 7 | 14 |
1127 | 37 | 8 |
1128 rows × 2 columns
You can also do this with custom functions. Consider this function, which counts the number of C’s in a string. For a SMILES string, we’ll treat this as a rough count of the number of carbons.
def count_carbons(smiles):
    # Count letters that are "c" or "C" (aromatic or aliphatic carbon in SMILES).
    carbon_count = 0
    for letter in smiles:
        if letter.lower() == "c":
            carbon_count += 1
    return carbon_count
df[["smiles", "Compound ID"]].applymap(count_carbons)
smiles | Compound ID | |
---|---|---|
0 | 20 | 0 |
1 | 12 | 0 |
2 | 10 | 1 |
3 | 22 | 1 |
4 | 4 | 0 |
... | ... | ... |
1123 | 3 | 0 |
1124 | 7 | 0 |
1125 | 6 | 0 |
1126 | 5 | 0 |
1127 | 14 | 0 |
1128 rows × 2 columns
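One caveat: in pandas 2.1 and later, applymap was deprecated and renamed to DataFrame.map, so in newer code you will often see the map spelling instead. A sketch that works on either version, using a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({
    "smiles": ["CCO", "c1ccccc1"],
    "Compound ID": ["ethanol", "benzene"],
})

# DataFrame.map exists on pandas >= 2.1; fall back to applymap on older versions.
if hasattr(pd.DataFrame, "map"):
    lengths = df.map(len)
else:
    lengths = df.applymap(len)
print(lengths)
```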
The .apply method - for rows and columns#
The apply method is used to apply a function to either the rows or columns of a dataframe. If you use this on a single column, the function will be applied to every value.
df["Compound ID"].apply(len)
0 9
1 8
2 6
3 6
4 9
..
1123 9
1124 6
1125 9
1126 14
1127 8
Name: Compound ID, Length: 1128, dtype: int64
df[["smiles", "Compound ID"]].apply(len)
smiles 1128
Compound ID 1128
dtype: int64
Axes and Shapes#
We’ve already discussed the rows and columns in a dataframe. Also relevant here is the idea of a dataframe axis. Most functions in pandas can be used on either the rows or the columns, specified with the keyword axis. Typically, this axis defaults to 0 (rows).
Axis 0 runs down the rows, while axis 1 runs across the columns. When thinking about this, consider the iloc function for indexing: you specify the row index in position 0 (axis 0) and the column index in position 1 (axis 1).
The apply method works on either rows or columns. Consider the result of applying len along axis 1, i.e. across the columns, so that the function sees one row at a time. Can you explain why we see this answer?
df[["smiles", "Compound ID"]].apply(len, axis=1)
0 2
1 2
2 2
3 2
4 2
..
1123 2
1124 2
1125 2
1126 2
1127 2
Length: 1128, dtype: int64
For example, when we calculate the mean, we get an average for every column, not the average of all values. This is because the function is applied along axis 0, which runs down the dataframe.
df.mean()
/tmp/ipykernel_1674/3698961737.py:1: FutureWarning: The default value of numeric_only in DataFrame.mean is deprecated. In a future version, it will default to False. In addition, specifying 'numeric_only=None' is deprecated. Select only valid columns or specify the value of numeric_only to silence this warning.
df.mean()
ESOL solubility (mol/L) -2.988192
Minimum Degree 1.058511
Molecular Weight 203.937074
Number of H-Bond Donors 0.701241
Number of Rings 1.390957
Number of Rotatable Bonds 2.177305
Polar Surface Area 34.872881
measured solubility (mol/L) -3.050102
Polar Surface Area (nm^2) 0.348729
dtype: float64
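The FutureWarning above is pandas asking you to be explicit about non-numeric columns. Passing numeric_only=True restricts the reduction to numeric columns and silences the warning. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame with one non-numeric column.
df = pd.DataFrame({
    "Compound ID": ["Methane", "Ethane"],
    "Molecular Weight": [16.043, 30.070],
})

# Restrict the reduction to numeric columns explicitly.
means = df.mean(numeric_only=True)
print(means)
```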
We can switch this to giving an average for each row by setting the axis to 1.
df.mean(axis=1)
/tmp/ipykernel_1674/3676274908.py:1: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
df.mean(axis=1)
0 75.336800
1 27.078044
2 18.870967
3 30.096222
4 9.286778
...
1123 21.562556
1124 32.574889
1125 29.621067
1126 7.636222
1127 45.481067
Length: 1128, dtype: float64
You’ll notice that when we run a function on axis 0, the returned value has as many entries as there are columns in the original dataframe. The converse is true when applying across axis 1.
If we select a single column, the mean of all of its values is returned. This makes sense intuitively - it is how we might expect the function to behave. However, the function behaves this way because it is acting on axis 0.
df["Polar Surface Area"].mean()
34.87288120567376
df["Polar Surface Area"].shape
(1128,)
For this example, there is only one number describing the shape. This is because selecting a single column of data in a DataFrame results in a slightly different data type: instead of a pandas DataFrame, we have a pandas Series. Knowing and understanding the difference between a DataFrame and a Series will be important as you work with pandas. There are many functions which are shared between the two data types, but there are also some that are specific to one or the other.
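You can see the difference directly: single square brackets return a Series, while double square brackets return a one-column DataFrame. A sketch with a small stand-in frame:

```python
import pandas as pd

# Small stand-in frame.
df = pd.DataFrame({
    "Polar Surface Area": [202.32, 42.24],
    "Number of Rings": [3, 2],
})

series = df["Polar Surface Area"]    # single brackets -> Series
frame = df[["Polar Surface Area"]]   # double brackets -> one-column DataFrame

print(type(series).__name__, series.shape)
print(type(frame).__name__, frame.shape)
```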
The concepts covered in this chapter are helpful for data exploration, and will become essential as we continue to work with pandas.
Key Points
Pandas is used to examine, process, and analyze tabular data.
Pandas has many useful features that allow easy data manipulation and extraction.