python学习——pandas的拼接操作

pandas的拼接分为两种:

  • 级联:pd.concat, pd.append
  • 合并:pd.merge, pd.join

0. 回顾numpy的级联

============================================

练习12:

  1. 生成2个3*3的矩阵,对其分别进行两个维度上的级联

============================================

In [19]:

import numpy as np
import pandas as pd
from pandas import Series,DataFrame

In [20]:

nd = np.random.randint(0,10,size=(3,3))
nd

Out[20]:

array([[6, 3, 4],
       [3, 9, 8],
       [8, 7, 8]])

In [24]:

np.concatenate((nd,nd),axis=0)#0代表行间操作

Out[24]:

array([[6, 3, 4],
       [3, 9, 8],
       [8, 7, 8],
       [6, 3, 4],
       [3, 9, 8],
       [8, 7, 8]])

In [25]:

np.concatenate([nd,nd],axis=1)#1代表列间操作,()huo[]效果一样

Out[25]:

array([[6, 3, 4, 6, 3, 4],
       [3, 9, 8, 3, 9, 8],
       [8, 7, 8, 8, 7, 8]])

为方便讲解,我们首先定义一个生成DataFrame的函数:

In [26]:

def make_df(inds,cols):
    #字典的key作为列名进行展示
    data = {key:[key+str(i) for i in inds]for key in cols}
    
    return DataFrame(data,index=inds,columns=cols)

In [28]:

make_df([1,2],list('AB'))

Out[28]:

AB
1A1B1
2A2B2

1. 使用pd.concat()级联

pandas使用pd.concat函数,与np.concatenate函数类似,只是多了一些参数:

pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
          keys=None, levels=None, names=None, verify_integrity=False,
          copy=True)

1) 简单级联

和np.concatenate一样,优先增加行数(默认axis=0)

In [29]:

df1 = make_df([0,1],list('AB'))
df2 = make_df([2,3],list('AB'))

In [30]:

display(df1,df2)
AB
0A0B0
1A1B1
AB
2A2B2
3A3B3

可以通过设置axis来改变级联方向

In [31]:

pd.concat([df1,df2])

Out[31]:

AB
0A0B0
1A1B1
2A2B2
3A3B3

In [32]:

pd.concat((df1,df2),axis = 1)

Out[32]:

ABAB
0A0B0NaNNaN
1A1B1NaNNaN
2NaNNaNA2B2
3NaNNaNA3B3

注意index在级联时可以重复

也可以选择忽略ignore_index,重新索引

In [34]:

pd.concat((df1,df2),axis=1,ignore_index=True)

Out[34]:

0123
0A0B0NaNNaN
1A1B1NaNNaN
2NaNNaNA2B2
3NaNNaNA3B3

或者使用多层索引 keys

concat([x,y],keys=['x','y'])

In [13]:

pd.concat([df1,df2],keys=['x','y'])

Out[13]:

AB
x0A1B1
1A2B2
y0A3B3
1A4B4

In [ ]:

#pd 模块 import pandas as pd

#df1,df2 具体的实例
#级联的方法,属于上一级,DataFrame来自pandas

============================================

练习13:

  1. 想一想级联的应用场景?

  2. 使用昨天的知识,建立一个期中考试张三、李四的成绩表ddd

  3. 假设新增考试学科"计算机",如何实现?

  4. 新增王老五同学的成绩,如何实现?

============================================

In [ ]:

2) 不匹配级联

不匹配指的是级联的维度的索引不一致。例如纵向级联时列索引不一致,横向级联时行索引不一致

In [38]:

df1 = make_df([1,2],list('AB'))
df2 = make_df([2,4],list('BC'))
display(df1,df2)
AB
1A1B1
2A2B2
BC
2B2C2
4B4C4

有3种连接方式:

  • 外连接:补NaN(默认模式)

In [39]:

pd.concat([df1,df2])
C:\Users\BLX\AppData\Roaming\Python\Python37\site-packages\ipykernel_launcher.py:1: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.

To accept the future behavior, pass 'sort=False'.

To retain the current behavior and silence the warning, pass 'sort=True'.

  """Entry point for launching an IPython kernel.

Out[39]:

ABC
1A1B1NaN
2A2B2NaN
2NaNB2C2
4NaNB4C4
  • 内连接:只连接匹配的项

In [41]:

#合并显示共有数据
pd.concat((df1,df2),join = 'inner',axis = 1)

Out[41]:

ABBC
2A2B2B2C2
  • 连接指定轴 join_axes

In [42]:

df2.columns

Out[42]:

Index(['B', 'C'], dtype='object')

In [43]:

#join_axex以某个DataFrame的列索引为新的列索引值
pd.concat([df1,df2],join_axes=[df2.columns])

Out[43]:

BC
1B1NaN
2B2NaN
2B2C2
4B4C4

============================================

练习14:

假设【期末】考试ddd2的成绩没有张三的,只有李四、王老五、赵小六的,使用多种方法级联

============================================

3) 使用append()函数添加

由于在后面级联的使用非常普遍,因此有一个函数append专门用于在后面添加

In [44]:

display(df1,df2)
AB
1A1B1
2A2B2
BC
2B2C2
4B4C4

In [49]:

#append函数属于DataFrame,concat这函数属于pandas模块
#pd.concat((df1,df2))
df1.append(df2)

Out[49]:

ABC
1A1B1NaN
2A2B2NaN
2NaNB2C2
4NaNB4C4

============================================

练习15:

新建一个只有张三李四王老五的期末考试成绩单ddd3,使用append()与期中考试成绩表ddd级联

============================================

2. 使用pd.merge()合并

merge与concat的区别在于,merge需要依据某一共同的行或列来进行合并

使用pd.merge()合并时,会自动根据两者相同column名称的那一列,作为key来进行合并。

注意每一列元素的顺序不要求一致

1) 一对一合并

In [54]:

#merge根据相同的元素进行合并的
df1 = DataFrame({'employee':['Po','Sara','Danis'],
                 'group':['sail','couting','marketing']})
df2 = DataFrame({'employee':['Po','Sara','Bush'],
                 'work_time':[2,3,1]})
display(df1,df2)
employeegroup
0Posail
1Saracouting
2Danismarketing
employeework_time
0Po2
1Sara3
2Bush1

In [55]:

pd.merge(df1,df2)

Out[55]:

employeegroupwork_time
0Posail2
1Saracouting3

In [56]:

df1.merge(df2)

Out[56]:

employeegroupwork_time
0Posail2
1Saracouting3

2) 多对一合并

In [57]:

df1 = DataFrame({'employee':['Po','Sara','Danis'],
                 'group':['sail','couting','marketing']})
df2 = DataFrame({'employee':['Po','Po','Bush'],
                 'work_time':[2,3,1]})
display(df1,df2)
employeegroup
0Posail
1Saracouting
2Danismarketing
employeework_time
0Po2
1Po3
2Bush1

In [58]:

pd.merge(df1,df2)

Out[58]:

employeegroupwork_time
0Posail2
1Posail3

3) 多对多合并

In [61]:

df1 = DataFrame({'employee':['Po','Po','Danis'],
                 'group':['sail','couting','marketing']})
df2 = DataFrame({'employee':['Po','Po','Bush'],
                 'work_time':[2,3,1]})
display(df1,df2)
employeegroup
0Posail
1Pocouting
2Danismarketing
employeework_time
0Po2
1Po3
2Bush1

In [62]:

pd.merge(df1,df2)

Out[62]:

employeegroupwork_time
0Posail2
1Posail3
2Pocouting2
3Pocouting3

4) key的规范化

  • 使用on=显式指定哪一列为key,当有多个key相同时使用

In [66]:

df3 = DataFrame({'employee':['Po','Summer','Flower'],
                 'group':['sail','marketing','serch'],
                 'salary':[12000,10000,8000]})
df4 = DataFrame({'employee':['Po','Winter','Flower'],
                 'group':['marketing','marketing','serch'],
                 'work_time':[2,1,5]})
display(df3,df4)
employeegroupsalary
0Posail12000
1Summermarketing10000
2Flowerserch8000
employeegroupwork_time
0Pomarketing2
1Wintermarketing1
2Flowerserch5

In [67]:

pd.merge(df3,df4)

Out[67]:

employeegroupsalarywork_time
0Flowerserch80005

In [70]:

pd.merge(df3,df4,on='employee')

Out[70]:

employeegroup_xsalarygroup_ywork_time
0Posail12000marketing2
1Flowerserch8000serch5

In [73]:

pd.merge(df3,df4,on='group',suffixes=['_A','_B'])

Out[73]:

employee_Agroupsalaryemployee_Bwork_time
0Summermarketing10000Po2
1Summermarketing10000Winter1
2Flowerserch8000Flower5
  • 使用left_on和right_on指定左右两边的列作为key,当左右两边的key都不想等时使用
  • 参数1为左,参数2为右

In [79]:

df3 = DataFrame({'employer':['Po','Summer','Flower'],
                                    'Team':['sail','marketing','serch'],
                                    'salary':[12000,10000,8000]})
df4 = DataFrame({'employee':['Po','Winter','Flower'],
                                    'group':['marketing','marketing','serch'],
                                    'work_time':[2,1,5]})
display(df3,df4)
employerTeamsalary
0Posail12000
1Summermarketing10000
2Flowerserch8000
employeegroupwork_time
0Pomarketing2
1Wintermarketing1
2Flowerserch5

In [81]:

pd.merge(df3,df4,left_on='employer',right_on='employee')

Out[81]:

employerTeamsalaryemployeegroupwork_time
0Posail12000Pomarketing2
1Flowerserch8000Flowerserch5

In [82]:

pd.merge(df3,df4,left_on='Team',right_on='group')

Out[82]:

employerTeamsalaryemployeegroupwork_time
0Summermarketing10000Pomarketing2
1Summermarketing10000Wintermarketing1
2Flowerserch8000Flowerserch5

============================================

练习16:

  1. 假设有两份成绩单,除了ddd是张三李四王老五之外,还有ddd4是张三和赵小六的成绩单,如何合并?

  2. 如果ddd4中张三的名字被打错了,成为了张十三,怎么办?

  3. 自行练习多对一,多对多的情况

  4. 自学left_index,right_index

============================================

5) 内合并与外合并

  • 内合并:只保留两者都有的key(默认模式)

In [85]:

df1 = DataFrame({'age':[18,22,33],'height':[175,169,180]})

df2 = DataFrame({'age':[18,23,31],'weight':[65,70,80]})

In [86]:

pd.merge(df1,df2)

Out[86]:

ageheightweight
01817565

In [87]:

df1.merge(df2,how='inner')

Out[87]:

ageheightweight
01817565
  • 外合并 how='outer':补NaN

In [88]:

df1.merge(df2,how = 'outer')

Out[88]:

ageheightweight
018175.065.0
122169.0NaN
233180.0NaN
323NaN70.0
431NaN80.0
  • 左合并、右合并:how='left',how='right',

In [89]:

df1.merge(df2,how = 'left')#保留左侧

Out[89]:

ageheightweight
01817565.0
122169NaN
233180NaN

In [90]:

pd.merge(df1,df2,how='right')#保留右侧

Out[90]:

ageheightweight
018175.065
123NaN70
231NaN80

============================================

练习17:

  1. 如果只有张三赵小六语数英三个科目的成绩,如何合并?

  2. 考虑应用情景,使用多种方式合并ddd与ddd4

============================================

6) 列冲突的解决

当列冲突时,即有多个列名称相同时,需要使用on=来指定哪一个列作为key,配合suffixes指定冲突列名

可以使用suffixes=自己指定后缀

In [91]:

display(df3,df4)
employerTeamsalary
0Posail12000
1Summermarketing10000
2Flowerserch8000
employeegroupwork_time
0Pomarketing2
1Wintermarketing1
2Flowerserch5

In [93]:

df3.columns = ['employee','group','salary']
display(df3)
employeegroupsalary
0Posail12000
1Summermarketing10000
2Flowerserch8000

In [94]:

pd.merge(df3,df4,on='employee',suffixes=['_李','_王'])

Out[94]:

employeegroup_李salarygroup_王work_time
0Posail12000marketing2
1Flowerserch8000serch5

============================================

练习18:

假设有两个同学都叫李四,ddd5、ddd6都是张三和李四的成绩表,如何合并?

============================================

作业

3. 案例分析:美国各州人口数据分析

首先导入文件,并查看数据样本

In [62]:

import numpy as np
import pandas as pd
from pandas import Series,DataFrame

In [63]:

#使用pandas读取数据
pop = pd.read_csv('../../data/state-population.csv')

areas = pd.read_csv('../../data/state-areas.csv')

abb = pd.read_csv('../../data/state-abbrevs.csv')

In [64]:

pop.shape

Out[64]:

(2544, 4)

In [65]:

pop.head()

Out[65]:

state/regionagesyearpopulation
0ALunder1820121117489.0
1ALtotal20124817528.0
2ALunder1820101130966.0
3ALtotal20104785570.0
4ALunder1820111125763.0

In [70]:

areas.shape

Out[70]:

(52, 2)

In [69]:

abb.shape

Out[69]:

(51, 2)

合并pop与abbrevs两个DataFrame,分别依据state/region列和abbreviation列来合并。

为了保留所有信息,使用外合并。

In [71]:

pop.head()

Out[71]:

state/regionagesyearpopulation
0ALunder1820121117489.0
1ALtotal20124817528.0
2ALunder1820101130966.0
3ALtotal20104785570.0
4ALunder1820111125763.0

In [72]:

abb.head()

Out[72]:

stateabbreviation
0AlabamaAL
1AlaskaAK
2ArizonaAZ
3ArkansasAR
4CaliforniaCA

In [73]:

display(pop.shape,abb.shape)
(2544, 4)
(51, 2)

In [78]:

#此时的场景 left == outer left数据大于abb
#left效果比outer差一些
#abb 河北
pop_m = pop.merge(abb,left_on='state/region',right_on='abbreviation',how = 'outer')
pop_m.shape

Out[78]:

(2544, 6)

去除abbreviation的那一列(axis=1)

In [79]:

pop_m.head()

Out[79]:

state/regionagesyearpopulationstateabbreviation
0ALunder1820121117489.0AlabamaAL
1ALtotal20124817528.0AlabamaAL
2ALunder1820101130966.0AlabamaAL
3ALtotal20104785570.0AlabamaAL
4ALunder1820111125763.0AlabamaAL

In [83]:

pop_m.drop('abbreviation',axis = 1,inplace=True)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-83-15dcfc478d0b> in <module>()
----> 1 pop_m.drop('abbreviation',axis = 1,inplace=True)

/usr/local/lib/python3.5/dist-packages/pandas/core/generic.py in drop(self, labels, axis, level, inplace, errors)
   2159                 new_axis = axis.drop(labels, level=level, errors=errors)
   2160             else:
-> 2161                 new_axis = axis.drop(labels, errors=errors)
   2162             dropped = self.reindex(**{axis_name: new_axis})
   2163             try:

/usr/local/lib/python3.5/dist-packages/pandas/core/indexes/base.py in drop(self, labels, errors)
   3622             if errors != 'ignore':
   3623                 raise ValueError('labels %s not contained in axis' %
-> 3624                                  labels[mask])
   3625             indexer = indexer[~mask]
   3626         return self.delete(indexer)

ValueError: labels ['abbreviation'] not contained in axis

In [82]:

pop_m.head()

Out[82]:

state/regionagesyearpopulationstate
0ALunder1820121117489.0Alabama
1ALtotal20124817528.0Alabama
2ALunder1820101130966.0Alabama
3ALtotal20104785570.0Alabama
4ALunder1820111125763.0Alabama

查看存在缺失数据的列。

使用.isnull().any(),只有某一列存在一个缺失数据,就会显示True。

In [88]:

pop_m.isnull().any()

Out[88]:

state/region    False
ages            False
year            False
population       True
state            True
dtype: bool

In [ ]:

#population 和 state这两列有数据缺失的情况

查看缺失数据

In [92]:

#为空的行索引
pop_m.loc[pop_m.isnull().any(axis = 1)]

Out[92]:

state/regionagesyearpopulationstate
2448PRunder181990NaNNaN
2449PRtotal1990NaNNaN
2450PRtotal1991NaNNaN
2451PRunder181991NaNNaN
2452PRtotal1993NaNNaN
2453PRunder181993NaNNaN
2454PRunder181992NaNNaN
2455PRtotal1992NaNNaN
2456PRunder181994NaNNaN
2457PRtotal1994NaNNaN
2458PRtotal1995NaNNaN
2459PRunder181995NaNNaN
2460PRunder181996NaNNaN
2461PRtotal1996NaNNaN
2462PRunder181998NaNNaN
2463PRtotal1998NaNNaN
2464PRtotal1997NaNNaN
2465PRunder181997NaNNaN
2466PRtotal1999NaNNaN
2467PRunder181999NaNNaN
2468PRtotal20003810605.0NaN
2469PRunder1820001089063.0NaN
2470PRtotal20013818774.0NaN
2471PRunder1820011077566.0NaN
2472PRtotal20023823701.0NaN
2473PRunder1820021065051.0NaN
2474PRtotal20043826878.0NaN
2475PRunder1820041035919.0NaN
2476PRtotal20033826095.0NaN
2477PRunder1820031050615.0NaN
..................
2514USAunder18199971946051.0NaN
2515USAtotal2000282162411.0NaN
2516USAunder18200072376189.0NaN
2517USAtotal1999279040181.0NaN
2518USAtotal2001284968955.0NaN
2519USAunder18200172671175.0NaN
2520USAtotal2002287625193.0NaN
2521USAunder18200272936457.0NaN
2522USAtotal2003290107933.0NaN
2523USAunder18200373100758.0NaN
2524USAtotal2004292805298.0NaN
2525USAunder18200473297735.0NaN
2526USAtotal2005295516599.0NaN
2527USAunder18200573523669.0NaN
2528USAtotal2006298379912.0NaN
2529USAunder18200673757714.0NaN
2530USAtotal2007301231207.0NaN
2531USAunder18200774019405.0NaN
2532USAtotal2008304093966.0NaN
2533USAunder18200874104602.0NaN
2534USAunder18201373585872.0NaN
2535USAtotal2013316128839.0NaN
2536USAtotal2009306771529.0NaN
2537USAunder18200974134167.0NaN
2538USAunder18201074119556.0NaN
2539USAtotal2010309326295.0NaN
2540USAunder18201173902222.0NaN
2541USAtotal2011311582564.0NaN
2542USAunder18201273708179.0NaN
2543USAtotal2012313873685.0NaN

96 rows × 5 columns

根据数据是否缺失情况显示数据,如果缺失为True,那么显示

找到有哪些state/region使得state的值为NaN,使用unique()查看非重复值

In [94]:

condition = pop_m['state'].isnull()
pop_m['state/region'][condition].unique()

Out[94]:

array(['PR', 'USA'], dtype=object)

In [95]:

areas

Out[95]:

statearea (sq. mi)
0Alabama52423
1Alaska656425
2Arizona114006
3Arkansas53182
4California163707
5Colorado104100
6Connecticut5544
7Delaware1954
8Florida65758
9Georgia59441
10Hawaii10932
11Idaho83574
12Illinois57918
13Indiana36420
14Iowa56276
15Kansas82282
16Kentucky40411
17Louisiana51843
18Maine35387
19Maryland12407
20Massachusetts10555
21Michigan96810
22Minnesota86943
23Mississippi48434
24Missouri69709
25Montana147046
26Nebraska77358
27Nevada110567
28New Hampshire9351
29New Jersey8722
30New Mexico121593
31New York54475
32North Carolina53821
33North Dakota70704
34Ohio44828
35Oklahoma69903
36Oregon98386
37Pennsylvania46058
38Rhode Island1545
39South Carolina32007
40South Dakota77121
41Tennessee42146
42Texas268601
43Utah84904
44Vermont9615
45Virginia42769
46Washington71303
47West Virginia24231
48Wisconsin65503
49Wyoming97818
50District of Columbia68
51Puerto Rico3515

In [ ]:

只有两个州,对应的州名为空

为找到的这些state/region的state项补上正确的值,从而去除掉state这一列的所有NaN!

记住这样清除缺失数据NaN的方法!

In [96]:

#Puerto Rico 

conditon = pop_m['state/region'] == 'PR'
condition

Out[96]:

0       False
1       False
2       False
3       False
4       False
5       False
6       False
7       False
8       False
9       False
10      False
11      False
12      False
13      False
14      False
15      False
16      False
17      False
18      False
19      False
20      False
21      False
22      False
23      False
24      False
25      False
26      False
27      False
28      False
29      False
        ...  
2514     True
2515     True
2516     True
2517     True
2518     True
2519     True
2520     True
2521     True
2522     True
2523     True
2524     True
2525     True
2526     True
2527     True
2528     True
2529     True
2530     True
2531     True
2532     True
2533     True
2534     True
2535     True
2536     True
2537     True
2538     True
2539     True
2540     True
2541     True
2542     True
2543     True
Name: state, Length: 2544, dtype: bool

In [97]:

pop_m['state'][condition] = 'Puerto Rico'
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  """Entry point for launching an IPython kernel.

In [99]:

condition = pop_m['state/region'] == 'USA'
pop_m['state'][condition] = 'United State'
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  

In [100]:

#刚才的填补操作,起作用了
pop_m.isnull().any()

Out[100]:

state/region    False
ages            False
year            False
population       True
state           False
dtype: bool

合并各州面积数据areas,使用左合并。

思考一下为什么使用外合并?

In [102]:

pop.head()
#人口的DataFrame和abb合并,有了州名全程
#可以和areas DataFrame进行合并

Out[102]:

state/regionagesyearpopulation
0ALunder1820121117489.0
1ALtotal20124817528.0
2ALunder1820101130966.0
3ALtotal20104785570.0
4ALunder1820111125763.0

In [103]:

pop_areas_m = pop_m.merge(areas,how = 'outer')

继续寻找存在缺失数据的列

In [105]:

pop_areas_m.shape

Out[105]:

(2544, 6)

In [109]:

areas

Out[109]:

statearea (sq. mi)
0Alabama52423
1Alaska656425
2Arizona114006
3Arkansas53182
4California163707
5Colorado104100
6Connecticut5544
7Delaware1954
8Florida65758
9Georgia59441
10Hawaii10932
11Idaho83574
12Illinois57918
13Indiana36420
14Iowa56276
15Kansas82282
16Kentucky40411
17Louisiana51843
18Maine35387
19Maryland12407
20Massachusetts10555
21Michigan96810
22Minnesota86943
23Mississippi48434
24Missouri69709
25Montana147046
26Nebraska77358
27Nevada110567
28New Hampshire9351
29New Jersey8722
30New Mexico121593
31New York54475
32North Carolina53821
33North Dakota70704
34Ohio44828
35Oklahoma69903
36Oregon98386
37Pennsylvania46058
38Rhode Island1545
39South Carolina32007
40South Dakota77121
41Tennessee42146
42Texas268601
43Utah84904
44Vermont9615
45Virginia42769
46Washington71303
47West Virginia24231
48Wisconsin65503
49Wyoming97818
50District of Columbia68
51Puerto Rico3515

In [106]:

pop_areas_m.isnull().any()

Out[106]:

state/region     False
ages             False
year             False
population        True
state            False
area (sq. mi)     True
dtype: bool

我们会发现area(sq.mi)这一列有缺失数据,为了找出是哪一行,我们需要找出是哪个state没有数据

In [110]:

cond = pop_areas_m['area (sq. mi)'].isnull()
cond

Out[110]:

0       False
1       False
2       False
3       False
4       False
5       False
6       False
7       False
8       False
9       False
10      False
11      False
12      False
13      False
14      False
15      False
16      False
17      False
18      False
19      False
20      False
21      False
22      False
23      False
24      False
25      False
26      False
27      False
28      False
29      False
        ...  
2514     True
2515     True
2516     True
2517     True
2518     True
2519     True
2520     True
2521     True
2522     True
2523     True
2524     True
2525     True
2526     True
2527     True
2528     True
2529     True
2530     True
2531     True
2532     True
2533     True
2534     True
2535     True
2536     True
2537     True
2538     True
2539     True
2540     True
2541     True
2542     True
2543     True
Name: area (sq. mi), Length: 2544, dtype: bool

In [111]:

pop_areas_m['state/region'][cond]

Out[111]:

2496    USA
2497    USA
2498    USA
2499    USA
2500    USA
2501    USA
2502    USA
2503    USA
2504    USA
2505    USA
2506    USA
2507    USA
2508    USA
2509    USA
2510    USA
2511    USA
2512    USA
2513    USA
2514    USA
2515    USA
2516    USA
2517    USA
2518    USA
2519    USA
2520    USA
2521    USA
2522    USA
2523    USA
2524    USA
2525    USA
2526    USA
2527    USA
2528    USA
2529    USA
2530    USA
2531    USA
2532    USA
2533    USA
2534    USA
2535    USA
2536    USA
2537    USA
2538    USA
2539    USA
2540    USA
2541    USA
2542    USA
2543    USA
Name: state/region, dtype: object

去除含有缺失数据的行

In [112]:

pop_areas_m.shape

Out[112]:

(2544, 6)

In [114]:

pop_areas_r = pop_areas_m.dropna()

In [115]:

pop_areas_r.shape

Out[115]:

(2476, 6)

查看数据是否缺失

In [116]:

pop_areas_r.isnull().any()

Out[116]:

state/region     False
ages             False
year             False
population       False
state            False
area (sq. mi)    False
dtype: bool

找出2010年的全民人口数据,df.query(查询语句)

In [117]:

pop_areas_r.head()

Out[117]:

state/regionagesyearpopulationstatearea (sq. mi)
0ALunder1820121117489.0Alabama52423.0
1ALtotal20124817528.0Alabama52423.0
2ALunder1820101130966.0Alabama52423.0
3ALtotal20104785570.0Alabama52423.0
4ALunder1820111125763.0Alabama52423.0

In [120]:

t_2010 = pop_areas_r.query("ages == 'total' and year == 2010")

In [121]:

t_2010.shape

Out[121]:

(52, 6)

In [122]:

t_2010

Out[122]:

state/regionagesyearpopulationstatearea (sq. mi)
3ALtotal20104785570.0Alabama52423.0
91AKtotal2010713868.0Alaska656425.0
101AZtotal20106408790.0Arizona114006.0
189ARtotal20102922280.0Arkansas53182.0
197CAtotal201037333601.0California163707.0
283COtotal20105048196.0Colorado104100.0
293CTtotal20103579210.0Connecticut5544.0
379DEtotal2010899711.0Delaware1954.0
389DCtotal2010605125.0District of Columbia68.0
475FLtotal201018846054.0Florida65758.0
485GAtotal20109713248.0Georgia59441.0
570HItotal20101363731.0Hawaii10932.0
581IDtotal20101570718.0Idaho83574.0
666ILtotal201012839695.0Illinois57918.0
677INtotal20106489965.0Indiana36420.0
762IAtotal20103050314.0Iowa56276.0
773KStotal20102858910.0Kansas82282.0
858KYtotal20104347698.0Kentucky40411.0
869LAtotal20104545392.0Louisiana51843.0
954MEtotal20101327366.0Maine35387.0
965MDtotal20105787193.0Maryland12407.0
1050MAtotal20106563263.0Massachusetts10555.0
1061MItotal20109876149.0Michigan96810.0
1146MNtotal20105310337.0Minnesota86943.0
1157MStotal20102970047.0Mississippi48434.0
1242MOtotal20105996063.0Missouri69709.0
1253MTtotal2010990527.0Montana147046.0
1338NEtotal20101829838.0Nebraska77358.0
1349NVtotal20102703230.0Nevada110567.0
1434NHtotal20101316614.0New Hampshire9351.0
1445NJtotal20108802707.0New Jersey8722.0
1530NMtotal20102064982.0New Mexico121593.0
1541NYtotal201019398228.0New York54475.0
1626NCtotal20109559533.0North Carolina53821.0
1637NDtotal2010674344.0North Dakota70704.0
1722OHtotal201011545435.0Ohio44828.0
1733OKtotal20103759263.0Oklahoma69903.0
1818ORtotal20103837208.0Oregon98386.0
1829PAtotal201012710472.0Pennsylvania46058.0
1914RItotal20101052669.0Rhode Island1545.0
1925SCtotal20104636361.0South Carolina32007.0
2010SDtotal2010816211.0South Dakota77121.0
2021TNtotal20106356683.0Tennessee42146.0
2106TXtotal201025245178.0Texas268601.0
2117UTtotal20102774424.0Utah84904.0
2202VTtotal2010625793.0Vermont9615.0
2213VAtotal20108024417.0Virginia42769.0
2298WAtotal20106742256.0Washington71303.0
2309WVtotal20101854146.0West Virginia24231.0
2394WItotal20105689060.0Wisconsin65503.0
2405WYtotal2010564222.0Wyoming97818.0
2490PRtotal20103721208.0Puerto Rico3515.0

对查询结果进行处理,以state列作为新的行索引:set_index

In [124]:

t_2010.set_index('state',inplace=True)

In [126]:

t_2010

Out[126]:

state/regionagesyearpopulationarea (sq. mi)
state
AlabamaALtotal20104785570.052423.0
AlaskaAKtotal2010713868.0656425.0
ArizonaAZtotal20106408790.0114006.0
ArkansasARtotal20102922280.053182.0
CaliforniaCAtotal201037333601.0163707.0
ColoradoCOtotal20105048196.0104100.0
ConnecticutCTtotal20103579210.05544.0
DelawareDEtotal2010899711.01954.0
District of ColumbiaDCtotal2010605125.068.0
FloridaFLtotal201018846054.065758.0
GeorgiaGAtotal20109713248.059441.0
HawaiiHItotal20101363731.010932.0
IdahoIDtotal20101570718.083574.0
IllinoisILtotal201012839695.057918.0
IndianaINtotal20106489965.036420.0
IowaIAtotal20103050314.056276.0
KansasKStotal20102858910.082282.0
KentuckyKYtotal20104347698.040411.0
LouisianaLAtotal20104545392.051843.0
MaineMEtotal20101327366.035387.0
MarylandMDtotal20105787193.012407.0
MassachusettsMAtotal20106563263.010555.0
MichiganMItotal20109876149.096810.0
MinnesotaMNtotal20105310337.086943.0
MississippiMStotal20102970047.048434.0
MissouriMOtotal20105996063.069709.0
MontanaMTtotal2010990527.0147046.0
NebraskaNEtotal20101829838.077358.0
NevadaNVtotal20102703230.0110567.0
New HampshireNHtotal20101316614.09351.0
New JerseyNJtotal20108802707.08722.0
New MexicoNMtotal20102064982.0121593.0
New YorkNYtotal201019398228.054475.0
North CarolinaNCtotal20109559533.053821.0
North DakotaNDtotal2010674344.070704.0
OhioOHtotal201011545435.044828.0
OklahomaOKtotal20103759263.069903.0
OregonORtotal20103837208.098386.0
PennsylvaniaPAtotal201012710472.046058.0
Rhode IslandRItotal20101052669.01545.0
South CarolinaSCtotal20104636361.032007.0
South DakotaSDtotal2010816211.077121.0
TennesseeTNtotal20106356683.042146.0
TexasTXtotal201025245178.0268601.0
UtahUTtotal20102774424.084904.0
VermontVTtotal2010625793.09615.0
VirginiaVAtotal20108024417.042769.0
WashingtonWAtotal20106742256.071303.0
West VirginiaWVtotal20101854146.024231.0
WisconsinWItotal20105689060.065503.0
WyomingWYtotal2010564222.097818.0
Puerto RicoPRtotal20103721208.03515.0

计算人口密度。注意是Series/Series,其结果还是一个Series。

In [127]:

pop_density = t_2010['population']/t_2010["area (sq. mi)"]
pop_density

Out[127]:

state
Alabama                   91.287603
Alaska                     1.087509
Arizona                   56.214497
Arkansas                  54.948667
California               228.051342
Colorado                  48.493718
Connecticut              645.600649
Delaware                 460.445752
District of Columbia    8898.897059
Florida                  286.597129
Georgia                  163.409902
Hawaii                   124.746707
Idaho                     18.794338
Illinois                 221.687472
Indiana                  178.197831
Iowa                      54.202751
Kansas                    34.745266
Kentucky                 107.586994
Louisiana                 87.676099
Maine                     37.509990
Maryland                 466.445797
Massachusetts            621.815538
Michigan                 102.015794
Minnesota                 61.078373
Mississippi               61.321530
Missouri                  86.015622
Montana                    6.736171
Nebraska                  23.654153
Nevada                    24.448796
New Hampshire            140.799273
New Jersey              1009.253268
New Mexico                16.982737
New York                 356.094135
North Carolina           177.617157
North Dakota               9.537565
Ohio                     257.549634
Oklahoma                  53.778278
Oregon                    39.001565
Pennsylvania             275.966651
Rhode Island             681.339159
South Carolina           144.854594
South Dakota              10.583512
Tennessee                150.825298
Texas                     93.987655
Utah                      32.677188
Vermont                   65.085075
Virginia                 187.622273
Washington                94.557817
West Virginia             76.519582
Wisconsin                 86.851900
Wyoming                    5.768079
Puerto Rico             1058.665149
dtype: float64

排序,并找出人口密度最高的五个州sort_values()

In [128]:

type(pop_density)

Out[128]:

pandas.core.series.Series

In [130]:

pop_density.sort_values(inplace=True)

找出人口密度最低的五个州

In [131]:

pop_density[:5]

Out[131]:

state
Alaska           1.087509
Wyoming          5.768079
Montana          6.736171
North Dakota     9.537565
South Dakota    10.583512
dtype: float64

In [132]:

pop_density.tail()

Out[132]:

state
Connecticut              645.600649
Rhode Island             681.339159
New Jersey              1009.253268
Puerto Rico             1058.665149
District of Columbia    8898.897059
dtype: float64

要点总结:

  • 统一用loc()索引
  • 善于使用.isnull().any()找到存在NaN的列
  • 善于使用.unique()确定该列中哪些key是我们需要的
  • 一般使用外合并、左合并,目的只有一个:宁愿该列是NaN也不要丢弃其他列的信息

回顾:Series/DataFrame运算与ndarray运算的区别

  • Series与DataFrame没有广播,如果对应index没有值,则记为NaN;或者使用add的fill_value来补缺失值
  • ndarray有广播,通过重复已有值来计算