PySpark DataFrame: Sum a Column While Grouping by Another Column

In this article, we will discuss how to sum one column of a PySpark DataFrame while grouping by another column, using Python.

Let's create a DataFrame for demonstration:

Python3
# importing module
import pyspark
 
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
 
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
 
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
 
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
 
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
 
# display
dataframe.show()


Output:
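The original output screenshot is not reproduced here; with the data above, show() should print:

+---+-------+----+-----+
| ID|   NAME|DEPT|  FEE|
+---+-------+----+-----+
|  1| sravan|  IT|45000|
|  2| ojaswi|  CS|85000|
|  3| rohith|  CS|41000|
|  4|sridevi|  IT|56000|
|  5|  bobby| ECE|45000|
|  6|gayatri| ECE|49000|
|  7|gnanesh|  CS|45000|
|  8|  bhanu|Mech|21000|
+---+-------+----+-----+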

Method 1: Using the groupBy() method

In PySpark, groupBy() collects identical values into groups on the DataFrame, so that aggregate functions can be run on each group. Here the aggregate function is sum().

sum(): returns the total value for each group.

Example: group by DEPT and sum the FEE column

Python3

# importing module
import pyspark
 
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
 
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
 
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
 
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
 
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
 
# group by DEPT and sum the FEE column
dataframe.groupBy('DEPT').sum('FEE').show()

Output:
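Grouping on DEPT yields one row per department, and the aggregated column is named sum(FEE) by default. Row order is not guaranteed, so the groups may appear in a different order on your machine:

+----+--------+
|DEPT|sum(FEE)|
+----+--------+
|  CS|  171000|
|  IT|  101000|
|Mech|   21000|
| ECE|   94000|
+----+--------+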

Method 2: Using the agg() function with groupBy()

Here we have to import the sum function from the pyspark.sql.functions module to use it with the agg() method.
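Note that importing sum this way shadows Python's built-in sum in the current namespace. A minimal sketch of the usual workaround is to import it under an alias (the name sum_ below is an arbitrary choice):

# import the aggregate under an alias so the built-in sum stays usable
from pyspark.sql.functions import sum as sum_

dataframe.groupBy("DEPT").agg(sum_("FEE")).show()

The full example below keeps the direct import to match the original code.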

Python3

# importing module
import pyspark
 
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
 
# import the sum aggregate function (this shadows the built-in sum)
from pyspark.sql.functions import sum
 
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
 
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
 
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
 
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
 
# group by DEPT and aggregate FEE with sum()
dataframe.groupBy("DEPT").agg(sum("FEE")).show()

Output:
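The result is the same as with Method 1, since agg(sum("FEE")) computes the same aggregate (row order may again vary):

+----+--------+
|DEPT|sum(FEE)|
+----+--------+
|  CS|  171000|
|  IT|  101000|
|Mech|   21000|
| ECE|   94000|
+----+--------+

If a friendlier column name is wanted, the aggregate can be renamed, e.g. dataframe.groupBy("DEPT").agg(sum("FEE").alias("TOTAL_FEE")); the alias name here is arbitrary.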

Method 3: Using a Window function with sum

A window function partitions the rows of a DataFrame by a column.

We can partition the data on the column that holds the grouping values and then apply the sum() aggregate over each partition. Unlike groupBy(), this keeps every original row and attaches the per-group total as a new column.

Example: get the total FEE for each DEPT

Python3

# importing module
import pyspark
 
# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
 
# import functions
from pyspark.sql import functions as f
 
# import window module
from pyspark.sql import Window
 
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
 
# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]
        ]
 
# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']
 
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
 
# summing using window function
dataframe.withColumn('Total Branch Sum', f.sum(
    'FEE').over(Window.partitionBy('DEPT'))).show()

Output:
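Every row is kept and a Total Branch Sum column is added holding the per-department total. One possible ordering (partitions may appear in a different order):

+---+-------+----+-----+----------------+
| ID|   NAME|DEPT|  FEE|Total Branch Sum|
+---+-------+----+-----+----------------+
|  2| ojaswi|  CS|85000|          171000|
|  3| rohith|  CS|41000|          171000|
|  7|gnanesh|  CS|45000|          171000|
|  5|  bobby| ECE|45000|           94000|
|  6|gayatri| ECE|49000|           94000|
|  1| sravan|  IT|45000|          101000|
|  4|sridevi|  IT|56000|          101000|
|  8|  bhanu|Mech|21000|           21000|
+---+-------+----+-----+----------------+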