PySpark StorageLevel

PySpark StorageLevel is used to decide how an RDD should be stored. It determines whether the RDD is serialized and whether its partitions are replicated. In Apache Spark, the storage level controls whether an RDD is kept in memory, stored on disk, or both. The class also exposes the commonly used storage levels as static constants, such as MEMORY_ONLY.

The following code block shows the class definition of StorageLevel (abridged from pyspark.storagelevel; the five constructor parameters map onto the flag tuples listed below):
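    class StorageLevel:
        """Flags for controlling the storage of an RDD."""

        def __init__(self, useDisk, useMemory, useOffHeap, deserialized, replication=1):
            self.useDisk = useDisk            # allow partitions to be written to disk
            self.useMemory = useMemory        # keep partitions in memory
            self.useOffHeap = useOffHeap      # store the data in off-heap memory
            self.deserialized = deserialized  # keep data as deserialized objects
            self.replication = replication    # number of nodes holding each partition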

Class Variables

PySpark provides the following StorageLevel class variables, each built from the flags (useDisk, useMemory, useOffHeap, deserialized, replication). Note that PySpark always stores data serialized, which is why the _SER variants are identical to their plain counterparts. A snippet after the list shows how to read these flags back:

  • DISK_ONLY: StorageLevel(True, False, False, False, 1)
  • DISK_ONLY_2: StorageLevel(True, False, False, False, 2)
  • MEMORY_AND_DISK: StorageLevel(True, True, False, False, 1)
  • MEMORY_AND_DISK_2: StorageLevel(True, True, False, False, 2)
  • MEMORY_AND_DISK_SER: StorageLevel(True, True, False, False, 1)
  • MEMORY_AND_DISK_SER_2: StorageLevel(True, True, False, False, 2)
  • MEMORY_ONLY: StorageLevel(False, True, False, False, 1)
  • MEMORY_ONLY_2: StorageLevel(False, True, False, False, 2)
  • MEMORY_ONLY_SER: StorageLevel(False, True, False, False, 1)
  • MEMORY_ONLY_SER_2: StorageLevel(False, True, False, False, 2)
  • OFF_HEAP: StorageLevel(True, True, True, False, 1)
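
A constant's flags can be read back from its attributes, as in this short sketch (it needs only the pyspark package, not a running cluster):

    from pyspark import StorageLevel

    level = StorageLevel.MEMORY_AND_DISK_2
    print(level.useDisk, level.useMemory, level.useOffHeap,
          level.deserialized, level.replication)
    # True True False False 2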

Instance Method

StorageLevel also defines the string-conversion instance methods __repr__ and __str__, which render a level's flags in a readable form.
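For instance, converting a level to a string produces the human-readable description used in the example output below (a minimal sketch):

    from pyspark import StorageLevel

    print(str(StorageLevel.MEMORY_AND_DISK_2))   # Disk Memory Serialized 2x Replicated
    print(repr(StorageLevel.MEMORY_AND_DISK_2))  # StorageLevel(True, True, False, False, 2)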

Example of PySpark StorageLevel

Here we use the storage level MEMORY_AND_DISK_2, which stores RDD partitions in memory and on disk, serialized, with each partition replicated on two cluster nodes. A sketch of the example follows (the local master URL and app name are illustrative):
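
    from pyspark import SparkContext, StorageLevel

    sc = SparkContext("local", "StorageLevel app")  # master and app name are illustrative
    rdd = sc.parallelize([1, 2])
    rdd.persist(StorageLevel.MEMORY_AND_DISK_2)     # persist with the chosen storage level
    print(rdd.getStorageLevel())                    # prints the output shown below
    sc.stop()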

Output:

Disk Memory Serialized 2x Replicated  
