#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous sequence of RDDs (of the same type) representing a continuous stream of data (see :class:`RDD` in the Spark core documentation for more details on RDDs).
DStreams can either be created from live data (such as, data from TCP sockets, etc.) using a :class:`StreamingContext` or it can be generated by transforming existing DStreams using operations such as `map`, `window` and `reduceByKeyAndWindow`. While a Spark Streaming program is running, each DStream periodically generates a RDD, either from live data or by transforming the RDD generated by a parent DStream.
DStreams internally is characterized by a few basic properties: - A list of other DStreams that the DStream depends on - A time interval at which the DStream generates an RDD - A function that is used to generate an RDD after each time interval """
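As a quick illustration of these ideas, here is a minimal sketch of building and transforming a DStream; the host, port, application name, and batch interval are arbitrary placeholders, not values prescribed by this module:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="DStreamExample")
    ssc = StreamingContext(sc, 1)  # 1-second batch interval

    # each batch of input becomes one RDD in the DStream
    lines = ssc.socketTextStream("localhost", 9999)  # placeholder host/port
    words = lines.flatMap(lambda line: line.split(" "))

    words.pprint()
    ssc.start()
    ssc.awaitTermination()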
""" Return the StreamingContext associated with this DStream """ return self._ssc
""" Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream. """
""" Return a new DStream containing only the elements that satisfy predicate. """
""" Return a new DStream by applying a function to all elements of this DStream, and then flattening the results """
""" Return a new DStream by applying a function to each element of DStream. """
""" Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDDs of this DStream. """
""" Return a new DStream in which each RDD is generated by applying mapPartitionsWithIndex() to each RDDs of this DStream. """
""" Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream. """
""" Return a new DStream by applying reduceByKey to each RDD. """
numPartitions=None): """ Return a new DStream by applying combineByKey to each RDD. """
""" Return a copy of the DStream in which each RDD are partitioned using the specified partitioner. """ return self.transform(lambda rdd: rdd.partitionBy(numPartitions, partitionFunc))
""" Apply a function to each RDD in this DStream. """
""" Print the first num elements of each RDD generated in this DStream.
Parameters ---------- num : int, optional the number of elements from the first will be printed. """ print("-------------------------------------------") print("Time: %s" % time) print("-------------------------------------------") for record in taken[:num]: print(record) if len(taken) > num: print("...") print("")
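Usage is a one-liner; for example, to print the first five records of each batch of the hypothetical `counts` stream above:

    counts.pprint(5)  # with no argument, the first 10 elements are printed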
""" Return a new DStream by applying a map function to the value of each key-value pairs in this DStream without changing the key. """
""" Return a new DStream by applying a flatmap function to the value of each key-value pairs in this DStream without changing the key. """
""" Return a new DStream in which RDD is generated by applying glom() to RDD of this DStream. """
""" Persist the RDDs of this DStream with the default storage level (`MEMORY_ONLY`). """ self.is_cached = True self.persist(StorageLevel.MEMORY_ONLY) return self
""" Persist the RDDs of this DStream with the given storage level """ self.is_cached = True javaStorageLevel = self._sc._getJavaStorageLevel(storageLevel) self._jdstream.persist(javaStorageLevel) return self
""" Enable periodic checkpointing of RDDs of this DStream
Parameters ---------- interval : int time in seconds, after each period of that, generated RDD will be checkpointed """
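A sketch of enabling checkpointing; the directory is a placeholder and would normally be a fault-tolerant path such as one on HDFS:

    ssc.checkpoint("/tmp/spark-checkpoint")  # placeholder checkpoint directory
    counts.checkpoint(10)  # checkpoint this DStream's RDDs every 10 seconds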
""" Return a new DStream by applying groupByKey on each RDD. """
""" Return a new DStream in which each RDD contains the counts of each distinct value in each RDD of this DStream. """
""" Save each RDD in this DStream as at text file, using string representation of elements. """ except Py4JJavaError as e: # after recovered from checkpointing, the foreachRDD may # be called twice if 'FileAlreadyExistsException' not in str(e): raise
# TODO: uncomment this once we have ssc.pickleFileStream()
# def saveAsPickleFiles(self, prefix, suffix=None):
#     """
#     Save each RDD in this DStream as a binary file, the elements are
#     serialized by pickle.
#     """
#     def saveAsPickleFile(t, rdd):
#         path = rddToFileName(prefix, suffix, t)
#         try:
#             rdd.saveAsPickleFile(path)
#         except Py4JJavaError as e:
#             # after recovering from checkpointing, the foreachRDD may
#             # be called twice
#             if 'FileAlreadyExistsException' not in str(e):
#                 raise
#     return self.foreachRDD(saveAsPickleFile)
""" Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream.
`func` can take one argument, `rdd`, or two arguments, (`time`, `rdd`) """
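For instance, `transform` gives access to arbitrary RDD operations that DStream does not expose directly; a sketch sorting each batch of the hypothetical `counts` stream by descending count:

    sorted_counts = counts.transform(lambda rdd: rdd.sortBy(lambda kv: -kv[1]))
    # the two-argument form also receives the batch time
    stamped = counts.transform(lambda time, rdd: rdd.map(lambda kv: (time, kv)))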
""" Return a new DStream in which each RDD is generated by applying a function on each RDD of this DStream and 'other' DStream.
`func` can take two arguments, (`rdd_a`, `rdd_b`), or three arguments, (`time`, `rdd_a`, `rdd_b`) """ other._jdstream.dstream(), jfunc)
""" Return a new DStream with an increased or decreased level of parallelism. """
def _slideDuration(self): """ Return the slideDuration in seconds of this DStream """
""" Return a new DStream by unifying data of another DStream with this DStream.
Parameters ---------- other : :class:`DStream` Another DStream having the same interval (i.e., slideDuration) as this DStream. """ raise ValueError("the two DStream should have same slide duration")
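A sketch of merging two streams; both come from the same StreamingContext and therefore share a slide duration (hosts and ports are placeholders):

    stream_a = ssc.socketTextStream("localhost", 9999)
    stream_b = ssc.socketTextStream("localhost", 9998)
    merged = stream_a.union(stream_b)  # elements of both streams, batch by batch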
""" Return a new DStream by applying 'cogroup' between RDDs of this DStream and `other` DStream.
Hash partitioning is used to generate the RDDs with `numPartitions` partitions. """
""" Return a new DStream by applying 'join' between RDDs of this DStream and `other` DStream.
Hash partitioning is used to generate the RDDs with `numPartitions` partitions. """
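A sketch of joining two key-value streams; the pair DStreams here are derived from the hypothetical `lines` stream purely for illustration:

    clicks = lines.map(lambda line: (line, "click"))
    views = lines.map(lambda line: (line, "view"))
    joined = clicks.join(views)  # (key, ("click", "view")) for keys present in both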
""" Return a new DStream by applying 'left outer join' between RDDs of this DStream and `other` DStream.
Hash partitioning is used to generate the RDDs with `numPartitions` partitions. """
""" Return a new DStream by applying 'right outer join' between RDDs of this DStream and `other` DStream.
Hash partitioning is used to generate the RDDs with `numPartitions` partitions. """
""" Return a new DStream by applying 'full outer join' between RDDs of this DStream and `other` DStream.
Hash partitioning is used to generate the RDDs with `numPartitions` partitions. """
""" Convert datetime or unix_timestamp into Time """
""" Return all the RDDs between 'begin' to 'end' (both included)
`begin`, `end` could be datetime.datetime() or unix_timestamp """
"dstream's slide (batch) duration (%d ms)" % duration) "dstream's slide (batch) duration (%d ms)" % duration)
""" Return a new DStream in which each RDD contains all the elements in seen in a sliding window of time over this DStream.
Parameters
----------
windowDuration : int
    width of the window; must be a multiple of this DStream's batching interval
slideDuration : int, optional
    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
"""
return DStream(self._jdstream.window(d), self._ssc, self._jrdd_deserializer)
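A sketch of windowing; the durations are placeholders and must be multiples of the batch interval (1 second in the earlier sketch):

    # all lines received in the last 30 seconds, recomputed every 10 seconds
    recent = lines.window(30, 10)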
""" Return a new DStream in which each RDD has a single element generated by reducing all elements in a sliding window over this DStream.
If `invReduceFunc` is not None, the reduction is done incrementally using the old window's reduced value:

1. reduce the new values that entered the window (e.g., adding new counts)

2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

This is more efficient than recomputing the whole window, which is what happens when `invReduceFunc` is None.
Parameters
----------
reduceFunc : function
    associative and commutative reduce function
invReduceFunc : function
    inverse reduce function of `reduceFunc`; such that for all y, and invertible x: `invReduceFunc(reduceFunc(x, y), x) = y`
windowDuration : int
    width of the window; must be a multiple of this DStream's batching interval
slideDuration : int
    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
"""
windowDuration, slideDuration, 1)
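A sketch of an incremental windowed sum over the hypothetical `lines` stream; providing `invReduceFunc` requires checkpointing to be enabled, and the durations are placeholders:

    nums = lines.map(lambda line: len(line))
    # add lengths entering the window, subtract lengths leaving it
    total = nums.reduceByWindow(lambda a, b: a + b, lambda a, b: a - b, 30, 10)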
""" Return a new DStream in which each RDD has a single element generated by counting the number of elements in a window over this DStream. windowDuration and slideDuration are as defined in the window() operation.
This is equivalent to window(windowDuration, slideDuration).count(), but will be more efficient if the window is large. """ windowDuration, slideDuration)
""" Return a new DStream in which each RDD contains the count of distinct elements in RDDs in a sliding window over this DStream.
Parameters
----------
windowDuration : int
    width of the window; must be a multiple of this DStream's batching interval
slideDuration : int
    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions : int, optional
    number of partitions of each RDD in the new DStream.
"""
windowDuration, slideDuration, numPartitions)
""" Return a new DStream by applying `groupByKey` over a sliding window. Similar to `DStream.groupByKey()`, but applies it over a sliding window.
Parameters
----------
windowDuration : int
    width of the window; must be a multiple of this DStream's batching interval
slideDuration : int
    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions : int, optional
    Number of partitions of each RDD in the new DStream.
"""
windowDuration, slideDuration, numPartitions)
numPartitions=None, filterFunc=None): """ Return a new DStream by applying incremental `reduceByKey` over a sliding window.

The reduced value over a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

`invFunc` can be None; in that case all the RDDs in the window are reduced from scratch, which can be slower than providing `invFunc`.
Parameters
----------
func : function
    associative and commutative reduce function
invFunc : function
    inverse function of `reduceFunc`
windowDuration : int
    width of the window; must be a multiple of this DStream's batching interval
slideDuration : int, optional
    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions : int, optional
    number of partitions of each RDD in the new DStream.
filterFunc : function, optional
    function to filter expired key-value pairs; only pairs that satisfy the function are retained; set this to None if you do not want to filter
"""
r = r.filter(filterFunc)
if kv[1] is not None else kv[0])
slideDuration = self._slideDuration
reduced._jdstream.dstream(), jreduceFunc, jinvReduceFunc,
self._ssc._jduration(windowDuration), self._ssc._jduration(slideDuration))
else:
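A sketch of an incremental windowed word count, assuming `words` as before; checkpointing must be enabled when `invFunc` is supplied, and all durations and paths are placeholders:

    ssc.checkpoint("/tmp/spark-checkpoint")
    pairs = words.map(lambda w: (w, 1))
    windowed = pairs.reduceByKeyAndWindow(
        lambda a, b: a + b,                # add counts entering the window
        lambda a, b: a - b,                # subtract counts leaving the window
        30, 10,                            # 30s window, sliding every 10s
        filterFunc=lambda kv: kv[1] > 0)   # drop keys whose count fell to zero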
""" Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
Parameters
----------
updateFunc : function
    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
"""
initialRDD = self._sc.parallelize(initialRDD)
else:
self._sc.serializer, self._jrdd_deserializer)
initialRDD._jrdd)
else:
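A sketch of a running count kept as per-key state, assuming `pairs` as in the word-count sketch; state tracking requires a checkpoint directory, and the path is a placeholder:

    def update(new_values, last_sum):
        # new_values holds this batch's values for the key;
        # returning None here would remove the key from the state
        return sum(new_values) + (last_sum or 0)

    ssc.checkpoint("/tmp/spark-checkpoint")
    running_counts = pairs.updateStateByKey(update)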
""" TransformedDStream is a DStream generated by an Python function transforming each RDD of a DStream to another RDDs.
Multiple continuous transformations of DStream can be combined into one transformation. """
# Using type() to avoid folding the functions and compacting the DStreams, which is
# not strictly an object of TransformedDStream.
not prev.is_cached and not prev.is_checkpointed):
else:
def _jdstream(self):