# 

# Licensed to the Apache Software Foundation (ASF) under one or more 

# contributor license agreements. See the NOTICE file distributed with 

# this work for additional information regarding copyright ownership. 

# The ASF licenses this file to You under the Apache License, Version 2.0 

# (the "License"); you may not use this file except in compliance with 

# the License. You may obtain a copy of the License at 

# 

# http://www.apache.org/licenses/LICENSE-2.0 

# 

# Unless required by applicable law or agreed to in writing, software 

# distributed under the License is distributed on an "AS IS" BASIS, 

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 

# See the License for the specific language governing permissions and 

# limitations under the License. 

# 

 

import sys 

import array as pyarray 

from math import exp, log 

from collections import namedtuple 

 

from numpy import array, random, tile 

 

from pyspark import SparkContext, since 

from pyspark.rdd import RDD 

from pyspark.mllib.common import JavaModelWrapper, callMLlibFunc, callJavaFunc, _py2java, _java2py 

from pyspark.mllib.linalg import SparseVector, _convert_to_vector, DenseVector # noqa: F401 

from pyspark.mllib.stat.distribution import MultivariateGaussian 

from pyspark.mllib.util import Saveable, Loader, inherit_doc, JavaLoader, JavaSaveable 

from pyspark.streaming import DStream 

 

__all__ = ['BisectingKMeansModel', 'BisectingKMeans', 'KMeansModel', 'KMeans', 

'GaussianMixtureModel', 'GaussianMixture', 'PowerIterationClusteringModel', 

'PowerIterationClustering', 'StreamingKMeans', 'StreamingKMeansModel', 

'LDA', 'LDAModel'] 

 

 

@inherit_doc 

class BisectingKMeansModel(JavaModelWrapper): 

""" 

A clustering model derived from the bisecting k-means method. 

 

.. versionadded:: 2.0.0 

 

Examples 

-------- 

>>> data = array([0.0,0.0, 1.0,1.0, 9.0,8.0, 8.0,9.0]).reshape(4, 2) 

>>> bskm = BisectingKMeans() 

>>> model = bskm.train(sc.parallelize(data, 2), k=4) 

>>> p = array([0.0, 0.0]) 

>>> model.predict(p) 

0 

>>> model.k 

4 

>>> model.computeCost(p) 

0.0 

""" 

 

def __init__(self, java_model): 

super(BisectingKMeansModel, self).__init__(java_model) 

self.centers = [c.toArray() for c in self.call("clusterCenters")] 

 

@property 

@since('2.0.0') 

def clusterCenters(self): 

"""Get the cluster centers, represented as a list of NumPy 

arrays.""" 

return self.centers 

 

@property 

@since('2.0.0') 

def k(self): 

"""Get the number of clusters""" 

return self.call("k") 

 

def predict(self, x): 

""" 

Find the cluster that each of the points belongs to in this 

model. 

 

.. versionadded:: 2.0.0 

 

Parameters 

---------- 

x : :py:class:`pyspark.mllib.linalg.Vector` or :py:class:`pyspark.RDD` 

A data point (or RDD of points) to determine cluster index. 

:py:class:`pyspark.mllib.linalg.Vector` can be replaced with equivalent 

objects (list, tuple, numpy.ndarray). 

 

Returns 

------- 

int or :py:class:`pyspark.RDD` of int 

Predicted cluster index or an RDD of predicted cluster indices 

if the input is an RDD. 

""" 

if isinstance(x, RDD): 

vecs = x.map(_convert_to_vector) 

return self.call("predict", vecs) 

 

x = _convert_to_vector(x) 

return self.call("predict", x) 

 

def computeCost(self, x): 

""" 

Return the Bisecting K-means cost (sum of squared distances of 

points to their nearest center) for this model on the given 

        data. If given an RDD of points, returns the summed cost.

 

.. versionadded:: 2.0.0 

 

Parameters 

---------- 

        x : :py:class:`pyspark.mllib.linalg.Vector` or :py:class:`pyspark.RDD`

A data point (or RDD of points) to compute the cost(s). 

:py:class:`pyspark.mllib.linalg.Vector` can be replaced with equivalent 

objects (list, tuple, numpy.ndarray). 

""" 

if isinstance(x, RDD): 

vecs = x.map(_convert_to_vector) 

return self.call("computeCost", vecs) 

 

return self.call("computeCost", _convert_to_vector(x)) 

 

 

class BisectingKMeans(object): 

""" 

A bisecting k-means algorithm based on the paper "A comparison of 

document clustering techniques" by Steinbach, Karypis, and Kumar, 

    with modifications to fit Spark.

The algorithm starts from a single cluster that contains all points. 

Iteratively it finds divisible clusters on the bottom level and 

bisects each of them using k-means, until there are `k` leaf 

clusters in total or no leaf clusters are divisible. 

The bisecting steps of clusters on the same level are grouped 

together to increase parallelism. If bisecting all divisible 

    clusters on the bottom level would result in more than `k` leaf

clusters, larger clusters get higher priority. 

 

.. versionadded:: 2.0.0 

 

Notes 

----- 

See the original paper [1]_ 

 

.. [1] Steinbach, M. et al. "A Comparison of Document Clustering Techniques." (2000). 

KDD Workshop on Text Mining, 2000 

http://glaros.dtc.umn.edu/gkhome/fetch/papers/docclusterKDDTMW00.pdf 

""" 

 

@classmethod 

    def train(cls, rdd, k=4, maxIterations=20, minDivisibleClusterSize=1.0, seed=-1888008604):

""" 

        Runs the bisecting k-means algorithm and returns the model.

 

.. versionadded:: 2.0.0 

 

Parameters 

---------- 

rdd : :py:class:`pyspark.RDD` 

Training points as an `RDD` of `Vector` or convertible 

sequence types. 

k : int, optional 

The desired number of leaf clusters. The actual number could 

be smaller if there are no divisible leaf clusters. 

(default: 4) 

maxIterations : int, optional 

Maximum number of iterations allowed to split clusters. 

(default: 20) 

minDivisibleClusterSize : float, optional 

Minimum number of points (if >= 1.0) or the minimum proportion 

of points (if < 1.0) of a divisible cluster. 

(default: 1) 

seed : int, optional 

Random seed value for cluster initialization. 

(default: -1888008604 from classOf[BisectingKMeans].getName.##) 

""" 

java_model = callMLlibFunc( 

"trainBisectingKMeans", rdd.map(_convert_to_vector), 

k, maxIterations, minDivisibleClusterSize, seed) 

return BisectingKMeansModel(java_model) 
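

# A minimal usage sketch, not part of the library API: driving the class above
# end to end. It assumes an active SparkContext named `sc`; the helper name
# `_example_bisecting_kmeans` is illustrative only.
def _example_bisecting_kmeans(sc):
    # Four 2-D points forming two well-separated groups.
    data = [[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]]
    model = BisectingKMeans.train(sc.parallelize(data, 2), k=2)
    # predict accepts a single point or an RDD of points; computeCost sums
    # squared distances to the nearest centers.
    return model.predict([0.0, 0.0]), model.computeCost(sc.parallelize(data))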

 

 

@inherit_doc 

class KMeansModel(Saveable, Loader): 

 

"""A clustering model derived from the k-means method. 

 

.. versionadded:: 0.9.0 

 

Examples 

-------- 

>>> data = array([0.0,0.0, 1.0,1.0, 9.0,8.0, 8.0,9.0]).reshape(4, 2) 

>>> model = KMeans.train( 

... sc.parallelize(data), 2, maxIterations=10, initializationMode="random", 

... seed=50, initializationSteps=5, epsilon=1e-4) 

>>> model.predict(array([0.0, 0.0])) == model.predict(array([1.0, 1.0])) 

True 

>>> model.predict(array([8.0, 9.0])) == model.predict(array([9.0, 8.0])) 

True 

>>> model.k 

2 

>>> model.computeCost(sc.parallelize(data)) 

2.0 

>>> model = KMeans.train(sc.parallelize(data), 2) 

>>> sparse_data = [ 

... SparseVector(3, {1: 1.0}), 

... SparseVector(3, {1: 1.1}), 

... SparseVector(3, {2: 1.0}), 

... SparseVector(3, {2: 1.1}) 

... ] 

>>> model = KMeans.train(sc.parallelize(sparse_data), 2, initializationMode="k-means||", 

... seed=50, initializationSteps=5, epsilon=1e-4) 

>>> model.predict(array([0., 1., 0.])) == model.predict(array([0, 1.1, 0.])) 

True 

>>> model.predict(array([0., 0., 1.])) == model.predict(array([0, 0, 1.1])) 

True 

>>> model.predict(sparse_data[0]) == model.predict(sparse_data[1]) 

True 

>>> model.predict(sparse_data[2]) == model.predict(sparse_data[3]) 

True 

>>> isinstance(model.clusterCenters, list) 

True 

>>> import os, tempfile 

>>> path = tempfile.mkdtemp() 

>>> model.save(sc, path) 

>>> sameModel = KMeansModel.load(sc, path) 

>>> sameModel.predict(sparse_data[0]) == model.predict(sparse_data[0]) 

True 

>>> from shutil import rmtree 

>>> try: 

... rmtree(path) 

... except OSError: 

... pass 

 

>>> data = array([-383.1,-382.9, 28.7,31.2, 366.2,367.3]).reshape(3, 2) 

>>> model = KMeans.train(sc.parallelize(data), 3, maxIterations=0, 

... initialModel = KMeansModel([(-1000.0,-1000.0),(5.0,5.0),(1000.0,1000.0)])) 

>>> model.clusterCenters 

[array([-1000., -1000.]), array([ 5., 5.]), array([ 1000., 1000.])] 

""" 

 

def __init__(self, centers): 

self.centers = centers 

 

@property 

@since('1.0.0') 

def clusterCenters(self): 

"""Get the cluster centers, represented as a list of NumPy arrays.""" 

return self.centers 

 

@property 

@since('1.4.0') 

def k(self): 

"""Total number of clusters.""" 

return len(self.centers) 

 

def predict(self, x): 

""" 

Find the cluster that each of the points belongs to in this 

model. 

 

.. versionadded:: 0.9.0 

 

Parameters 

---------- 

x : :py:class:`pyspark.mllib.linalg.Vector` or :py:class:`pyspark.RDD` 

A data point (or RDD of points) to determine cluster index. 

:py:class:`pyspark.mllib.linalg.Vector` can be replaced with equivalent 

objects (list, tuple, numpy.ndarray). 

 

Returns 

------- 

int or :py:class:`pyspark.RDD` of int 

Predicted cluster index or an RDD of predicted cluster indices 

if the input is an RDD. 

""" 

best = 0 

best_distance = float("inf") 

        if isinstance(x, RDD):

return x.map(self.predict) 

 

x = _convert_to_vector(x) 

for i in range(len(self.centers)): 

distance = x.squared_distance(self.centers[i]) 

if distance < best_distance: 

best = i 

best_distance = distance 

return best 

 

def computeCost(self, rdd): 

""" 

Return the K-means cost (sum of squared distances of points to 

their nearest center) for this model on the given 

data. 

 

.. versionadded:: 1.4.0 

 

Parameters 

---------- 

        rdd : :py:class:`pyspark.RDD`

The RDD of points to compute the cost on. 

""" 

cost = callMLlibFunc("computeCostKmeansModel", rdd.map(_convert_to_vector), 

[_convert_to_vector(c) for c in self.centers]) 

return cost 

 

@since('1.4.0') 

def save(self, sc, path): 

""" 

Save this model to the given path. 

""" 

java_centers = _py2java(sc, [_convert_to_vector(c) for c in self.centers]) 

java_model = sc._jvm.org.apache.spark.mllib.clustering.KMeansModel(java_centers) 

java_model.save(sc._jsc.sc(), path) 

 

@classmethod 

@since('1.4.0') 

def load(cls, sc, path): 

""" 

Load a model from the given path. 

""" 

java_model = sc._jvm.org.apache.spark.mllib.clustering.KMeansModel.load(sc._jsc.sc(), path) 

return KMeansModel(_java2py(sc, java_model.clusterCenters())) 

 

 

class KMeans(object): 

""" 

K-means clustering. 

 

.. versionadded:: 0.9.0 

""" 

 

@classmethod 

def train(cls, rdd, k, maxIterations=100, initializationMode="k-means||", 

seed=None, initializationSteps=2, epsilon=1e-4, initialModel=None): 

""" 

Train a k-means clustering model. 

 

.. versionadded:: 0.9.0 

 

Parameters 

---------- 

        rdd : :py:class:`pyspark.RDD`

Training points as an `RDD` of :py:class:`pyspark.mllib.linalg.Vector` 

or convertible sequence types. 

k : int 

Number of clusters to create. 

maxIterations : int, optional 

Maximum number of iterations allowed. 

(default: 100) 

initializationMode : str, optional 

The initialization algorithm. This can be either "random" or 

"k-means||". 

(default: "k-means||") 

seed : int, optional 

Random seed value for cluster initialization. Set as None to 

generate seed based on system time. 

(default: None) 

        initializationSteps : int, optional

Number of steps for the k-means|| initialization mode. 

This is an advanced setting -- the default of 2 is almost 

always enough. 

(default: 2) 

epsilon : float, optional 

Distance threshold within which a center will be considered to 

have converged. If all centers move less than this Euclidean 

distance, iterations are stopped. 

(default: 1e-4) 

initialModel : :py:class:`KMeansModel`, optional 

Initial cluster centers can be provided as a KMeansModel object 

            rather than using the random or k-means|| initialization mode.

(default: None) 

""" 

clusterInitialModel = [] 

if initialModel is not None: 

            if not isinstance(initialModel, KMeansModel):

raise TypeError("initialModel is of " + str(type(initialModel)) + ". It needs " 

"to be of <type 'KMeansModel'>") 

clusterInitialModel = [_convert_to_vector(c) for c in initialModel.clusterCenters] 

model = callMLlibFunc("trainKMeansModel", rdd.map(_convert_to_vector), k, maxIterations, 

initializationMode, seed, initializationSteps, epsilon, 

clusterInitialModel) 

centers = callJavaFunc(rdd.context, model.clusterCenters) 

return KMeansModel([c.toArray() for c in centers]) 
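

# Illustrative sketch (assumes an active SparkContext `sc`): warm-starting
# KMeans.train with an explicit initialModel, per the parameter docs above.
# The helper name is hypothetical, not part of this module's API.
def _example_kmeans_warm_start(sc):
    data = [[-383.1, -382.9], [28.7, 31.2], [366.2, 367.3]]
    init = KMeansModel([(-1000.0, -1000.0), (5.0, 5.0), (1000.0, 1000.0)])
    # maxIterations=0 keeps the supplied centers as-is; raise it to let
    # Lloyd's iterations refine them.
    model = KMeans.train(sc.parallelize(data), 3, maxIterations=0,
                         initialModel=init)
    return model.clusterCenters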

 

 

@inherit_doc 

class GaussianMixtureModel(JavaModelWrapper, JavaSaveable, JavaLoader): 

 

""" 

A clustering model derived from the Gaussian Mixture Model method. 

 

.. versionadded:: 1.3.0 

 

Examples 

-------- 

>>> from pyspark.mllib.linalg import Vectors, DenseMatrix 

>>> from numpy.testing import assert_equal 

>>> from shutil import rmtree 

>>> import os, tempfile 

 

>>> clusterdata_1 = sc.parallelize(array([-0.1,-0.05,-0.01,-0.1, 

... 0.9,0.8,0.75,0.935, 

... -0.83,-0.68,-0.91,-0.76 ]).reshape(6, 2), 2) 

>>> model = GaussianMixture.train(clusterdata_1, 3, convergenceTol=0.0001, 

... maxIterations=50, seed=10) 

>>> labels = model.predict(clusterdata_1).collect() 

>>> labels[0]==labels[1] 

False 

>>> labels[1]==labels[2] 

False 

>>> labels[4]==labels[5] 

True 

>>> model.predict([-0.1,-0.05]) 

0 

>>> softPredicted = model.predictSoft([-0.1,-0.05]) 

>>> abs(softPredicted[0] - 1.0) < 0.03 

True 

>>> abs(softPredicted[1] - 0.0) < 0.03 

True 

>>> abs(softPredicted[2] - 0.0) < 0.03 

True 

 

>>> path = tempfile.mkdtemp() 

>>> model.save(sc, path) 

>>> sameModel = GaussianMixtureModel.load(sc, path) 

>>> assert_equal(model.weights, sameModel.weights) 

>>> mus, sigmas = list( 

... zip(*[(g.mu, g.sigma) for g in model.gaussians])) 

>>> sameMus, sameSigmas = list( 

... zip(*[(g.mu, g.sigma) for g in sameModel.gaussians])) 

>>> mus == sameMus 

True 

>>> sigmas == sameSigmas 

True 

>>> from shutil import rmtree 

>>> try: 

... rmtree(path) 

... except OSError: 

... pass 

 

>>> data = array([-5.1971, -2.5359, -3.8220, 

... -5.2211, -5.0602, 4.7118, 

... 6.8989, 3.4592, 4.6322, 

... 5.7048, 4.6567, 5.5026, 

... 4.5605, 5.2043, 6.2734]) 

>>> clusterdata_2 = sc.parallelize(data.reshape(5,3)) 

>>> model = GaussianMixture.train(clusterdata_2, 2, convergenceTol=0.0001, 

... maxIterations=150, seed=4) 

>>> labels = model.predict(clusterdata_2).collect() 

>>> labels[0]==labels[1] 

True 

>>> labels[2]==labels[3]==labels[4] 

True 

""" 

 

@property 

@since('1.4.0') 

def weights(self): 

""" 

Weights for each Gaussian distribution in the mixture, where weights[i] is 

the weight for Gaussian i, and weights.sum == 1. 

""" 

return array(self.call("weights")) 

 

@property 

@since('1.4.0') 

def gaussians(self): 

""" 

Array of MultivariateGaussian where gaussians[i] represents 

the Multivariate Gaussian (Normal) Distribution for Gaussian i. 

""" 

return [ 

MultivariateGaussian(gaussian[0], gaussian[1]) 

for gaussian in self.call("gaussians")] 

 

@property 

@since('1.4.0') 

def k(self): 

"""Number of gaussians in mixture.""" 

return len(self.weights) 

 

def predict(self, x): 

""" 

Find the cluster to which the point 'x' or each point in RDD 'x' 

has maximum membership in this model. 

 

.. versionadded:: 1.3.0 

 

Parameters 

---------- 

x : :py:class:`pyspark.mllib.linalg.Vector` or :py:class:`pyspark.RDD` 

A feature vector or an RDD of vectors representing data points. 

 

Returns 

------- 

numpy.float64 or :py:class:`pyspark.RDD` of int 

Predicted cluster label or an RDD of predicted cluster labels 

if the input is an RDD. 

""" 

if isinstance(x, RDD): 

cluster_labels = self.predictSoft(x).map(lambda z: z.index(max(z))) 

return cluster_labels 

else: 

z = self.predictSoft(x) 

return z.argmax() 

 

def predictSoft(self, x): 

""" 

Find the membership of point 'x' or each point in RDD 'x' to all mixture components. 

 

.. versionadded:: 1.3.0 

 

Parameters 

---------- 

x : :py:class:`pyspark.mllib.linalg.Vector` or :py:class:`pyspark.RDD` 

A feature vector or an RDD of vectors representing data points. 

 

Returns 

------- 

numpy.ndarray or :py:class:`pyspark.RDD` 

The membership value to all mixture components for vector 'x' 

or each vector in RDD 'x'. 

""" 

if isinstance(x, RDD): 

means, sigmas = zip(*[(g.mu, g.sigma) for g in self.gaussians]) 

membership_matrix = callMLlibFunc("predictSoftGMM", x.map(_convert_to_vector), 

_convert_to_vector(self.weights), means, sigmas) 

return membership_matrix.map(lambda x: pyarray.array('d', x)) 

else: 

return self.call("predictSoft", _convert_to_vector(x)).toArray() 

 

@classmethod 

def load(cls, sc, path): 

"""Load the GaussianMixtureModel from disk. 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

sc : :py:class:`SparkContext` 

path : str 

Path to where the model is stored. 

""" 

model = cls._load_java(sc, path) 

wrapper = sc._jvm.org.apache.spark.mllib.api.python.GaussianMixtureModelWrapper(model) 

return cls(wrapper) 

 

 

class GaussianMixture(object): 

""" 

Learning algorithm for Gaussian Mixtures using the expectation-maximization algorithm. 

 

.. versionadded:: 1.3.0 

""" 

 

@classmethod 

def train(cls, rdd, k, convergenceTol=1e-3, maxIterations=100, seed=None, initialModel=None): 

""" 

Train a Gaussian Mixture clustering model. 

 

.. versionadded:: 1.3.0 

 

Parameters 

---------- 

        rdd : :py:class:`pyspark.RDD`

Training points as an `RDD` of :py:class:`pyspark.mllib.linalg.Vector` 

or convertible sequence types. 

k : int 

Number of independent Gaussians in the mixture model. 

convergenceTol : float, optional 

Maximum change in log-likelihood at which convergence is 

considered to have occurred. 

(default: 1e-3) 

maxIterations : int, optional 

Maximum number of iterations allowed. 

(default: 100) 

seed : int, optional 

Random seed for initial Gaussian distribution. Set as None to 

generate seed based on system time. 

(default: None) 

initialModel : GaussianMixtureModel, optional 

Initial GMM starting point, bypassing the random 

initialization. 

(default: None) 

""" 

initialModelWeights = None 

initialModelMu = None 

initialModelSigma = None 

if initialModel is not None: 

            if initialModel.k != k:

raise ValueError("Mismatched cluster count, initialModel.k = %s, however k = %s" 

% (initialModel.k, k)) 

initialModelWeights = list(initialModel.weights) 

initialModelMu = [initialModel.gaussians[i].mu for i in range(initialModel.k)] 

initialModelSigma = [initialModel.gaussians[i].sigma for i in range(initialModel.k)] 

java_model = callMLlibFunc("trainGaussianMixtureModel", rdd.map(_convert_to_vector), 

k, convergenceTol, maxIterations, seed, 

initialModelWeights, initialModelMu, initialModelSigma) 

return GaussianMixtureModel(java_model) 
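

# Sketch only (assumes an active SparkContext `sc`): training a mixture and
# reading both hard and soft assignments. The helper name is illustrative.
def _example_gaussian_mixture(sc):
    points = sc.parallelize([[-0.1, -0.05], [0.9, 0.8], [-0.83, -0.68]])
    model = GaussianMixture.train(points, 2, seed=10)
    hard = model.predict([-0.1, -0.05])      # index of the best component
    soft = model.predictSoft([-0.1, -0.05])  # membership per component
    return hard, soft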

 

 

class PowerIterationClusteringModel(JavaModelWrapper, JavaSaveable, JavaLoader): 

 

""" 

Model produced by :py:class:`PowerIterationClustering`. 

 

.. versionadded:: 1.5.0 

 

Examples 

-------- 

>>> import math 

>>> def genCircle(r, n): 

... points = [] 

... for i in range(0, n): 

... theta = 2.0 * math.pi * i / n 

... points.append((r * math.cos(theta), r * math.sin(theta))) 

... return points 

>>> def sim(x, y): 

... dist2 = (x[0] - y[0]) * (x[0] - y[0]) + (x[1] - y[1]) * (x[1] - y[1]) 

... return math.exp(-dist2 / 2.0) 

>>> r1 = 1.0 

>>> n1 = 10 

>>> r2 = 4.0 

>>> n2 = 40 

>>> n = n1 + n2 

>>> points = genCircle(r1, n1) + genCircle(r2, n2) 

>>> similarities = [(i, j, sim(points[i], points[j])) for i in range(1, n) for j in range(0, i)] 

>>> rdd = sc.parallelize(similarities, 2) 

>>> model = PowerIterationClustering.train(rdd, 2, 40) 

>>> model.k 

2 

>>> result = sorted(model.assignments().collect(), key=lambda x: x.id) 

>>> result[0].cluster == result[1].cluster == result[2].cluster == result[3].cluster 

True 

>>> result[4].cluster == result[5].cluster == result[6].cluster == result[7].cluster 

True 

>>> import os, tempfile 

>>> path = tempfile.mkdtemp() 

>>> model.save(sc, path) 

>>> sameModel = PowerIterationClusteringModel.load(sc, path) 

>>> sameModel.k 

2 

>>> result = sorted(model.assignments().collect(), key=lambda x: x.id) 

>>> result[0].cluster == result[1].cluster == result[2].cluster == result[3].cluster 

True 

>>> result[4].cluster == result[5].cluster == result[6].cluster == result[7].cluster 

True 

>>> from shutil import rmtree 

>>> try: 

... rmtree(path) 

... except OSError: 

... pass 

""" 

 

@property 

@since('1.5.0') 

def k(self): 

""" 

Returns the number of clusters. 

""" 

return self.call("k") 

 

@since('1.5.0') 

def assignments(self): 

""" 

Returns the cluster assignments of this model. 

""" 

return self.call("getAssignments").map( 

lambda x: (PowerIterationClustering.Assignment(*x))) 

 

@classmethod 

@since('1.5.0') 

def load(cls, sc, path): 

""" 

Load a model from the given path. 

""" 

model = cls._load_java(sc, path) 

wrapper =\ 

sc._jvm.org.apache.spark.mllib.api.python.PowerIterationClusteringModelWrapper(model) 

return PowerIterationClusteringModel(wrapper) 

 

 

class PowerIterationClustering(object): 

""" 

Power Iteration Clustering (PIC), a scalable graph clustering algorithm. 

 

 

Developed by Lin and Cohen [1]_. From the abstract: 

 

"PIC finds a very low-dimensional embedding of a 

dataset using truncated power iteration on a normalized pair-wise 

similarity matrix of the data." 

 

.. versionadded:: 1.5.0 

 

.. [1] Lin, Frank & Cohen, William. (2010). Power Iteration Clustering. 

http://www.cs.cmu.edu/~frank/papers/icml2010-pic-final.pdf 

""" 

 

@classmethod 

def train(cls, rdd, k, maxIterations=100, initMode="random"): 

r""" 

Train PowerIterationClusteringModel 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

rdd : :py:class:`pyspark.RDD` 

An RDD of (i, j, s\ :sub:`ij`\) tuples representing the 

affinity matrix, which is the matrix A in the PIC paper. The 

similarity s\ :sub:`ij`\ must be nonnegative. This is a symmetric 

            matrix and hence s\ :sub:`ij`\ = s\ :sub:`ji`\. For any (i, j) with

nonzero similarity, there should be either (i, j, s\ :sub:`ij`\) or 

(j, i, s\ :sub:`ji`\) in the input. Tuples with i = j are ignored, 

because it is assumed s\ :sub:`ij`\ = 0.0. 

k : int 

Number of clusters. 

maxIterations : int, optional 

Maximum number of iterations of the PIC algorithm. 

(default: 100) 

initMode : str, optional 

Initialization mode. This can be either "random" to use 

a random vector as vertex properties, or "degree" to use 

normalized sum similarities. 

(default: "random") 

""" 

model = callMLlibFunc("trainPowerIterationClusteringModel", 

rdd.map(_convert_to_vector), int(k), int(maxIterations), initMode) 

return PowerIterationClusteringModel(model) 

 

class Assignment(namedtuple("Assignment", ["id", "cluster"])): 

""" 

Represents an (id, cluster) tuple. 

 

.. versionadded:: 1.5.0 

""" 

 

 

class StreamingKMeansModel(KMeansModel): 

""" 

Clustering model which can perform an online update of the centroids. 

 

The update formula for each centroid is given by 

 

    - c_t+1 = ((c_t * n_t * a) + (x_t * m_t)) / (n_t * a + m_t)

- n_t+1 = n_t * a + m_t 

 

where 

 

    - c_t: Centroid at the t-th iteration.

    - n_t: Number of samples (or total weight) associated with the centroid

      at the t-th iteration.

- x_t: Centroid of the new data closest to c_t. 

    - m_t: Number of samples (or total weight) of the new data closest to c_t.

    - c_t+1: New centroid.

    - n_t+1: New total weight.

    - a: Decay factor, which controls the forgetfulness.

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

    clusterCenters : list of :py:class:`pyspark.mllib.linalg.Vector` or convertible

        Initial cluster centers.

    clusterWeights : :py:class:`pyspark.mllib.linalg.Vector` or convertible

List of weights assigned to each cluster. 

 

Notes 

----- 

If a is set to 1, it is the weighted mean of the previous 

    and new data. If it is set to zero, the old centroids are completely

forgotten. 

 

Examples 

-------- 

>>> initCenters = [[0.0, 0.0], [1.0, 1.0]] 

>>> initWeights = [1.0, 1.0] 

>>> stkm = StreamingKMeansModel(initCenters, initWeights) 

>>> data = sc.parallelize([[-0.1, -0.1], [0.1, 0.1], 

... [0.9, 0.9], [1.1, 1.1]]) 

>>> stkm = stkm.update(data, 1.0, "batches") 

>>> stkm.centers 

array([[ 0., 0.], 

[ 1., 1.]]) 

>>> stkm.predict([-0.1, -0.1]) 

0 

>>> stkm.predict([0.9, 0.9]) 

1 

>>> stkm.clusterWeights 

[3.0, 3.0] 

>>> decayFactor = 0.0 

>>> data = sc.parallelize([DenseVector([1.5, 1.5]), DenseVector([0.2, 0.2])]) 

>>> stkm = stkm.update(data, 0.0, "batches") 

>>> stkm.centers 

array([[ 0.2, 0.2], 

[ 1.5, 1.5]]) 

>>> stkm.clusterWeights 

[1.0, 1.0] 

>>> stkm.predict([0.2, 0.2]) 

0 

>>> stkm.predict([1.5, 1.5]) 

1 

""" 

def __init__(self, clusterCenters, clusterWeights): 

super(StreamingKMeansModel, self).__init__(centers=clusterCenters) 

self._clusterWeights = list(clusterWeights) 

 

@property 

@since('1.5.0') 

def clusterWeights(self): 

"""Return the cluster weights.""" 

return self._clusterWeights 

 

@since('1.5.0') 

def update(self, data, decayFactor, timeUnit): 

"""Update the centroids, according to data 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

data : :py:class:`pyspark.RDD` 

RDD with new data for the model update. 

decayFactor : float 

Forgetfulness of the previous centroids. 

timeUnit : str 

Can be "batches" or "points". If points, then the decay factor 

is raised to the power of number of new points and if batches, 

then decay factor will be used as is. 

""" 

        if not isinstance(data, RDD):

raise TypeError("Data should be of an RDD, got %s." % type(data)) 

data = data.map(_convert_to_vector) 

decayFactor = float(decayFactor) 

        if timeUnit not in ["batches", "points"]:

raise ValueError( 

"timeUnit should be 'batches' or 'points', got %s." % timeUnit) 

vectorCenters = [_convert_to_vector(center) for center in self.centers] 

updatedModel = callMLlibFunc( 

"updateStreamingKMeansModel", vectorCenters, self._clusterWeights, 

data, decayFactor, timeUnit) 

self.centers = array(updatedModel[0]) 

self._clusterWeights = list(updatedModel[1]) 

return self 
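

# Pure-Python illustration (no Spark needed) of the update rule documented on
# StreamingKMeansModel; the numbers are made up for the example.
def _example_streaming_update_rule():
    a, c_t, n_t, x_t, m_t = 0.5, 0.0, 10.0, 2.0, 2.0
    n_next = n_t * a + m_t                         # 10*0.5 + 2 = 7.0
    c_next = (c_t * n_t * a + x_t * m_t) / n_next  # (0 + 4) / 7 ~= 0.571
    # The center moves toward the new data in proportion to how much old
    # weight the decay factor a discards.
    return c_next, n_next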

 

 

class StreamingKMeans(object): 

""" 

Provides methods to set k, decayFactor, timeUnit to configure the 

KMeans algorithm for fitting and predicting on incoming dstreams. 

More details on how the centroids are updated are provided under the 

docs of StreamingKMeansModel. 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

k : int, optional 

Number of clusters. 

(default: 2) 

decayFactor : float, optional 

Forgetfulness of the previous centroids. 

(default: 1.0) 

timeUnit : str, optional 

Can be "batches" or "points". If points, then the decay factor is 

raised to the power of number of new points and if batches, then 

decay factor will be used as is. 

(default: "batches") 

""" 

def __init__(self, k=2, decayFactor=1.0, timeUnit="batches"): 

self._k = k 

self._decayFactor = decayFactor 

        if timeUnit not in ["batches", "points"]:

raise ValueError( 

"timeUnit should be 'batches' or 'points', got %s." % timeUnit) 

self._timeUnit = timeUnit 

self._model = None 

 

@since('1.5.0') 

def latestModel(self): 

"""Return the latest model""" 

return self._model 

 

def _validate(self, dstream): 

if self._model is None: 

raise ValueError( 

"Initial centers should be set either by setInitialCenters " 

"or setRandomCenters.") 

        if not isinstance(dstream, DStream):

raise TypeError( 

"Expected dstream to be of type DStream, " 

"got type %s" % type(dstream)) 

 

@since('1.5.0') 

def setK(self, k): 

"""Set number of clusters.""" 

self._k = k 

return self 

 

@since('1.5.0') 

def setDecayFactor(self, decayFactor): 

"""Set decay factor.""" 

self._decayFactor = decayFactor 

return self 

 

@since('1.5.0') 

def setHalfLife(self, halfLife, timeUnit): 

""" 

        Set the number of batches (or points) after which the centroids

        of a given batch carry half their original weight.

""" 

self._timeUnit = timeUnit 

self._decayFactor = exp(log(0.5) / halfLife) 

return self 

 

@since('1.5.0') 

def setInitialCenters(self, centers, weights): 

""" 

Set initial centers. Should be set before calling trainOn. 

""" 

self._model = StreamingKMeansModel(centers, weights) 

return self 

 

@since('1.5.0') 

def setRandomCenters(self, dim, weight, seed): 

""" 

Set the initial centers to be random samples from 

        a Gaussian population with constant weights.

""" 

rng = random.RandomState(seed) 

clusterCenters = rng.randn(self._k, dim) 

clusterWeights = tile(weight, self._k) 

self._model = StreamingKMeansModel(clusterCenters, clusterWeights) 

return self 

 

@since('1.5.0') 

def trainOn(self, dstream): 

"""Train the model on the incoming dstream.""" 

self._validate(dstream) 

 

def update(rdd): 

self._model.update(rdd, self._decayFactor, self._timeUnit) 

 

dstream.foreachRDD(update) 

 

@since('1.5.0') 

def predictOn(self, dstream): 

""" 

Make predictions on a dstream. 

        Returns a transformed dstream object.

""" 

self._validate(dstream) 

return dstream.map(lambda x: self._model.predict(x)) 

 

@since('1.5.0') 

def predictOnValues(self, dstream): 

""" 

Make predictions on a keyed dstream. 

Returns a transformed dstream object. 

""" 

self._validate(dstream) 

return dstream.mapValues(lambda x: self._model.predict(x)) 
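

# Minimal wiring sketch (illustrative only): combining trainOn and predictOn
# over DStreams. The caller is assumed to supply DStreams built from a
# StreamingContext; nothing here runs on import.
def _example_streaming_kmeans(training_dstream, test_dstream):
    stkm = StreamingKMeans(k=2, decayFactor=1.0).setRandomCenters(2, 1.0, 0)
    stkm.trainOn(training_dstream)       # updates the model on every batch
    return stkm.predictOn(test_dstream)  # DStream of predicted cluster indices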

 

 

class LDAModel(JavaModelWrapper, JavaSaveable, Loader): 

 

""" A clustering model derived from the LDA method. 

 

Latent Dirichlet Allocation (LDA), a topic model designed for text documents. 

Terminology 

 

- "word" = "term": an element of the vocabulary 

- "token": instance of a term appearing in a document 

- "topic": multinomial distribution over words representing some concept 

 

.. versionadded:: 1.5.0 

 

Notes 

----- 

See the original LDA paper (journal version) [1]_ 

 

.. [1] Blei, D. et al. "Latent Dirichlet Allocation." 

J. Mach. Learn. Res. 3 (2003): 993-1022. 

https://www.jmlr.org/papers/v3/blei03a 

 

Examples 

-------- 

>>> from pyspark.mllib.linalg import Vectors 

>>> from numpy.testing import assert_almost_equal, assert_equal 

>>> data = [ 

... [1, Vectors.dense([0.0, 1.0])], 

... [2, SparseVector(2, {0: 1.0})], 

... ] 

>>> rdd = sc.parallelize(data) 

>>> model = LDA.train(rdd, k=2, seed=1) 

>>> model.vocabSize() 

2 

>>> model.describeTopics() 

[([1, 0], [0.5..., 0.49...]), ([0, 1], [0.5..., 0.49...])] 

>>> model.describeTopics(1) 

[([1], [0.5...]), ([0], [0.5...])] 

 

>>> topics = model.topicsMatrix() 

>>> topics_expect = array([[0.5, 0.5], [0.5, 0.5]]) 

>>> assert_almost_equal(topics, topics_expect, 1) 

 

>>> import os, tempfile 

>>> from shutil import rmtree 

>>> path = tempfile.mkdtemp() 

>>> model.save(sc, path) 

>>> sameModel = LDAModel.load(sc, path) 

>>> assert_equal(sameModel.topicsMatrix(), model.topicsMatrix()) 

>>> sameModel.vocabSize() == model.vocabSize() 

True 

>>> try: 

... rmtree(path) 

... except OSError: 

... pass 

""" 

 

@since('1.5.0') 

def topicsMatrix(self): 

"""Inferred topics, where each topic is represented by a distribution over terms.""" 

return self.call("topicsMatrix").toArray() 

 

@since('1.5.0') 

def vocabSize(self): 

"""Vocabulary size (number of terms or terms in the vocabulary)""" 

return self.call("vocabSize") 

 

def describeTopics(self, maxTermsPerTopic=None): 

"""Return the topics described by weighted terms. 

 

.. versionadded:: 1.6.0 

.. warning:: If vocabSize and k are large, this can return a large object! 

 

Parameters 

---------- 

maxTermsPerTopic : int, optional 

Maximum number of terms to collect for each topic. 

(default: vocabulary size) 

 

Returns 

------- 

list 

Array over topics. Each topic is represented as a pair of 

matching arrays: (term indices, term weights in topic). 

Each topic's terms are sorted in order of decreasing weight. 

""" 

if maxTermsPerTopic is None: 

topics = self.call("describeTopics") 

else: 

topics = self.call("describeTopics", maxTermsPerTopic) 

return topics 

 

@classmethod 

def load(cls, sc, path): 

"""Load the LDAModel from disk. 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

sc : :py:class:`pyspark.SparkContext` 

path : str 

Path to where the model is stored. 

""" 

        if not isinstance(sc, SparkContext):

raise TypeError("sc should be a SparkContext, got type %s" % type(sc)) 

        if not isinstance(path, str):

raise TypeError("path should be a string, got type %s" % type(path)) 

model = callMLlibFunc("loadLDAModel", sc, path) 

return LDAModel(model) 

 

 

class LDA(object): 

""" 

Train Latent Dirichlet Allocation (LDA) model. 

 

.. versionadded:: 1.5.0 

""" 

 

@classmethod 

def train(cls, rdd, k=10, maxIterations=20, docConcentration=-1.0, 

topicConcentration=-1.0, seed=None, checkpointInterval=10, optimizer="em"): 

"""Train a LDA model. 

 

.. versionadded:: 1.5.0 

 

Parameters 

---------- 

rdd : :py:class:`pyspark.RDD` 

RDD of documents, which are tuples of document IDs and term 

(word) count vectors. The term count vectors are "bags of 

words" with a fixed-size vocabulary (where the vocabulary size 

is the length of the vector). Document IDs must be unique 

and >= 0. 

k : int, optional 

Number of topics to infer, i.e., the number of soft cluster 

centers. 

(default: 10) 

maxIterations : int, optional 

Maximum number of iterations allowed. 

(default: 20) 

docConcentration : float, optional 

Concentration parameter (commonly named "alpha") for the prior 

placed on documents' distributions over topics ("theta"). 

(default: -1.0) 

topicConcentration : float, optional 

Concentration parameter (commonly named "beta" or "eta") for 

the prior placed on topics' distributions over terms. 

(default: -1.0) 

seed : int, optional 

Random seed for cluster initialization. Set as None to generate 

seed based on system time. 

(default: None) 

checkpointInterval : int, optional 

Period (in iterations) between checkpoints. 

(default: 10) 

optimizer : str, optional 

LDAOptimizer used to perform the actual calculation. Currently 

"em", "online" are supported. 

(default: "em") 

""" 

model = callMLlibFunc("trainLDAModel", rdd, k, maxIterations, 

docConcentration, topicConcentration, seed, 

checkpointInterval, optimizer) 

return LDAModel(model) 
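

# Sketch only (assumes an active SparkContext `sc`): preparing the
# (doc id, term-count vector) corpus that LDA.train expects and inspecting
# the learned topics. The helper name is illustrative.
def _example_lda(sc):
    from pyspark.mllib.linalg import Vectors
    corpus = sc.parallelize([
        [1, Vectors.dense([1.0, 0.0, 2.0])],  # doc 1: counts over 3 terms
        [2, Vectors.dense([0.0, 3.0, 1.0])],
    ])
    model = LDA.train(corpus, k=2, seed=1)
    # One (term indices, term weights) pair per topic, weights descending.
    return model.describeTopics(maxTermsPerTopic=2)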

 

 

def _test(): 

import doctest 

import numpy 

import pyspark.mllib.clustering 

try: 

        # NumPy 1.14+ changed its string format.

numpy.set_printoptions(legacy='1.13') 

except TypeError: 

pass 

globs = pyspark.mllib.clustering.__dict__.copy() 

globs['sc'] = SparkContext('local[4]', 'PythonTest', batchSize=2) 

(failure_count, test_count) = doctest.testmod(globs=globs, optionflags=doctest.ELLIPSIS) 

globs['sc'].stop() 

    if failure_count:

sys.exit(-1) 

 

 

if __name__ == "__main__": 

_test()