1. Maps
1.1 Constructing a Map
```scala
// the first form needs an explicit import; on Scala 2.13+ use HashMap.empty[String, Int] instead of `new`
import scala.collection.immutable.HashMap

val scores01 = new HashMap[String, Int]                           // empty map
val scores02 = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)  // key -> value syntax
val scores03 = Map(("hadoop", 10), ("spark", 20), ("storm", 30))  // tuple syntax
```
All three of the maps above are immutable (immutable.Map). To get a mutable map (mutable.Map), use:
```scala
val scores04 = scala.collection.mutable.Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
```
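To see the difference in practice: a mutable map can be updated in place, while updating an immutable map simply does not compile. A minimal sketch (not part of the original):

```scala
val mutableScores = scala.collection.mutable.Map("hadoop" -> 10)
mutableScores("hadoop") = 15        // fine: updates the entry in place

val immutableScores = Map("hadoop" -> 10)
// immutableScores("hadoop") = 15   // does not compile: immutable.Map has no `update` method
```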
1.2 Getting Values
```scala
object ScalaApp extends App {

  val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)

  // throws NoSuchElementException if the key is absent
  println(scores("hadoop"))
  // falls back to the given default (100) if the key is absent
  println(scores.getOrElse("hadoop01", 100))
}
```
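A safer alternative, not shown in the original snippet, is `get`, which wraps the result in an `Option` instead of throwing:

```scala
val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
println(scores.get("hadoop"))  // Some(10)
println(scores.get("flink"))   // None
```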
1.3 Adding, Updating, and Removing Values
A mutable map supports adding, updating, and removing entries.
```scala
object ScalaApp extends App {

  val scores = scala.collection.mutable.Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)

  scores("hadoop") = 100                    // update an existing key
  scores("flink") = 40                      // insert a new key
  scores += ("spark" -> 200, "hive" -> 50)  // add several entries at once
  scores -= "storm"                         // remove by key

  for (elem <- scores) { println(elem) }
}
```

Output:

```
(spark,200)
(hadoop,100)
(flink,40)
(hive,50)
```
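The mutable Map also offers method-style equivalents; `put` and `remove` additionally return the previous value wrapped in an `Option` (a supplementary sketch, not in the original):

```scala
val scores = scala.collection.mutable.Map("hadoop" -> 10)
println(scores.put("hadoop", 100))  // Some(10): the value that was replaced
println(scores.put("flink", 40))    // None: the key was absent
println(scores.remove("flink"))     // Some(40): the value that was removed
```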
An immutable map does not support adding, updating, or removing entries, but it does allow deriving a new map from an existing one.
```scala
object ScalaApp extends App {

  val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)

  // `+` derives a new map; `scores` itself is left unchanged
  val newScores = scores + ("spark" -> 200, "hive" -> 50)

  for (elem <- newScores) { println(elem) }
}
```

Output:

```
(hadoop,10)
(spark,200)
(storm,30)
(hive,50)
```
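Symmetrically, `-` derives a new map with a key removed (a supplementary example):

```scala
val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
val withoutStorm = scores - "storm"
println(withoutStorm)  // Map(hadoop -> 10, spark -> 20)
println(scores)        // unchanged: Map(hadoop -> 10, spark -> 20, storm -> 30)
```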
1.4 Traversing a Map
```scala
object ScalaApp extends App {

  val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)

  // iterate over the keys
  for (key <- scores.keys) { println(key) }

  // iterate over the values
  for (value <- scores.values) { println(value) }

  // iterate over the key-value pairs
  for ((key, value) <- scores) { println(key + ":" + value) }
}
```
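An equivalent style (a supplementary example, not in the original) uses `foreach` with a pattern:

```scala
val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
scores.foreach { case (key, value) => println(s"$key:$value") }
```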
1.5 The yield Keyword
The `yield` keyword can be used to derive a new map from an existing one.
```scala
object ScalaApp extends App {

  val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)

  // multiply every value by 10
  val newScore = for ((key, value) <- scores) yield (key, value * 10)
  for (elem <- newScore) { println(elem) }

  // swap keys and values
  val reversalScore: Map[Int, String] = for ((key, value) <- scores) yield (value, key)
  for (elem <- reversalScore) { println(elem) }
}
```

Output:

```
(hadoop,100)
(spark,200)
(storm,300)
(10,hadoop)
(20,spark)
(30,storm)
```
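Note that a for/yield over a map produces another map only when each iteration yields a key-value pair; yielding a single value produces an `Iterable` instead (a supplementary illustration):

```scala
val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
val tens = for ((_, value) <- scores) yield value * 10
println(tens)  // an Iterable[Int], printed e.g. as List(100, 200, 300)
```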
1.6 Other Map Implementations
If no implementation is specified, Map uses a HashMap by default. To use a TreeMap or a LinkedHashMap, you have to name it explicitly.
```scala
object ScalaApp extends App {

  // TreeMap keeps its entries sorted by key
  val scores01 = scala.collection.mutable.TreeMap("B" -> 20, "A" -> 10, "C" -> 30)
  for (elem <- scores01) { println(elem) }

  // LinkedHashMap keeps its entries in insertion order
  val scores02 = scala.collection.mutable.LinkedHashMap("B" -> 20, "A" -> 10, "C" -> 30)
  for (elem <- scores02) { println(elem) }
}
```

Output:

```
(A,10)
(B,20)
(C,30)
(B,20)
(A,10)
(C,30)
```
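A TreeMap can also be given an explicit Ordering when the default sort order is not what you want (a sketch; the reversed String ordering is my choice of example):

```scala
import scala.collection.mutable.TreeMap

val descending = TreeMap("B" -> 20, "A" -> 10, "C" -> 30)(Ordering[String].reverse)
descending.foreach(println)  // (C,30) (B,20) (A,10)
```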
1.7 Common Query Methods
```scala
object ScalaApp extends App {

  val scores = scala.collection.mutable.TreeMap("B" -> 20, "A" -> 10, "C" -> 30)

  println(scores.size)           // number of entries: 3
  println(scores.isEmpty)        // false
  println(scores.contains("A"))  // true
}
```
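Two further queries that come up often (supplementary, not in the original):

```scala
val scores = Map("A" -> 10, "B" -> 20, "C" -> 30)
println(scores.keySet)             // Set(A, B, C)
println(scores.filter(_._2 > 15))  // Map(B -> 20, C -> 30)
```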
1.8 Interoperating with Java
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
| import java.util import scala.collection.{JavaConverters, mutable}
object ScalaApp extends App {
val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
val javaMap: util.Map[String, Int] = JavaConverters.mapAsJavaMap(scores)
val scalaMap: mutable.Map[String, Int] = JavaConverters.mapAsScalaMap(javaMap) for (elem <- scalaMap) {println(elem)} }
|
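On Scala 2.13 and later, JavaConverters is deprecated in favour of scala.jdk.CollectionConverters, which provides the same conversions as extension methods:

```scala
import scala.jdk.CollectionConverters._

val scores = Map("hadoop" -> 10, "spark" -> 20, "storm" -> 30)
val javaMap: java.util.Map[String, Int] = scores.asJava
val scalaMap = javaMap.asScala  // scala.collection.mutable.Map[String, Int]
```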
2. Tuples
Tuples are similar to arrays, except that every element of an array must have the same type, while a tuple may contain elements of different types.
```scala
scala> val tuple = (1, 3.24f, "scala")
tuple: (Int, Float, String) = (1,3.24,scala)
```
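Elements can also be read positionally with the 1-based accessors `_1`, `_2`, `_3` (a supplementary example):

```scala
scala> tuple._1
res0: Int = 1

scala> tuple._3
res1: String = scala
```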
2.1 Pattern Matching
Pattern matching can be used to extract the values of a tuple and bind them to corresponding variables:
```scala
scala> val (a, b, c) = tuple
a: Int = 1
b: Float = 3.24
c: String = scala
```
If a position is not needed, an underscore can be used in its place:
```scala
scala> val (a, _, _) = tuple
a: Int = 1
```
2.2 The zip Method
```scala
object ScalaApp extends App {

  val array01 = Array("hadoop", "spark", "storm")
  val array02 = Array(10, 20, 30)

  // zip pairs the elements of the two arrays positionally
  val tuples: Array[(String, Int)] = array01.zip(array02)
  // an array of pairs can be converted straight to a Map
  val map: Map[String, Int] = array01.zip(array02).toMap

  for (elem <- tuples) { println(elem) }
  for (elem <- map) { println(elem) }
}
```

Output:

```
(hadoop,10)
(spark,20)
(storm,30)
(hadoop,10)
(spark,20)
(storm,30)
```
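The inverse operation, `unzip`, splits a collection of pairs back into two collections (a supplementary sketch):

```scala
val pairs = Array(("hadoop", 10), ("spark", 20), ("storm", 30))
val (names, values) = pairs.unzip
println(names.mkString(", "))   // hadoop, spark, storm
println(values.mkString(", "))  // 10, 20, 30
```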