
Learn Go With Pocket-Sized Projects

1. Welcome
2. 1 Meet Go
3. 2 Hello, Earth! Extend your hello world
4. 3 A bookworm's digest: playing with loops and maps
5. 4 A log story: create a logging library
6. 5 Gordle: play a word game in your terminal
7. 6 Money converter: CLI around an HTTP call
8. 7 Caching with generics
9. 8 Gordle as a service
10. 9 Concurrent maze solver
11. 10 Habits Tracker using gRPC
12. Appendix A. Installation steps
13. Appendix B. Formatting cheat sheet
14. Appendix C. Zero values
15. Appendix D. Benchmarking
16. Index
Welcome

Thank you for purchasing Learn Go With Pocket-Sized Projects! We hope
you will have fun with it and make immediate use of what you learn.

This book is for people who want to learn the language in a fun and
interactive way and become comfortable enough to use it professionally.
Each chapter is a standalone pocket-sized project. The book covers
features of the language such as implicit interfaces and how they help
with test design. Testing code is covered throughout the book. We want
to help people become modern software developers while using the Go
language.

The book also contains guidance for command-line interfaces, and for
both REST and gRPC microservices, showing how well the language is
suited to cloud computing. It ends with a project that uses TinyGo, a
compiler for embedded systems.

Each pocket-sized project is written in a reasonable number of lines.
Our goal is to provide varied exercises so that any developer who wants
to get started with Go, or to explore the features of the language, can
follow the steps described in each chapter. This is not a book for
learning development from scratch.

We encourage you to ask your questions and send us your feedback on the
content in the liveBook discussion forum, where the chapters are
categorized. We want you to get the most out of your reading and deepen
your understanding of the projects.

— Aliénor Latour, Donia Chaiehloudj, and Pascal Bertrand

In this book

Welcome
1 Meet Go
2 Hello, Earth! Extend your hello world
3 A bookworm's digest: playing with loops and maps
4 A log story: create a logging library
5 Gordle: play a word game in your terminal
6 Money converter: CLI around an HTTP call
7 Caching with generics
8 Gordle as a service
9 Concurrent maze solver
10 Habits Tracker using gRPC
Appendix A. Installation steps
Appendix B. Formatting cheat sheet
Appendix C. Zero values
Appendix D. Benchmarking
1 Meet Go

This chapter covers

Introducing the Go language and why you will want to learn it
Presenting this book and how to use it
Detailing why we wanted to write this book
Writing clean and tested code

This book offers you a set of fun projects through which to gradually
discover the features of the Go language. Each pocket-sized project is
written in a reasonable number of lines. Our goal is to provide varied
exercises so that any developer who wants to get started with Go, or to
explore the language, can follow the steps described in each chapter.
We want to help people become modern software developers while using
the Go language, and we will use our experience as software engineers
to provide meaningful advice to newcomers and seasoned developers
alike.

This book also contains guidance for implementing APIs with
microservices, demonstrating how well the language is suited to the
cloud. It ends with a project that uses TinyGo, a compiler for embedded
systems.

If you are a complete beginner at programming, we wholeheartedly
recommend starting with Get Programming with Go
(https://www.manning.com/books/get-programming-with-go).

1.1 What is Go?

Go is a programming language originally designed to solve the problems
of large-scale software development in the real world, first inside
Google and then for the rest of the business world. It targets slow
program builds, out-of-control dependency management, code complexity,
and the difficulty of cross-language builds.

Every language tries to solve these problems in its own way, either by
constraining its users or by being as flexible as possible. The Go team
chose to address them by aiming at modern engineering practices. That
is why the language comes with a rich toolchain.

The toolchain covers building and formatting code, package and
dependency management, static code analysis, testing, documentation
generation and viewing, performance analysis, language servers, runtime
program tracing, and much more.

Built for concurrency and networked servers, Go saw quick adoption in
software companies of all sizes within a few years.

In addition, Go is used by a large community of developers who share
their source code on public platforms for others to use or be inspired
by. As developers, we love to share and reuse what other clever people
have written.
1.1.1 History and philosophy

Go started in September 2007 when Robert Griesemer, Ken Thompson, and I
began discussing a new language to address the engineering challenges
that we and our colleagues at Google were facing in our daily work.

-- Rob Pike

Go was designed to improve productivity at a time when multicore
machines and large codebases had become the norm.

Its design choices were driven mainly by simplicity, making learning
the language a quick task. There are only 25 reserved keywords in the
whole language (before version 1.18 — a version you will read about a
lot). The rest is simply the meaning you want to give to it. And
poetry.

Even more importantly to us, it caters to the needs of the modern
software industry. Dependency management, tooling for unit tests,
benchmarking and fuzzing, formatting — all the usual tools of a
developer are built in and standardized.
1.1.2 Usage today

According to the 2021 Go language survey[1], Go is used overwhelmingly
for APIs and RPC services. The second most common use is writing
programs with a command-line interface (CLI), followed by web services
returning HTML, libraries and frameworks, data processing, and agents
and daemons. 8% of Go developers use it for embedded systems, 4% for
games, and 4% for machine learning or artificial intelligence.

Although it cannot yet call upon decades of libraries like
longer-established languages, it benefits from a vast and welcoming
community of teachers and open source contributors who create learning
material in every form — this very book included.

There is currently a strong demand for Go engineers. Learning the
language can therefore allow a great career leap forward. From the
authors' personal experience, the recruiting field covers areas as
diverse as financial technology, television, gaming, music, all kinds
of e-commerce platforms, aerospace research, and satellite-image
processing.
1.2 Why you should learn Go

This book aims to get you reading and writing Go on the job, in the
context of modern engineering — be it out of personal curiosity, as
part of a study exercise, or in the context of an industrial project.
We will not cover everything there is to know about the language, but
focus instead on the main things we need, as developers, to be
productive and efficient.

Let's see why Go is a good investment of your learning time as a
developer.
1.2.1 How and where can Go help you?

Go is a versatile language designed for maintainability and
readability. It is optimized for backend software development and fits
modern cloud technologies very well.

Considering that the average tenure in tech companies gets shorter
every year — it is now around a year — it is important that code
written by one person can be read by someone else after they leave the
company. It therefore matters that the language a company chooses
favors readability.

Go's key features make it a reliable and safe language with fast build
times.

Some applications leverage goroutines, a safer and less expensive way
of dealing with parallel computation than threads. Threads rely on the
OS, which imposes limits related to the size and power of the CPU,
whereas goroutines are handled at the application level. To make the
stacks small, Go's runtime uses resizable, bounded stacks. A newly
minted goroutine is given a few kilobytes, which is almost always
enough, and the stack can grow and shrink, allowing many goroutines to
live in a modest amount of memory. It is practical to create hundreds
of thousands of goroutines in the same address space.
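As a quick taste of that model, here is a minimal, hypothetical sketch
(our own illustration, not one of the book's projects) that starts a
goroutine and passes a value back over a channel:

package main

import "fmt"

func main() {
    results := make(chan string)

    // Start a goroutine: a lightweight, runtime-managed thread of execution.
    go func() {
        results <- "computed in a goroutine"
    }()

    // Receiving from the channel blocks until the goroutine sends its value.
    fmt.Println(<-results)
}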

Although it is not an object-oriented language and has no inheritance
system, Go supports most OOP features through composition, embedding,
and interfaces. The age-old argument that Go lacks generics, making it
verbose and forcing a lot of copy-and-paste boilerplate, became invalid
with the famous 1.18 release. As we write, discussions are still in
progress to make generics as flexible as they are simple.

Go is a compiled language, which means many mistakes are found at
compile time rather than at runtime. We would all rather learn about
our mistakes in the safety of our own computers than discover them in
production.

It is easier to run large-scale applications with Go than with many
other languages. Go was built by Google to solve problems at a
Google-sized scale, and it is ideal for large concurrent applications.

Cloud platforms love Go and provide first-class support for it as a
language. For example, cloud functions and lambdas support Go on most
of the widely used providers. Major cloud tools, such as Kubernetes and
Docker, are written in Go.
1.2.2 Where can Go NOT help you?

Despite its great versatility, there are a few use cases that Go is not
made to cover.

Go relies on a garbage collector to release the memory it uses. If your
application requires total control over memory, a lower-level language
such as those of the C family may be a better fit. Go can wrap
libraries written in C with cgo, a translation layer created to ease
the transition between the two languages. With this cgo twist, you can
also wrap dynamically linked libraries.
The Go toolchain mainly produces executables — generating a compiled
library is possible but painful, and we will not cover it in this book.
In many cases, updating a Go dependency implies rebuilding the binary
with the new version. This also means that, in order to use a Go
library, you need access to its source code.

The Go compiler supports a long list of platforms and operating
systems, but we would advise against writing an operating system in Go,
even though many brave souls have done it. The main reason is how
memory is handled in Go: the garbage collector regularly drops bits
that are no longer used. As with all garbage collectors, it is tunable,
but it will not release memory exactly how or when you want it to.

Go binary files are known to be larger than average. This usually isn't
a problem in a cloud environment, but if you require lightweight
programs, consider using the TinyGo compiler. See the last chapter of
this book for an introduction.

Finally, how difficult is it to google answers, seriously, when someone
names their language with such a common word? Obviously, Google did it
themselves. Here is a pro tip: when trying to find an answer, search
for "golang", which is not the language's actual name but is what
search engines will recognize. Sometimes it still feels like trying to
find documentation for C on the web — you don't always get what you
were expecting. We could also mention the difficulty of hiring Go
developers, which is, frankly, good for us developers.

1.2.3 Comparison with commonly used languages

The main reason to use a language other than Go, up to 2021, was the
absence of generics and a lack of maturity in parts of the ecosystem.
That was before 1.18, a version that changed the game.

We have summed up some of the features developers consider when
choosing a language to address their project's needs. In our
experience, considerations such as verbosity and garbage collection
matter less today than the features that make us more efficient.
Table 1.1 Comparison of four programming languages

Design philosophy — C: procedural, low-level; Python: high-level,
object-oriented, multiparadigm; Java: high-level, object-oriented; Go:
high-level, data-oriented, procedural, supports most OOP features.

Error management — C: via return values; Python: via exceptions; Java:
via exceptions; Go: errors are values.

Typing — C: static; Python: dynamic; Java: static; Go: static.

Compilation — C: compiled; Python: run in an interpreter; Java:
compiled, runs in a virtual machine; Go: compiled.

Concurrency — C: OS-level threads; Python: OS-level threads; Java:
OS-level threads and libraries; Go: goroutines and channels.

Interfaces — C: none; Python: explicit; Java: explicit; Go: implicit.

Memory release — C: full control; Python: garbage collected; Java:
garbage collected; Go: garbage collected.

Main use cases — C: high performance with low overhead; Python: data
analysis; Java: web applications; Go: web APIs and cloud computing.

Testing tools — C: external frameworks; Python: built-in and external
tools; Java: external framework (JUnit); Go: native tooling (test,
bench, fuzz).

1.3 Why pocket-sized projects?

At the end of the 19th century, scientists started to theorize about
learning. Among them, John Dewey wrote in 1897 a long list of good
reasons why doing is the best way to learn. Since then, experience has
proven his claims in many education systems and learning situations.

The projects that we offer here are scoped for busy people. We made
sure to keep them as small as possible while still making them
rewarding. We admit that some of them are more fun than others, but
these are the ones most useful in real-world projects.

1.3.1 Who this book is for

First and foremost, we thought of developers who know and use another
language and want to expand their professional skills. We want to get
you up and running by sharing practical uses of the language. We focus
on industry standards and long-term considerations, not just one-shot,
throwaway code that you wouldn't bother to test.

We have also thought of people who are evaluating Go for their next
project. Diving into Go's features will help you decide whether it is
the best language for you to invest in.

1.3.2 What you'll know after reading this book (and writing the code)

First, we want to make sure that you understand what the book explains.
For this, we will walk you through each chapter's journey, describing
the implementation of the code iteratively, on a commit-by-commit
basis, whenever we consider it important to understand what is going
on, bit by bit.

Second, we provide good and clear examples of writing industry-grade Go
code — recommendations that apply beyond our examples and that will
help you step into the real world of development. All of our examples
contain functions that could be reused in a company setting.

Finally, our goal is to make you realize that you can write great Go
code yourself once you've grasped the basics.

We start at Hello-World level, exploring the syntax of the language,
and proceed all the way to a service ready to be deployed in the cloud,
walking you through architecture decisions along the way.
Grammar and syntax

The first chapters focus on Go-specific grammar. For example, how all
loops start with the same keyword, how break is implicit in a switch,
but also how to expose — or not — some of your constants and methods
(what Java calls public or private).

What Go calls an interface was designed to be implemented implicitly.
In most other major languages, for an entity (or class) to be
considered as implementing an interface, it needs to state so
explicitly in its definition. In Go, implementing the methods is
enough. You can therefore accidentally implement an interface that you
didn't even know about. This opens a whole world of new possibilities
in how we envision mocking and stubbing, dependency injection, and
interoperability.
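As a small, hypothetical sketch of that idea (the Speaker interface and
Dog type below are ours, not the book's), a type satisfies an interface
merely by having the right methods:

package main

import "fmt"

// Speaker is satisfied by any type that has a Speak() string method.
type Speaker interface {
    Speak() string
}

// Dog never mentions Speaker, yet it implements it implicitly.
type Dog struct{}

func (Dog) Speak() string { return "Woof" }

func main() {
    var s Speaker = Dog{} // accepted by the compiler: Dog has Speak() string
    fmt.Println(s.Speak())
}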
Although goroutines are a great feature of Go, we won't dwell on them.
In our experience, you can write efficient programs in Go without them,
and only one project in this book uses them.

Finally, as you will learn in this book, Go doesn't use exceptions. It
prefers to consider errors as values. This changes the way we deal with
flows that don't follow the happy path, where nothing ever fails. Every
program has to deal with errors at some point, and we will cover this
throughout the projects.
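As a minimal illustration of this style (our own example, not one of
the book's projects), a function returns an error as an ordinary value
and the caller checks it explicitly:

package main

import (
    "errors"
    "fmt"
)

// divide returns an error value instead of raising an exception.
func divide(a, b int) (int, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    result, err := divide(10, 0)
    if err != nil {
        fmt.Println("something went wrong:", err)
        return
    }
    fmt.Println(result)
}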
Testing code

Every chapter includes unit tests.

All of them.

No developer today would dream of shipping code to production that
isn't covered by at least some tests, whatever their level. They are
indispensable for any piece of software that needs to evolve. That's
why we include unit tests everywhere.

Go is also great at benchmarking various algorithms with a built-in
command. It allows developers to compare versions too, which means you
can use it to check, on every commit, that code-level performance did
not decrease. You'll see some examples throughout the book.

One last, recent addition to the Go testing toolchain is fuzzing.
Fuzzing is a way of testing a system by throwing random values at it
and seeing how it behaves. It is a great help in checking for
vulnerabilities.
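To make those two tools concrete, here is a small, hypothetical sketch
(the reverse function and file name are ours, and fuzzing needs Go 1.18
or later). The benchmark runs with go test -bench=. and the fuzz test
with go test -fuzz=Fuzz:

// reverse_test.go
package main

import "testing"

// reverse returns its input with the bytes in opposite order.
func reverse(s string) string {
    b := []byte(s)
    for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
        b[i], b[j] = b[j], b[i]
    }
    return string(b)
}

// BenchmarkReverse measures how fast reverse runs.
func BenchmarkReverse(b *testing.B) {
    for i := 0; i < b.N; i++ {
        reverse("Hello world")
    }
}

// FuzzReverse feeds random strings to reverse and checks a simple property.
func FuzzReverse(f *testing.F) {
    f.Add("Hello world") // seed corpus entry
    f.Fuzz(func(t *testing.T, s string) {
        if reverse(reverse(s)) != s {
            t.Errorf("reversing twice did not return the original: %q", s)
        }
    })
}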
Clean code best practices

Any code of yours that you haven't looked at for six months or more[2]
might as well have been written by someone else.

-- Eagleson's Law

While the first few projects fit in a single file, we will quickly need
to organize the code in a way that makes it easy to maintain. By
maintainable, we mean that it allows a newcomer to find their way
through the code in order to fix a bug or add a feature. This
hypothetical newcomer might be you, surprisingly soon.

We suggest and explain several code organization practices. We believe
that Go is great for domain-driven design, and we organize our code
accordingly. Of course, there is no single directory organization for a
project, but we aim for what makes the most sense.

What to expose and what to keep to yourself has preoccupied humanity
for millennia, and software developers for decades. This question is
covered whenever we create anything beyond a single package.
Architecture decisions

As Go is often used for writing services that are deployed in cloud
environments, we have added two projects to help you choose your
favorite protocol: one serving HTML over HTTP, the other using protobuf
over gRPC. You will write fully functional services that you can easily
deploy in order to play around, see what you prefer, and find what best
fits your needs.

Once they are up and running, you need to monitor what is happening
inside your programs. One of the early and easy projects is a logging
library that goes a bit further than what the default standard library
does. Another project reads a standard API and acts as an
anti-corruption layer to insert the data from that API into your
domain. A third one is a simple load balancer whose traffic-routing
system you can make more complex according to your needs.
IoT is fun

The final project is designed to run on a microcontroller, using a
different compiler. It is not enough to make you an embedded-systems
expert; it only serves as an introduction to some lesser-known
capabilities of Go. We hope it will tickle your creativity, and we hope
you will enjoy it as much as we do.

1.4 Summary

Go is a modern, industry-oriented, simple and versatile language, great
for backend development, widely used for cloud-oriented tools, great
for CLIs, and even adapted to embedded systems.
It is easy to learn, and teams quickly become productive with it;
writing convoluted, frustrating code is possible, but not easy.
This book: learn by doing, with pocket-sized projects that each take
only a few hours to get you up and running.
[1] https://go.dev/blog/survey2021-results

[2] Honestly, six months is generous.
2 Hello, Earth! Extend your hello world

This chapter covers

Writing to the standard output
Testing writes to the standard output
Writing table-driven tests
Using a hash map to hold key-value pairs
Using flags to read command-line parameters

As developers, our main task is to write valuable programs. These
programs are executed on computers; they accept some input (e.g., keys
pressed on a keyboard, a signal received from a microphone) and produce
some output (e.g., emitting a beep, sending data over the network). The
simplest program of all does nothing and just exits. That wouldn't be a
very satisfying introduction to coding, would it? Instead, let's have a
warm welcoming message!

Since 1972, programmers have been discovering their new languages
through variations of the same sentence: Hello world. A programmer's
first autonomous steps are therefore often to alter this standard
message and see what happens when the greeting changes a little. Type,
compile, run, smile. This is what being a developer is about.

A history of the greeting program

The Hello world greeting program was popularized by Brian Kernighan and
Dennis Ritchie's book "The C Programming Language", published in 1978.
The original sentence comes from an earlier publication, also by Brian
Kernighan, "A Tutorial Introduction to the Language B", published in
1972. It was, in all honesty, only the second example printing
characters in that publication — the first had the program print
hello!. The reason is that B had a limit on the number of ASCII
characters it could hold in a single variable — a variable couldn't
hold more than 4 characters. Hello, world!, as a result, was achieved
with several calls to the printing function. The message was inspired
by a cartoon of a chick hatching out of its egg.

The goal of this chapter is to go a little beyond that simple step.
Let's consider that good code needs to be both documented and tested.
For this reason we will have to understand how to test a function whose
purpose is to write to the standard output. On top of that, thanks to
Go's support of Unicode characters, this first chapter will be our
opportunity to greet everyone in languages other than English, and in
writing systems other than the Latin alphabet.

If you don't have the Go compiler on your machine yet, install it by
following the steps in Appendix A. We will assume, from now on, that
the setup of your development environment is complete.

Requirements

Write a program that takes the language of your choice and prints the
related greeting.
This program must be covered by unit tests.

2.1 Every journey starts at home

Our journey starts where every developer's begins: on a chair, in front
of a desk and a screen. To begin this great adventure, let's write a
small program that greets us every time we run it — the well-honed
hello world. As good programmers, we also want to make sure this code
works as intended, so we will test it properly.

As stated in Appendix A, Go code lives inside modules. Start fresh in a
new directory by initialising the module with go mod init, followed by
the name you choose for the package. This name is usually the path of
your code repository.

go mod init example.com/your-repository

or

go mod init learngo-pocket/hello
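Running that command creates a go.mod file at the root of the
directory. Assuming the second module name above, its content looks
roughly like this (the Go version line will match your installed
toolchain, so it may differ):

module learngo-pocket/hello

go 1.21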

2.1.1 Our first program: main.go

How can we get a program to print a message to our screen? Let's get to
it! We need to write the following code in a file named main.go.

Listing 2.1 main.go

package main

import "fmt"

func main() {
    fmt.Println("Hello world")
}

Phew! That's a lot for a first task. Before we take a deep dive into
these lines, you might prefer some instant gratification and run this
first program. The go run command to do so is as follows (run it in the
same directory as the main.go file). Hopefully you will feel the same
joy as we did when seeing the expected message appear on the screen!

> go run main.go
Hello world

Yay!

What's in a name?

As programmers, the biggest challenge we face daily when writing code
is giving names to variables, constants, types, package aliases,
functions, or files. Or to save files, microservices, terminals,
namespaces, and so on. The list is endless. Here are a few pieces of
advice that will help you name things in your future projects:

· If the scope of the variable is limited — say, two or three lines — a
one- or two-letter placeholder is perfectly valid. However, don't pick
it at random. Use something that immediately reminds you of the purpose
of the variable. We will use l for language and tc for testCase later
in this chapter.

· Stay consistent across functions: if variables represent the same
entity, use the same name.

· Otherwise, use a name that clearly refers to the entity. There is
usually no need for abbreviations, unless they are used everywhere else
in the code; addr for an address, or id, will be clear and
understandable enough for everyone. Think of rows, columns, books,
addresses, and orders...

· Go's convention, when it comes to names, is to use camelCase for
unexposed functions, types, variables, and constants. For packages, try
as much as possible to use a single word.

· Go variables don't need to describe their type. Hungarian notation is
not used in Go. Your IDE should be helpful enough to let you know
whether a variable is a pointer or a value.

· Finally, variable names cannot start with a digit — nor can
functions, types, or constants.

The next step is to understand what we just wrote. Indeed, our mission
as Go developers would be hard to achieve by just copying what we can
find in external resources. There is a part of coding that involves
creativity, and it shouldn't be neglected. As with every craft,
practice makes perfect, and very soon we should have acquired enough
knowledge to dare alter this first program of ours to meet whatever
inspiration strikes. This book will guide us through the various steps
that will eventually ensure confidence through understanding.

To begin, let's focus on the first line of the program.

package main

Every Go file starts with the name of its package — in this case,
main. Packages are Go's way of organizing code, similar to modules or
libraries in other languages. For now, everything fits in the main.go
file, which must reside in the main package. We will see more about how
to make and use packages in Chapter 3.

The main package is a little special, for two reasons. First, it
doesn't respect Go's convention of naming the package after its
directory (or the other way around). Second, this is how the compiler
knows that the special function called main() will be found here. The
main() function is what gets executed when running the program.

After the package name comes the list of imports that this file
requires. Imported packages can come from the standard library or from
third-party libraries.

import "fmt"

Most Go programs rely on external dependencies. A single Go file,
without the help of imported packages, can only handle a limited set of
things. For the sake of the language's much-touted simplicity, these
built-in capabilities don't offer much, whereas the possibilities
offered by dependencies are virtually endless.

In order to use external dependencies, we need to import the packages
where they live. This is exactly what the import keyword does — it
gives visibility over the functions and variables offered in a specific
package from somewhere else. External libraries are identified by the
address of their repository; more on this later. For the moment, the
important piece of information to remember is that any import that
doesn't look like an address comes from the standard library, meaning
it ships with the compiler.

In our case, we use fmt, the standard-library package that formats and
outputs any kind of data, thanks to its Println function, which stands
for "print with a new line". A very useful function for cheap
debugging!

Finally, we have the main() function itself. It doesn't take any
argument and doesn't return anything. Simple. Go is a simple language.

func main() {
    fmt.Println("Hello world")
}

From the fmt package, the Println function writes to the standard
output. If you give it an integer or a boolean variable, it displays
the human-readable version of that entity. Println is one sibling in a
large family of functions responsible for formatting messages.

Note that indentation in Go is done with tabs. No need to start an
argument about it; it is written in the documentation, and everybody
does it that way.
A capital question

You may wonder why Println starts with a capital letter. The long
chapter on scope and visibility boils down to this:

· Any symbol that starts with a capital letter is exposed to users
outside of the package;

· Anything else is not accessible from outside the package. Common
examples of unexposed names include those starting with a lowercase
letter and those starting with an underscore.

This applies to variables, constants, functions, and types.

And that's it. Really.

That's why the name of the Println function starts with a capital
letter.
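Here is a tiny, hypothetical sketch of that rule (the greetings package
and its identifiers are ours, not the book's): only the capitalized
names are visible to other packages.

// Package greetings is a hypothetical package illustrating visibility.
package greetings

// Message is exposed: code importing "greetings" can read greetings.Message.
const Message = "Hello world"

// defaultLang is unexposed: it is only visible inside this package.
const defaultLang = "en"

// Greet is exposed and can be called from other packages.
func Greet() string {
    return Message + " (" + defaultLang + ")"
}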

2.1.2 Let's test it with an Example

Good job — now that we've written the program, we can test it! As we
will see, this isn't the only order in which to develop — sometimes we
can start by writing the tests, and then the code. What matters is that
code and tests go hand in hand. Writing code without tests is as risky
as trusting that your brand-new toaster will return perfectly crispy
yet still tender slices of bread on its first use, without any check of
its settings.

But what is a test, anyway? By "test", we mean automated (or at least
automatable) checks that don't rely on human judgment. The tests can be
written in shell, in the program's own language, in Go, or in any
language of your choice. A test has to be able to tell its human user
that everything is fine, or that something isn't — in which case, some
details are always welcome. For this first project, let's consider that
running the code and "seeing" that the output is Hello world isn't
enough — at least not as the only check of our code. What if the space
character between the two words wasn't a regular space but a
non-breaking space character, which we humans can't distinguish from a
regular space? The output strings wouldn't be identical, but we would
have no way to know.

And why should we test at all? After all, the code did what we wanted
when we ran it, right? While this is true, it is only true once. In a
larger project, where a piece of code isn't executed a single time but
constantly re-run and modified, tests are a method of making sure we
don't break previous behavior. Tests are a crucial building block of
any continuous integration pipeline — if not the most important one.

Example versus Test

A small technical aside is needed here. While Go functions usually
return values, very few write directly to the standard output. The
testing strategy that we implement here is only needed when checking
the standard output, which means it won't be the default approach for
the rest of the code. However, since this is our first function, and
since we want to test it, this is the easy way. We will see more about
testing functions very soon.

Examples are not only used for testing the standard output; they also
serve, as their name suggests, as a good starting point for the users
and maintainers of your code. They appear in the documentation
generated by go doc.

Go offers a lot of tools for testing code — let's use them! Here, our
target is to test main, a task that is actually quite uncommon. The
vast majority of Go code lies in other functions — if not other
packages — and those are the functions that we test heavily. Most of
the time, the main function will call those tested functions and will
simply be responsible for printing a string or returning a status code.
Beyond this occasion, the tests in this book won't be about the main
function, but about the functions it calls.

First, we need a test file, which we will name main_internal_test.go,
for the following reasons:

main because the file we test is named main.go;
internal because we want to access unexposed methods — a convention
this book chooses to follow;
test because it is a test file.

When it comes to building or running the program, *_test.go files are
ignored by the compiler; only go test can run them.

Internal and external tests

There are two approaches to testing. In the first case, we test from
the user's point of view, so we can only check what is exposed; we call
these external tests. External test files should be in the
{package name}_test package.
In the second case, we know everything that goes on inside, and we want
to test the unexposed functions. These test files should be in the same
package as the source files.
The two approaches aren't exclusive, and should be seen as
complementary.

Checking the standard output

So how do we test it? How can we make sure that something is sent to
the standard output from inside a function? Go provides a tool based on
a test function named Example<FunctionName> that can be used to check
the standard output of that function. If a function of that name — in
our case, ExampleMain — is defined in a test file, matching it against
the expected standard output is enough to verify it. Even though main
isn't exposed, the function name is in PascalCase and therefore needs a
capital M here.

Listing 2.2 main_internal_test.go: Testing the printed output

package main

func ExampleMain() {
    main()        #A
    // Output:    #B
    // Hello world
}

The ExampleMain test function wraps a call to the tested main function.

To assert that the expected output message Hello world was sent to the
standard output, we use Go's Example syntax, which lets us write a
comment line containing Output: . Any comment lines that immediately
follow it are honored as the expected value that the go test tool uses
to check the output produced by the body of this Example function.

An Example function without this output comment will be compiled but
not executed during tests. It will still appear in the documentation
and can be very useful to the users of your code.
Let's run the tests

To run a test, call the go test command in the directory of the test.

> go test

PASS
ok      learngo-pocket/hello    0.048s

The output lists the test files Go went through. Each line shows the
name of your module followed by the path to the package inside it.

Writing tests comes with many benefits.

Note

It is important to keep in mind this quote from Edsger Dijkstra:
"Testing can be used to show the presence of bugs, but never their
absence!" A single test will not prove that a piece of code is
bug-proof. The more tests we have, the more reliable the code becomes.

First, we have an automatable process that checks that the code we
wrote produces a defined output. Second, with this test in place, we
can start changing the code — and each change, every little alteration
we make, can be validated by running the previous tests. Finally, and
this will be covered in much more detail in a later chapter, Examples
play an important part in Go documentation.
2.1.3 Calling a greet function

The goal of this chapter is not only to print a nice message to the
user. We want our main function to do two distinct things: first,
determine a specific message, and second, print it. We bundled
everything on a single line in the previous code, but that doesn't
leave any room for adaptation.

Since we intend to enrich the message, we need some flexibility here.
We will start by extracting the message into a dedicated greet
function. This function returns the greeting as a string that we can
store in a variable and keep processing in main — a first step towards
a more modular program.

Below is the full amended code with the extraction.

Listing 2.3 main.go: Moving the Println call

package main

import "fmt"

func main() {
    greeting := greet() #A
    fmt.Println(greeting)
}

// greet returns a greeting to the world.
func greet() string { #B
    // return a simple greeting
    return "Hello world"
}

Let's take a closer look.

The new function is called greet, as it returns the greeting message.
For now, it takes no parameter and simply returns the message in the
form of a string.

// greet returns a greeting to the world.
func greet() string {
    return "Hello world"
}

In the main function, we call the new greet function and store its
output in the greeting string variable, which we then print.

func main() {
    greeting := greet()
    fmt.Println(greeting)
}

We refactored. Does the test still pass? It should, but it is no longer
the only test we can have, since greet offers much more flexibility. We
can now write a test around greet.
2.1.4 Testing a specific function with the testing package

Refactoring, as we just did, should not change the code's behavior. We
can still run our previous test, and it should still pass. But since we
will want to enrich the greet function, it is the one we should cover
with dedicated tests, bare as it is for now.

Go, as part of its standard library, offers the capabilities of the
testing package. We will use it a lot in this book, trying to benefit
from every aspect that Go's designers put into the language, so we
don't have to write our own tools or spend time evaluating independent
testing libraries. As its name subtly suggests, this package is written
for writing tests.

We have already seen the Example<FunctionName>() syntax, used for
documentation and for testing the standard output. Let's dive into a
new set of test functions: those with the
Test<FunctionName>(t *testing.T) signature.
There is an important difference here with the previous section: these
functions accept a parameter — a pointer to a testing.T structure. The
reasons for using a pointer here are beyond the scope of this chapter;
we will cover them later.

A TestXxx function runs one or several checks on a function, as defined
by the developer. We will start with one, and grow from there. A check
consists of calling the function and comparing its returned value, or
the state of some variables, against a wanted value or state. Should
they match, the test is considered as passing. Otherwise, it is
considered as failing.
All tests have four main steps, illustrated in the sketch after this
list:

The preparation phase, where we set up everything we need to run the
test: input values, expected outputs, environment variables, global
variables, network connections, and so on;
The execution phase, where we call the tested function — this step is
usually a single line;
The decision phase, where we check the output against the expected
output — this can involve several comparisons and evaluations, and
sometimes some processing — and have the test either fail or pass;
The teardown phase, where we graciously clean up and return to whatever
the state was before we executed the test: anything that was changed or
created during the preparation should be reset or destroyed here. The
defer keyword is Go's way of making this step very simple.
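Here is a minimal, hypothetical skeleton (our own illustration, with a
made-up greetingPrefix variable) showing where each of the four phases
typically lives in a Go test:

package main

import "testing"

// greetingPrefix is a hypothetical package-level variable our test modifies.
var greetingPrefix = "Hello"

func TestSomething(t *testing.T) {
    // Preparation phase: set up inputs, expectations, and any shared state.
    want := "Hi world"
    greetingPrefix = "Hi"
    // Teardown phase: deferred, so it restores the state after the test body runs.
    defer func() { greetingPrefix = "Hello" }()

    // Execution phase: call the function under test (a stand-in expression here).
    got := greetingPrefix + " world"

    // Decision phase: compare the output with what we wanted.
    if got != want {
        t.Errorf("expected: %q, got: %q", want, got)
    }
}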
Our TestGreet function will be written in the same
main_internal_test.go file as before, mainly because the tested greet
function also lives in the same main.go file. Let's look at the
additions we bring to the file.

In Go, we like to use want for the expected value and got for the
actual one.

Listing 2.4 main_internal_test.go: Testing greet

package main

import "testing"

func TestGreet(t *testing.T) {

    want := "Hello world"

    got := greet()

    if got != want {
        // mark this test as failed
        t.Errorf("expected: %q, got: %q", want, got)
    }
}

The first difference with the previous version of this file is that we
need to import the testing package, because we use its testing.T type
in our TestGreet function.
This is a line that will appear in every test file we see as Go
developers. Its absence should be a red flag when reviewing code.

import "testing"

The second important change in this file is, of course, the new
TestGreet function.

func TestGreet(t *testing.T)

We have added comments in the body of this function so that it follows
the previously described steps.

The preparation step, in our case, consists of defining the expected
output of the greet function call. Since this test doesn't alter the
environment, there is nothing to restore after the test has run, and we
don't need to defer any teardown step.

The execution phase simply consists of calling the tested greet
function and, of course, capturing its output in a variable.
Listing 2.5 main_internal_test.go: Body of the test

want := "Hello world" #A

got := greet() #B

if got != want { #C
    // mark this test as failed
    t.Errorf("expected: %q, got: %q", want, got)
}

The decision phase isn't too difficult here. We need to compare two
strings, and we accept no alteration, so the != comparison operator
works fine for us. We will soon face cases where comparing two strings
isn't enough, but let's not skip ahead, as we still have a last line
here that needs more explanation.

t.Errorf("expected: %q, got: %q", want, got)

So far, the need for the t parameter wasn't obvious. As said earlier, a
test needs to be either a PASS or a FAIL. Calling t.Errorf is one of
the ways to let the go test tool know that this particular test failed.
Errorf has a signature similar to Printf; see Appendix B for more about
formatting strings.
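As a quick, hypothetical illustration of why we use the %q verb here
rather than %s: %q quotes the string, so invisible differences (such as
a trailing space) show up in the failure message.

package main

import "fmt"

func main() {
    got := "Hello world " // note the trailing space

    fmt.Printf("with %%s: expected: %s, got: %s\n", "Hello world", got)
    // with %s: expected: Hello world, got: Hello world 

    fmt.Printf("with %%q: expected: %q, got: %q\n", "Hello world", got)
    // with %q: expected: "Hello world", got: "Hello world "
}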

Once again, you can run the tests with the same command as before:
go test.

Before we move on to the next section, now is the time to play a
little. Change the contents of want and run the tests again.

The reason for this early refactoring might not be apparent right now.
By the end of this chapter, however, as we implement new features in
our code, the file will grow in size. It is good practice, in Go as in
many other programming languages, to keep the scope of a function
narrow. This serves several purposes:

making the code verifiable;
making debugging easier;
making the responsibility of a function clear.

In general, the cognitive load of a function should be minimal. Nobody
wants to face a wall of text made of many levels of indentation.

We have now written a program that greets the user with a lovely
message. We know it works well, because we wrote tests to cover the
code. But there is a small catch: it will only ever greet you in
English. Our program could be improved to be usable by people who speak
languages other than English. Imagine you are applying at a Canadian
company, where employees speak both French and English. How nice would
it be if they could use it too, and be greeted in a language of their
choice?

2.2 Do you have a language of choice?
Our program is very static. It will always run and print the same
message, regardless of the user. Let's adapt our code to support
several languages, and let the user decide which one they want. In this
section, we will:

Add support for a new language in the greet method
Handle the user's language request
Adapt the tests and make sure we don't break the previous behavior

In order to display Hello world in a different language, we need to be
able to tell the program which language we want to use. This will be
done in two steps: the first is supporting a new language, and the
second is opening our program to the user's choice of language.
2.2.1 Do you speak French? Switching over

Our current greet function only returns a hardcoded message. Since it
should now be able to return more than one message, we want some logic
in it to determine which greeting to output. There are several options
for this in Go. The first one that comes to mind, the if statement,
would only work for one or two languages. Beyond that, the code becomes
an unnecessarily long list of checks. Here, we will explore the two
other options. Since we need to support another language, let's pick
French. Here is what the whole code now looks like:
Listing 2.6 main.go: Adding a new language

package main

import "fmt"

func main() {
    greeting := greet("en")
    fmt.Println(greeting)
}

// language represents the code of a language
type language string

// greet says hello to the world in the specified language
func greet(l language) string {
    switch l {
    case "en":
        return "Hello world"
    case "fr":
        return "Bonjour le monde"
    default:
        return ""
    }
}

Explicit through typing

Using proper variable types is important. We need to know what we are
talking about. And what we are talking about here is a language
parameter that will be used to determine which greeting message should
be returned by the greet function. This language parameter could be a
string that contains the language's description. It could be an integer
referring to an index in a list of supported languages. It could be the
address of a dictionary entry. It could be many things. For now, we
will keep it simple and use a string.

type language string

The language type will hold a string that represents a language. This
type definition helps us and the users of our library understand what
values are expected, and makes mixing up parameters much harder.
Choosing the right language

Now that we have a type, we can pass it as a parameter to the greet
function.

The new signature becomes:

func greet(l language) string

To call it, we changed the first line of our main function:

greeting := greet("en")

How does the compiler know whether "en" is a string or a language? It
looks at the signature of the called function: greet requires a
language, so the constant is typed as such.

For this first version, we can add a switch on the language and return
the corresponding greeting. The default value, for the time being, is
just an empty string. Remember that a switch is clearer when dealing
with most types — the exceptions being errors, pointers, and short
lists of conditions.

Listing 2.7 main.go: Switching on the language

switch l {
case "en":
    return "Hello world"
case "fr":
    return "Bonjour le monde"
default:
    return ""
}

Note that between each case, contrary to many other languages, we don't
break. Breaking is implicit in Go, because falling through is a
potential source of bugs. Of course, since we return in each case here,
the point is moot — but now you know.
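When the fall-through behavior is genuinely wanted, Go requires you to
ask for it explicitly with the fallthrough keyword — a small,
hypothetical illustration of ours:

package main

import "fmt"

func main() {
    level := 2

    switch level {
    case 3:
        fmt.Println("three")
        fallthrough // explicitly continue into the next case
    case 2:
        fmt.Println("two")
        fallthrough
    case 1:
        fmt.Println("one")
    default:
        fmt.Println("none")
    }
    // With level == 2, this prints "two" and then "one".
}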
In the main function, we need to pass the desired language to the
upgraded greet function and print its output — for example, "en" for
English.

2.2.2 Adapting the tests with Test<Function> functions

Previously, the greet function accepted no parameter. It now takes one,
which means we broke the contract we had with the users of our code.
Sure, for now, the only user of greet is the test, but that still
counts. We now want to test the greet function with an input.

We will make a call to the greet function, passing the desired input
language, and store the output in a variable so we can validate it. The
preparation phase now has two variables: the desired language, and the
expected greeting message.

Let's use a new convention of the testing package: when testing a
function with two (or more) different scenarios, we can write several
functions, named Test<FunctionName>_<ScenarioName>. The full test file
now looks like this:
Listing 2.8 main_internal_test.go: Separated test cases

package main

import "testing"

func ExampleMain() {
    ...
}

func TestGreet_English(t *testing.T) {

    lang := language("en") #A
    want := "Hello world"

    got := greet(lang) #B

    if got != want { #C
        // mark this test as failed
        t.Errorf("expected: %q, got: %q", want, got)
    }
}

func TestGreet_French(t *testing.T) {

    lang := language("fr") #A
    want := "Bonjour le monde"

    got := greet(lang) #B

    if got != want { #C
        // mark this test as failed
        t.Errorf("expected: %q, got: %q", want, got)
    }
}

func TestGreet_Akkadian(t *testing.T) {

    // Akkadian is not implemented yet!
    lang := language("akk") #A
    want := ""

    got := greet(lang) #B

    if got != want { #C
        // mark this test as failed
        t.Errorf("expected: %q, got: %q", want, got)
    }
}

As you can see, the TestGreet_English function is responsible for
testing the English greeting, while the TestGreet_French function
checks the French message. While this approach is interesting and worth
remembering, you will have noticed that in our case, really nothing
changes between the English and the French scenarios. Only the
preparation step differs, and only a little. The next section improves
on this.

To run the tests, simply run your new favorite command: go test.

As you saw, we added a function to test a language that is unknown to
the program. Testing isn't always about making sure "good" inputs
produce "good" outputs. Making sure the safety nets are in place is
almost as valuable as making sure the code works as intended.

2.3 Supporting more languages with a phrasebook

Adding more cases to a switch statement reduces the readability of the
code: it increases the size of the function, sometimes beyond the size
of the screen, when the only answer we need is "if this language is
supported, give me its greeting". In order to trim down our function
without losing any functionality, we decided to use a very common and
useful data structure of Go: the map. A map is a hash table, a
collection of distinct keys and their associated values. In this
section, we will:

Scale the number of supported languages
Introduce the use of maps

2.3.1 Introducing Go maps

Let's look at the implementation of the code using a map to store the
pairs of languages and greeting messages:

Listing 2.9 main.go: Using a map

package main

import (
    "fmt"
)

func main() {
    greeting := greet("en")
    fmt.Println(greeting)
}

// language represents the code of a language
type language string

// phrasebook holds the greeting for each supported language
var phrasebook = map[language]string{
    "el": "Χαίρετε Κόσμε",     // Greek
    "en": "Hello world",       // English
    "fr": "Bonjour le monde",  // French
    "he": "שלום עולם",          // Hebrew
    "ur": "ہیلو دنیا",           // Urdu
    "vi": "Xin chào Thế Giới", // Vietnamese
}

// greet says hello to the world in the specified language
func greet(l language) string {
    greeting, ok := phrasebook[l]
    if !ok {
        return fmt.Sprintf("unsupported language: %q", l)
    }

    return greeting
}

We associate the greetings with all the languages as a phrasebook map
of {language, greeting} pairs. For this chapter, we use a global
variable that holds the greetings.

Listing 2.10 main.go: Defining the phrasebook map

// phrasebook holds the greeting for each supported language
var phrasebook = map[language]string{
    "el": "Χαίρετε Κόσμε",     // Greek
    "en": "Hello world",       // English
    "fr": "Bonjour le monde",  // French
    "he": "שלום עולם",          // Hebrew
    "ur": "ہیلو دنیا",           // Urdu
    "vi": "Xin chào Thế Giới", // Vietnamese
}

The next step is to use this map instead of the switch in the greet
function.

Listing 2.11 main.go: The greet method

// greet says hello to the world in the specified language
func greet(l language) string {
    greeting, ok := phrasebook[l]
    if !ok {
        return fmt.Sprintf("unsupported language: %q", l)
    }

    return greeting
}

Accessing an entry in a Go map returns two pieces of valuable
information: the value — in our case, the message associated with the
language key l — and a boolean (ok, by convention) informing us whether
the key was found. The syntax of assigning both returned values to two
different variables on a single line may be new to a programmer coming
from C — it doesn't exist there. This is something we do a lot in Go.

greeting, ok := phrasebook[l]

if !ok {
    return fmt.Sprintf("unsupported language: %q", l)
}

It is necessary to check the second value returned by the map access —
if the language wasn't supported, we would get the zero value of a
string, which is the empty string, with no knowledge of whether the map
actually had an entry for our language.

Note that in production-ready code, we would return an error instead,
because an empty string doesn't carry any meaning. We've chosen to keep
it simple for now. Errors will be covered in a later chapter.
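For the curious, here is a hypothetical sketch of what that
production-style version could look like (our own variation on the
chapter's greet function, not the book's code), returning an error
alongside the greeting:

// greet returns the greeting in the requested language, or an error
// if the language isn't in the phrasebook.
func greet(l language) (string, error) {
    greeting, ok := phrasebook[l]
    if !ok {
        return "", fmt.Errorf("unsupported language: %q", l)
    }

    return greeting, nil
}

With this signature, the main function would also need to check the
returned error before printing the greeting.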
Multiple return values: We will see many more occurrences of multiple
value assignment, mainly in four common cases (a combined sketch
follows this list):

Whenever we want to know whether a key is present in a map, as we do
here, where we retrieve both the value and the information of the
presence of the key in the map (as we did in this piece of code);
Whenever we use the range keyword, which lets us iterate over all the
key-value pairs of a map, or all the index-value pairs of a slice or an
array (an example appears in the next version of the test file, and
more in the next chapter);
Whenever we read from a channel with the <- operator, which returns a
value and whether the channel is closed (examples can be found in later
chapters);
Finally, the most frequent case is when we retrieve the several values
returned by a single function. This book will have plenty of
occurrences of this case, mostly due to Go's handling of errors.
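Here is a small, hypothetical sketch of ours putting the four forms
side by side (the lookup helper is made up for the illustration):

package main

import "fmt"

// lookup is a hypothetical helper returning two values: a result and an error.
func lookup(key string) (string, error) {
    if key == "" {
        return "", fmt.Errorf("empty key")
    }
    return "value for " + key, nil
}

func main() {
    scores := map[string]int{"go": 10}

    // 1. Map access: a value plus a boolean reporting the key's presence.
    score, ok := scores["go"]
    fmt.Println(score, ok)

    // 2. Range over a map: each iteration yields a key and a value.
    for name, value := range scores {
        fmt.Println(name, value)
    }

    // 3. Channel receive: a value plus whether the channel still has data.
    ch := make(chan int, 1)
    ch <- 42
    close(ch)
    v, open := <-ch  // 42, true: a buffered value was still available
    w, open2 := <-ch // 0, false: the channel is closed and drained
    fmt.Println(v, open, w, open2)

    // 4. Function call: several return values, typically a result and an error.
    result, err := lookup("go")
    fmt.Println(result, err)
}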

2.3.2 Writing a table-driven test

Our previous tests were linear — they tested every language in a
sequential way. Taking a step back, we notice that each test checks
that, for a given input language, the greet function call returns an
expected greeting. This can be summed up in the following snippet,
which is executed whether the language is "en", "fr", or "akk", as in
our previous example:

Listing 2.12 main_internal_test.go: Calling greet and checking

got := greet(language(lang))
if got != want {
    t.Errorf("expected: %q, got: %q", want, got)
}

There is no point in writing this snippet again every time we want to
check that we properly support a new language. Isn't such a test always
the same? Do we really need to add another ten lines to our test file
when only two of those lines change? This isn't sustainable, and it is
also our motivation for using maps in the body of our tests! We can use
table-driven tests to increase the reusability and clarity of our test
file, and they have the nice side effect of shrinking it a lot! Let's
have a look at the new test before we explain it.
Listing 2.13 main_internal_test.go: Table-driven test

func TestGreet(t *testing.T) {

	type testCase struct {
		lang language
		want string
	}

	var tests = map[string]testCase{ #A
		"English": {
			lang: "en",
			want: "Hello world",
		},
		"French": {
			lang: "fr",
			want: "Bonjour le monde",
		},
		"Akkadian, not supported": {
			lang: "akk",
			want: `unsupported language: "akk"`,
		},
		"Greek": {
			lang: "el",
			want: "Χαίρετε Κόσμε",
		},
		"Hebrew": {
			lang: "he",
			want: "שלום עולם",
		},
		"Urdu": {
			lang: "ur",
			want: "ہیلو دنیا",
		},
		"Vietnamese": {
			lang: "vi",
			want: "Xin chào Thế Giới",
		},
		"Empty": {
			lang: "",
			want: `unsupported language: ""`,
		},
	}

	// more cases covering all the scenarios

	for name, tc := range tests {
		t.Run(name, func(t *testing.T) {
			got := greet(tc.lang) #B

			if got != tc.want { #C
				t.Errorf("expected: %q, got: %q", tc.want, got)
			}
		})
	}
}

As we saw earlier, every test we want to run needs two values: the language of the wished-for message, and the expected greeting message that should be returned by the greet function. For this, we introduce a new struct that holds the input language and the expected greeting. Structs are Go's way of gathering data types together into a meaningful entity - in our case, testCase, since the struct represents a test case. We only need the struct to be accessible inside the TestGreet test function (and nowhere else), so let's define it there.

type testCase struct {
	lang language
	want string
}

This makes writing a test for one more pair of language and greeting even simpler.
Now that we can easily write a single test case, let's see how to write lots of them. In Go, the common way to write a list of test cases is to use a map, where the key referring to each test case is a specific description. The description should make clear what that case tests.

Now we have everything we need to write a list of test cases.

Listing 2.14 main_internal_test.go: Test case definitions

var tests = map[string]testCase{
	"English": {
		lang: "en",
		want: "Hello world",
	},
	"French": {
		lang: "fr",
		want: "Bonjour le monde",
	},
}

To check all the scenarios, we can iterate over the tests map. As we will see in more detail in the next step, the for + range syntax returns the key and the value of each entry of the map. We then pass the name as the first parameter of the Run method from the testing package, which makes test failures much easier to handle: if a test case fails, the tooling will give you its name, so you can find it and fix it. Also, most code editors let you run a single test case if you use this syntax. Remember that this map associates a description to a test case, hence the name of the tc variable.
Listing 2.15 main_internal_test.go: Execution and assertion phases

for name, tc := range tests {

	t.Run(name, func(t *testing.T) {
		got := greet(tc.lang) #A

		if got != tc.want { #B
			t.Errorf("expected: %q, got: %q", tc.want, got)
		}
	})
}

Since the call to the greet function is the same regardless of the input language, creating a new test case only requires adding an entry to the tests map:

Listing 2.16 main_internal_test.go: Test cases

var tests = map[string]testCase{
	"English": {...},
	"French": {
		lang: "fr",
		want: "Bonjour le monde",
	},
	"Akkadian, not supported": {
		lang: "akk",
		want: `unsupported language: "akk"`,
	},
	// add new scenario descriptions here!
}

Quotes in Go

You may have noticed that we used a different set of quotes in the expected greeting for Akkadian (akk). There are three kinds of quotes used in Go, each in its own context:

· The double quote ": it is used to create literal strings. Example: s := "Hello world"

· The backtick `: it is also used to create raw literal strings. Example: s := `Hello world`

· The single quote ': it is used to create runes. Example: r := '學'. A rune is a single Unicode code point.

You may have noticed that the first two options can both be used to create literal strings. The difference between raw literal strings and non-raw literal strings is that, in a raw literal string, there are no escape sequences. Writing a \n in a raw literal string results in a backslash character \ followed by the letter n when the string is printed. Raw literal strings are a nice way of not having to deal with escaping double quotes, which is very handy when it comes to writing JSON payloads.
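Here is a small standalone illustration of the three kinds of quotes (our own sketch, not one of the chapter's listings):

package main

import "fmt"

func main() {
	interpreted := "line one\nline two" // \n is an escape sequence: a real line break
	raw := `line one\nline two`         // raw string: a backslash and the letter n, verbatim
	r := '學'                            // a rune: a single Unicode code point

	fmt.Println(interpreted)
	fmt.Println(raw)
	fmt.Printf("%c has code point %U\n", r, r)

	// Raw strings are handy for JSON payloads: no need to escape the double quotes.
	payload := `{"name": "Nergüi"}`
	fmt.Println(payload)
}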
We now have a program that can return a greeting in whichever language the user wants, but the only way to change that language, so far, is to change the code of the program - which isn't great! We want the user to be able to tell us their language of choice without changing the code on every request. Since the user runs the program from the command line, either with go run main.go or by executing the compiled binary, the most natural place for that information is an argument sent along with the command.
2.4 Using the flag package to read the user's language

How can we use that input to get the user's desired language for the greeting? Go's standard library supports command-line argument parsing with both the os and the flag packages. The former is very close to C's handling of arguments - you can access them by their position on the line, but whether they are of the form -key=value, -key value, or -option is left to the developer to handle, and it is painful if you have repeated fields. Oh, and it only parses them; after that, we have to convert them to their proper types.

On the other hand, the flag package offers support for a variety of types - integers, floats, durations, strings, and booleans. Let's roll with this one! A short sketch of those types follows this list.

Use the flag (https://pkg.go.dev/flag) standard package to read command-line arguments
Call flag.Parse to retrieve the values
Play with the program's flag arguments and check the output
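As a quick illustration of the variety of supported types, here is a standalone sketch (the flag names are made up for the example, not part of the project):

package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	// Each call registers a flag and returns a pointer to its value.
	count := flag.Int("count", 1, "how many greetings to print")
	verbose := flag.Bool("verbose", false, "print extra information")
	pause := flag.Duration("pause", time.Second, "pause between greetings")
	name := flag.String("name", "world", "who to greet")

	// Nothing is read from the command line until Parse is called.
	flag.Parse()

	fmt.Println(*count, *verbose, *pause, *name)
}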

2.4.1 Adding a flag

The first thing we need to do, when it comes to exposing a parameter of our executable on the command line, is to give it a nice, short name. Here, we offer the user a choice of language, so lang is a pretty obvious choice.

Let's look at the updated code of the main.go file:

Listing 2.17 main.go: Using a flag

package main

import (
	"flag"
	"fmt"
)

func main() {

	var lang string
	flag.StringVar(&lang, "lang", "en", "The required language, e.g. en, ur...")
	flag.Parse()

	greeting := greet(language(lang))

	fmt.Println(greeting)
}

// language represents a language.
type language string

// phrasebook holds the greeting for each supported language.
var phrasebook = map[language]string{
	"el": "Χαίρετε Κόσμε",     // Greek
	"en": "Hello world",       // English
	"fr": "Bonjour le monde",  // French
	"he": "שלום עולם",          // Hebrew
	"ur": "ہیلو دنیا",          // Urdu
	"vi": "Xin chào Thế Giới", // Vietnamese
}

// greet says hello to the world in the given language.
func greet(l language) string {
	greeting, ok := phrasebook[l]
	if !ok {
		return fmt.Sprintf("unsupported language: %q", l)
	}

	return greeting
}

The goal of this section is to read a flag from the command line, which means we need to import the flag package.

import (
	"flag"
	"fmt"
)

Now that we've imported the package, let's use it. We want to read, from the command line, the name of the language in which the user expects their greeting. In our code, the type that holds the language is language, whose closest basic type is a string.

The flag package offers two very similar functions to read a string from the command line. The first one requires a pointer to the variable it will fill in.

var lang string

flag.StringVar(&lang, "lang", "en", "The required language, e.g. en, ur...")

The second one creates the pointer and returns it:

lang := flag.String("lang", "en", "The required language, e.g. en, ur...")

For this example, we'll use the first one, mostly because it lets us introduce the & operator.

On the first line, we declare a variable of type string. That variable will hold the value provided by the user. Let's look at the syntax and at the different parameters of the StringVar call:

First, we pass the function the address of our string. Second, we pass the name of the option, as it will appear on the command line. Third, we provide the default value for this variable. The default value is used if the user doesn't provide the flag when calling the program. Finally, we write a description of what this flag represents, with some example values. This topic is covered in more depth in Appendix E and used in later chapters.

During the execution of a program, variables are stored in memory at a specific address. We can retrieve the address of a variable with the address operator &, used on a variable. Similarly, when we have a pointer and we want to access the value it points to, we can retrieve it with the indirection operator *, used on the pointer.

In Go, when we call a function, the arguments are passed as copies. This means that if we want to allow a function to modify one of our variables, the simplest way is to give the function a pointer to our variable.

Finally, pointers don't support arithmetic. If you have a pointer to the first element of an array, it can't be used to access the second element.
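A tiny standalone sketch of the two operators (our own example):

package main

import "fmt"

func main() {
	lang := "en"
	p := &lang      // & takes the address of the variable
	fmt.Println(*p) // * follows the pointer back to the value: "en"

	setDefault(&lang) // the function receives a copy of the pointer, not of the string
	fmt.Println(lang) // "el": the function modified our variable through the pointer
}

// setDefault writes through the pointer it receives.
func setDefault(l *string) {
	*l = "el"
}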

It is important to remember that whenever we use the flag package, none of StringVar, IntVar, UintVar, and friends actually scans the command line and extracts the values of the parameters. The function that does the trick of parsing the command line is flag.Parse. It scans the input parameters and fills in each variable we told it would be a recipient.

After the call to Parse, the lang variable holds the value passed by the user, and the rest of the code is the same as before. Note that this conversion to the language type is acceptable in this context, but in production code, this would be a perfect place to add a validation of the value against a list of supported values, or at least a validation of the format (in our case, a string of two characters).
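As a hedged illustration of what such a validation could look like, here is a small fragment meant to sit next to the code of Listing 2.17 (the isSupported helper is our own invention, not part of the chapter):

// isSupported reports whether the phrasebook has a greeting for this language.
func isSupported(l language) bool {
	_, ok := phrasebook[l]
	return ok
}

// In main, after flag.Parse(), we could then reject unknown values early
// (this would also require importing "os"):
//
//	if !isSupported(language(lang)) {
//		fmt.Printf("unsupported language: %q\n", lang)
//		os.Exit(1)
//	}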
2.4.2 Testing the command-line interface (B)

Now that we're done with the code, it is time to run some end-user tests. For this, we'll simulate calls from the command line. We have several options to make sure this works as expected. The first one on our list is to simply try it out! After all, we've spent a good deal of effort making sure this behaves as we want, so some peace of mind is deserved - time to give those neurons a rest. We can pass parameters on the command line when running go run main.go. Here is an example of a run of main in Greek:

> go run main.go -lang=el
Χαίρετε Κόσμε
2.5 Exercises

Here is a series of exercises you can do:

1. Start the program with Urdu as the flag parameter
2. Start the program without a language
3. Then, remember to test every possible scenario. The user could ask for a language our program doesn't know. Start the program with an unsupported language, for example Akkadian: akk
4. Add support for a language of your choice

This concludes our first project. We hope you enjoyed it and learned some practical information about Go.

2.6 Summary

Now that you've completed this chapter, let's look back at everything it covered:

go run can be used to run a program.
Write tests as the code grows, not after.
go test runs the tests.
Tests in Go have naming conventions.
Table-driven tests are the best approach when we want to tinker with inputs or settings.
The testing package contains all the tools needed to run tests.
Examples can be used to test what a function writes to the standard output.
There are four phases in a test: the preparation, execution, decision, and teardown phases.
The flag package allows parsing command-line parameters.
Key-value pairs are stored in maps.
Accessing a map returns the value (if found), and whether it was found.
Defining self-describing types rather than using basic types makes the code more understandable.
Use if when only two options are possible. Otherwise, use switch or a map.
3 A bookworm's digest: playing with loops and maps

This chapter covers

Iterating over slices and maps
Using a map to store unique values
Learning how to open and read a file
Decoding a JSON file
Sorting a slice with a custom comparator

Since the invention of writing, people have used tools to carve their thoughts through the centuries. Books have carried knowledge and become a passion. We read them and collect them on shelves. With technology, we can share opinions more than ever, and give our opinion on everything, including books. In this chapter, we join a group of bookworms who love reading. Fadi and Peggy have started registering the books they keep on their bookshelves, and they wonder whether we could help them find the books they have both read and, perhaps, suggest future readings.

In this chapter, we consolidate what we learned about command-line interfaces in chapter 2 by creating a book digest from the bookworms' book collections. Step by step, starting from the list of books each of them has read, we will build a program that returns and prints the books found on more than one shelf. As a bonus, we will practise with maps and slices differently to create a book recommendation tool. Our input is a JSON file, and we will learn how to read a file in Go and how to parse JSON using the standard library. For the sake of simplicity here, we will give each book only one author. How ironic, you will say.

Write a CLI tool that takes a list of bookworms and the books of their collections in the form of a JSON file
Find the common books on their shelves
Print the common books on the bookworms' shelves to the standard output
As a bonus, recommend books to each bookworm based on matching their books with the other bookworms'

Limitations

We assume each book has only one author
The input JSON file will not exceed a size we can comfortably load at once
3.1 Loading the JSON data

A new chapter, a new project, a new directory. Let's launch the command to initialise the module we will be working on and call it bookworms:

go mod init learngo-pockets/bookworms

As a good practice, we recommend creating a new main.go file with a simple empty main function. It is a standard first step and we will do it throughout the chapters.

package main

func main() {
	// will be completed along the way
}

In this section, we will create the input JSON file and load the data it contains.

3.1.1 Defining a JSON example

Let's look at some example input data. It is a list of people who have a name and their books. Each book has one author and one title.

A few words about the JSON format

JavaScript Object Notation, widely known as JSON, is a file format that stores data using "key": value pairs. JSON keys are always strings, enclosed in double quotes, while values can be any of the following:

· Numbers (no enclosing characters): 4, 3.1415, 1e12

· Strings (enclosed in quotes): "Hello", "1789" (different from the number 1789)

· Arrays, enclosed in square brackets: [1,2.5,-10] is an array of numbers

· Booleans (no enclosing characters): true, false

· Objects, enclosed in curly braces: {"name":"Nergüi"}

The fields of a JSON object have no particular order: in the example below, we could have the author appear before or after the title, and the records would be the same. Arrays are ordered: swapping the first and second elements would change the meaning.

We can now write a sample bookworms file.

Listing 3.1 testdata/bookworms.json: Example of an input file

[
  {
    "name": "Fadi",
    "books": [
      {
        "author": "Margaret Atwood",
        "title": "The Handmaid's Tale"
      },
      {
        "author": "Sylvia Plath",
        "title": "The Bell Jar"
      }
    ]
  },
  {
    "name": "Peggy",
    "books": [
      {
        "author": "Margaret Atwood",
        "title": "Oryx and Crake"
      },
      {
        "author": "Margaret Atwood",
        "title": "The Handmaid's Tale"
      },
      {
        "author": "Charlotte Brontë",
        "title": "Jane Eyre"
      }
    ]
  }
]

Simple enough for now.

There is a convention in Go that any directory named testdata should hold, as you might have guessed, data for tests. To quote the go tooling documentation, the go tool will ignore a directory named "testdata", making it available to hold ancillary data needed by the tests. Linters and other static code analysis tools should also ignore it.

Create a file named bookworms.json inside a testdata directory, with some data of your own and a selection of your favourite books. Or you can go to our repository and copy our version.

The first step for reading this data is to open the file and get access to its content through a file descriptor. The second is to parse the JSON.

3.1.2 Opening a file

Because we don't like getting lost in files that are too long, we've chosen to split the logic of the project into two files: main.go, which knows it runs in a terminal, and bookworms.go, which holds the business logic and could be reused in a different setting. Don't think about it yet.

At this point, the tree of the directory should look as follows:

> tree
.
├── bookworms.go
├── go.mod
├── main.go
└── testdata
    └── bookworms.json

Loading the data will be the purpose of a new function that we can call loadBookworms. It takes the file path as a parameter and returns a slice of Bookworm representations of the JSON documents. If anything wrong happens (file not found, invalid JSON...), it can also return an error. Don't forget to give it a docstring.
Listing 3.2 bookworms.go: loadBookworms signature

// loadBookworms reads the file and returns the list of bookworms, and their beloved books, found therein.
func loadBookworms(filePath string) ([]Bookworm, error) {
	return nil, nil #A
}

We have already talked about zero values, which you can look up in Appendix C. In our case, nil is the zero value of the slice of Bookworm, and nil is also the zero value of the error interface. That is why loadBookworms returns nil and nil for now.
Go provides a platform-independent package for operating system functionality: os. Here is a quote from its documentation: The design is Unix-like, although the error handling is Go-like; failing calls return values of type error rather than error numbers.

Inside the os package there is a type, os.File, that offers ways to open a file for reading and writing, change the permissions of a file, create a new file, and many other system operations you can perform on a file. The full list can be retrieved with go doc os.File. The simplest function to open our file is os.Open. We'll give it the path to the file as our filePath parameter, and it will return a pointer to a File, which is a file descriptor, or an error. The documentation is kind enough to tell us that the descriptor is in read-only mode and that the returned error is of type *PathError.

The differences between os.Open, os.Create, and os.OpenFile

As we can see in the documentation of the os package, quite a few functions return a file descriptor, and each has its best use. Let's have a look at when os.OpenFile should be used to open a file rather than os.Create or os.Open.

Create creates a file with both read and write permissions (but not execute) for all users (0666). If the file already exists, Create truncates it, sending its content to oblivion. When Create succeeds, the returned file descriptor can be used to write data to the file.

Open opens the named file for reading only.

OpenFile is the generic approach, letting the user decide whether they want to open a file for writing or reading. Most of the time, you won't need it - a call to Open or Create should do the trick. There are two very specific cases, however, in which it is useful. The first case is when we want to append data to a file without discarding its content. Open won't do here - the *File would be read-only - and neither will Create - the content of the file would be erased. The second parameter of the OpenFile function is a flag that controls how we open the file. The full list can be found with go doc os. The flags are constants of the os.O_APPEND flavour and are meant to be combined; when appending, use os.O_APPEND, os.O_CREATE, and os.O_WRONLY together. The second case is when we want to create a file whose permissions aren't the defaults of Create. OpenFile is the only one offering the possibility to set specific access permissions for the file, through its last parameter.

If there is an error, for all three methods, it will be of type *os.PathError.

Note that most of the time, we'll use Open and Create.

Constants in the os package are in capitals, since they are part of the operating system standard. Otherwise, Go prefers constants defined in PascalCase like everything else.
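Here is a small standalone sketch of the append case (the file name app.log is made up for the example):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Open (or create) a log file for appending; existing content is preserved.
	f, err := os.OpenFile("app.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if _, err := f.WriteString("one more line\n"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}

	// Release the file descriptor once we are done writing.
	if err := f.Close(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}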

Defer

When you are done performing I/O operations with a *File, you must close it by using the Close method of the file. This way, the system resources used by the file are released by the operating system and you don't create leaks with your program. If you don't close the descriptors, you could exhaust all the available file handles of the system, or keep the file locked, which has some intricate side effects on Windows, where you could end up blocking yourself from writing to it or deleting it. In theory, Go's garbage collector should close the file at some point (no later than when your program exits), but it is better to know when and how the file descriptors get closed. In other words, be polite and clean up after yourself.

How do we know when we are done with the operations? Usually, it is by the end of the function, and that's where we would place the call to Close(). But imagine a day when you have to restructure the function, and you move a lot of code around - what if you accidentally drop the call to Close, or if you return before it is called? The best way to prevent it from getting lost in the rest of the function is to keep it right next to the Open or Create call.

defer is a keyword that postpones a statement's execution to the very end of the function, even if defer appears at its beginning. The important point is that, whichever way a function returns, every defer statement it has gone through will be executed. When the function returns, its deferred calls are executed in last-in, first-out order.

Let's see a simple example:

Table 3.1 Program execution with and without defer
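The table itself isn't reproduced here; a minimal sketch of the difference it illustrates (our own example, not the book's):

package main

import "fmt"

func withoutDefer() {
	fmt.Println("open resource")
	fmt.Println("work")
	fmt.Println("close resource") // easy to lose if the function grows or returns earlier
}

func withDefer() {
	fmt.Println("open resource")
	defer fmt.Println("close resource") // registered here, executed when the function returns
	fmt.Println("work")
}

func main() {
	withoutDefer() // open resource, work, close resource
	withDefer()    // open resource, work, close resource
}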
In the case of the os operations, the defer statement goes right after checking the error returned by Open. If you see a call to Open in some code, you should see Close in the same code block; the two lines only make sense together.

The defer keyword is mostly used to close files, database connections, buffered readers, etc. Sometimes, you'll find defer useful to measure the execution time of a function. One of the Go libraries for Kafka event management relies on a deferred Close() because the server needs a graceful disconnection to prevent it from endlessly trying to send messages to the connected client.
Let's get back to our code. We want to open a file for reading. This can return an error that we have to deal with: in this case, we'll simply return it to the caller. Once this is done, we know that we have a valid file descriptor, so we need to close it.

Listing 3.3 bookworms.go: Opening a file

f, err := os.Open(filePath) #A
if err != nil { #B
	return nil, err
}
defer f.Close() #C

io.Reader and io.Closer are common interfaces in Go, and os.File implements both of them. io.Reader allows reading a stream of data into a slice of bytes.
3.1.3 Parsing the JSON

To parse JSON data we'll use the encoding/json package of Go's standard library, which supports a good list of other encodings besides JSON, such as XML and base64. We will see more details about parsing in Chapter 6, Money Converter, when we start decoding the response from an HTTP call.

The structure of the packages of Go's standard library doesn't represent a tree of dependencies, but rather of domains. Anything to do with networking will be in the net package, or in a package nested inside net. Here, the encoding package is very light - it only defines four interfaces - and we don't have to import it to make use of the contents of encoding/json.

Defining structures relevant to the JSON file

The general idea is that the Go structures used for decoding must match the JSON structure. Here we have a list of people whom we'll call Bookworms, and each of them has a name and Books. To tell Go that a JSON field corresponds to a field of our Go struct, we use tags, which are wrapped in backticks. The tag carries the name of the encoding standard (json), followed by the name of the field.

The type of Books here is called a slice. More on that very soon; let's focus on the JSON first.

Listing 3.4 bookworms.go: Bookworm and Book structures

// Bookworm contains the list of books on a bookworm's shelf.
type Bookworm struct {
	Name  string `json:"name"`  #A
	Books []Book `json:"books"` #B
}

// Book describes a book on a bookworm's shelf.
type Book struct {
	Author string `json:"author"`
	Title  string `json:"title"`
}

Look at the json tags. Each Go field is tagged with the name of the JSON field. Note that the name of the Go field doesn't have to match the name of the tag. It is more of a convention, and it is more readable. Here is another Go convention: fields that are slices should be named with a plural word.

Finally, and this is the most important part of the chapter about tags: the decoder that we are about to write needs to write to the fields of our structure. For this, it needs to be able to "see" them, which means these fields have to be exported. Many an hour of debugging has been spent trying to understand why a field was always empty.
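A quick standalone sketch of that pitfall (our own example, not one of the chapter's listings): the unexported field is silently left empty.

package main

import (
	"encoding/json"
	"fmt"
)

type Book struct {
	Author string `json:"author"`
	title  string `json:"title"` // unexported: the decoder cannot see it
}

func main() {
	var b Book
	_ = json.Unmarshal([]byte(`{"author": "Sylvia Plath", "title": "The Bell Jar"}`), &b)
	fmt.Printf("%+v\n", b) // {Author:Sylvia Plath title:} - the title stays empty
}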

Decoding the JSON file into a structure

Once the file is opened and fully loaded, we can define a variable that will hold the information. This variable has to be a slice of Bookworms, because this is what the interface tells us. Then we pass a pointer to that variable to the decoder so that it can fill it.

Remember, the alternative to passing a pointer is to pass a copy. The decoder would then fill that copy and throw it away to the garbage collector, and we would be left empty-handed.

Listing 3.5 bookworms.go: JSON decoding in loadBookworms()

var bookworms []Bookworm #A

// Decode the file and store the content in the variable bookworms.
err = json.NewDecoder(f). #B
	Decode(&bookworms) #C
if err != nil { #D
	return nil, err
}

Notice how we create and use a decoder on a single line. We can do this because NewDecoder has only one return value, the decoder (and no error).

Since we don't use the decoder returned by NewDecoder anywhere else, it is common practice to avoid declaring a variable for it, unless something else mandates it (line length or readability, for example). Instead, we just use it by calling Decode.

There are a few more subtle ways to decode large JSON inputs or files, for example via a streaming mechanism that avoids loading the whole content of the file. You can look at them, if you are curious, in the Refinements section at the end of this chapter (3.5.2), but for this project we trust that your test file won't contain more than a few bookworms.
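For the curious, here is a rough sketch of such a streaming decode (our own fragment, written against the Bookworm type above, assuming the same JSON layout and that io and encoding/json are imported; it is not the refinement from section 3.5.2):

// streamBookworms decodes the JSON array element by element, so the whole
// file never has to be held by a single Decode call.
func streamBookworms(r io.Reader) ([]Bookworm, error) {
	dec := json.NewDecoder(r)

	// Consume the opening bracket of the array.
	if _, err := dec.Token(); err != nil {
		return nil, err
	}

	var bookworms []Bookworm
	for dec.More() {
		var b Bookworm
		if err := dec.Decode(&b); err != nil {
			return nil, err
		}
		bookworms = append(bookworms, b)
	}

	return bookworms, nil
}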

Sau đó, chúng ta chỉ cần để trả lại con mọt sách đã được giải mã. Các
chức năng hoàn thành nên nhìn một cái gì đó như thế này:

Danh sách bằng 3,6 con mọt sách.đi: loadBookworms() mở ra và giải mã Hệt tập tin

// loadBookworms đọc các tập tin và trở lại danh sách của con mọt sách, và họ yêu sách, tìm thấy trong
đó.
chức năng loadBookworms(filePath chuỗi) ([]Máu lỗi) {
f, err := hệ điều hành.Mở(filePath) #A
nếu err != nil {
trở lại nil, err
}
hoãn f.Gần() #B

// Khởi tạo các loại trong đó các tập tin sẽ được giải mã.
var con mọt sách []con mọt sách

// Giải mã các tập tin và cửa hàng nội dung trong biến con mọt sách.
err = hệt.NewDecoder(f).Giải mã(&smith) #C
nếu err != nil {
trở lại nil, err
}

trả lại con mọt sách, nil


}

Để thực hiện điều này cả tập tin biên dịch, bạn cần phải nhập hệ điều
và hành
mã hóa/hệt gói. Nếu bạn đang sử dụng một đủ thông minh, biên tập nó có thể
đã làm nó cho anh.

Right before we write a test, as an early bonus, we can manually check our loadBookworms function by having our main function do a simple print. Call it with the path of the JSON file as a parameter and print the result.

Listing 3.7 main.go: Calling loadBookworms() in main()

package main

import (
	"fmt"
	"os"
)

func main() {

	bookworms, err := loadBookworms("testdata/bookworms.json") #A
	if err != nil {
		_, _ = fmt.Fprintf(os.Stderr, "failed to load bookworms: %s\n", err) #B
		os.Exit(1) #C
	}

	fmt.Println(bookworms) #D
}

To run it, you can't use go run main.go any more. Well, you can try, but you'll see Go getting angry at you for calling a function that doesn't exist. This is because the main function makes a call to loadBookworms, a function that isn't declared in the main.go file, nor in any of the packages that main.go imports. Indeed, go run won't look at files that aren't imported by main.go (why would it?). As a result, Go programs usually have a single file in the main package - the main.go one. Alternatively, go run can run a package or a directory, rather than a single file. In that case, all the files in that package or directory are used for the execution, and, in our case, Go won't be angry with us any more:

$ go run .

The output is gibberish, but you can recognise the structure of the slice of bookworms:

[{Fadi [{Margaret Atwood The Handmaid's Tale} {Sylvia Plath The Bell Jar}]} {Peggy [{Margaret Atwood Oryx and Crake} {Margaret Atwood The Handmaid's Tale} {Charlotte Brontë Jane Eyre}]}]

Not the best UI, but enough for debugging.

3.1.4 Testing it

How do we make sure that this keeps working after future changes? Running a command and checking the output, trying to see whether the curly braces are in the right position, isn't sustainable.

Let's write a test for this function. The testdata directory is the perfect place to keep the different JSON files for our various test cases. We are testing a function from the bookworms.go file, and for this reason we'll call our test file bookworms_internal_test.go and write a test for loadBookworms named TestLoadBookworms.

The first step is to identify the parameters needed and the values returned by the function. We'll need the path of a file in testdata and the expected result, which is a slice of Bookworm, and because we also test the unhappy path, we'll add whether we expect an error. For this chapter, we won't verify the exact type of the error, only its presence, with a boolean.

Each test case could be the purpose of a different function, but this tactic rarely scales. Instead, we use a map, whose keys are the names of the tests, for humans to understand what we want to test, and whose values are a struct with all the values specific to our test case. More on the map right after the test.

type testCase struct {
	bookwormsFile string
	want          []Bookworm
	wantErr       bool
}
tests := map[string]testCase{
}

To keep the writing of the []Bookworm in the test cases minimal, we can define the books as global variables and reuse them across the different tests. Global variables are usually not a good idea in code, but, in test files, they are an acceptable solution, especially because the *_test.go files can't be accessed from outside the package, even if you accidentally name your variables with a capital letter.

var (
	handmaidsTale = Book{Author: "Margaret Atwood", Title: "The Handmaid's Tale"}
	oryxAndCrake  = Book{Author: "Margaret Atwood", Title: "Oryx and Crake"}
)

Now let's write the test for the successful use case. We'll need a JSON file, or we can reuse the existing one - as you prefer. Or you can pick your own bookworms, with their names and their lists of books, from our repository; we won't complain. We fill the expected result with the list of books of each bookworm. Here is an example:

"file exists": {
	bookwormsFile: "testdata/bookworms.json",
	want: []Bookworm{
		{Name: "Fadi", Books: []Book{handmaidsTale, theBellJar}},
		{Name: "Peggy", Books: []Book{oryxAndCrake, handmaidsTale, janeEyre}},
	},
	wantErr: false,
},

We can identify at least two error cases: first, if the file path doesn't exist, and second, if the format of the file is invalid.

Let's invent a file path that doesn't exist. Can you figure out on your own what the behaviour of loadBookworms would be?

"file doesn't exist": {
	bookwormsFile: "testdata/no_file_here.json",
	want:          nil,
	wantErr:       true,
},

As you can see, the expected result is nil, as we expect an error, since opening the file will fail early in the process, and we want to return an error.

The second unhappy path we could face is when the file is in an invalid format - a JSON document that doesn't respect the format, missing a bracket or a comma, for example. Once again, in our test we want to check the presence of the returned error. Create a file in the testdata directory that has some invalid format and write the corresponding test case. In our repository, the file was truncated, and is therefore missing the closing }.

"invalid JSON": {
	bookwormsFile: "testdata/invalid.json",
	want:          nil,
	wantErr:       true,
},

Easy, right? We brushed over it in the previous chapter when writing table-driven tests, but to properly loop over the map, you'll need to know more about loops.

Using loops in Go

All the iteration syntaxes in Go use the for keyword. All of them. Other languages might use while, do ... while, foreach, etc. Let's look at a few examples.

First, the classic for. Nothing out of the ordinary here. Counting from one number to another number we know.

for i := 0; i < 5; i++ {
for i := 0; i < arrayLength; i++ {
for i := firstIndex; i < limit; i++ {

As a side note, Go differs from some languages with the postfix operators ++ and --. In Go, i++ means "increment i by 1 and store that into i". Where Go differs from languages such as C or Java is that in this syntax, i++ is not a value and can't be compared to anything. We can't write i++ < 5 or fmt.Println(i++) in Go. This also means Go doesn't have a prefix operator - we can't write ++i, meaning "increment the value of i, and return the incremented value".

Next, the boolean expression, called while in some languages.

for iterator.Next() {
for line != lastLine {
for !gotResponse || response.invalid() {

Any boolean expression is valid in this place; just make sure you don't end up in an infinite loop.

If you do need an infinite loop on purpose, there is a way:

for {

Goes on forever. Usually, those contain either a return, a break, or something that exits the program altogether.

Finally, when we need to iterate over the items of an array, a slice, a map, or a channel, for can be combined with the range keyword. In the case of a slice, for example the list of bookworms we want to read from the file, range returns the index and a copy of the value at this index.

for i, bookworm := range bookworms {

At each iteration here, i will increase from 0 up to len(bookworms)-1, and bookworm is the same as bookworms[i]. The main difference is that bookworm is a copy, so if you modify it, there will be no change in the contents of the slice itself, which can be a good or a bad thing, depending on what you expect.

In the case of a map, as in our test, range simply returns a copy of the key and a copy of the value.

for name, testCase := range tests {

Before Go 1.22, the copies were made into the same variables at every iteration, leading to a few surprises when the variables were used in concurrent ways inside the loop.

If you don't need one of the two parameters, you can skip it in several ways. All the lines below are valid; there is no difference for the machine between the 2nd and 3rd versions.

for _, bookworm := range bookworms
for i, _ := range bookworms
for i := range bookworms

Take the time to write the full test on your own, then compare your solution to the one below.
Listing 3.8 bookworms_internal_test.go: Testing loadBookworms()

package main

import (
	"testing"
)

var (
	handmaidsTale = Book{Author: "Margaret Atwood", Title: "The Handmaid's Tale"}
	oryxAndCrake  = Book{Author: "Margaret Atwood", Title: "Oryx and Crake"}
	theBellJar    = Book{Author: "Sylvia Plath", Title: "The Bell Jar"}
	janeEyre      = Book{Author: "Charlotte Brontë", Title: "Jane Eyre"}
)

func TestLoadBookworms_Success(t *testing.T) {

	tests := map[string]struct {
		bookwormsFile string
		want          []Bookworm
		wantErr       bool
	}{
		"file exists": {
			bookwormsFile: "testdata/bookworms.json",
			want: []Bookworm{
				{Name: "Fadi", Books: []Book{handmaidsTale, theBellJar}},
				{Name: "Peggy", Books: []Book{oryxAndCrake, handmaidsTale, janeEyre}},
			},
			wantErr: false,
		},
		"file doesn't exist": {...},
		"invalid JSON":       {...},
	}
	for name, testCase := range tests {
		t.Run(name, func(t *testing.T) {
			got, err := loadBookworms(testCase.bookwormsFile)
			if err != nil && !testCase.wantErr { #A
				t.Fatalf("expected no error, got one: %s", err.Error())
			}

			if err == nil && testCase.wantErr { #B
				t.Fatalf("expected an error, got none")
			}

			if !equalBookworms(t, got, testCase.want) { #C
				t.Fatalf("different result: got %v, expected %v", got, testCase.want)
			}
		})
	}
}

What did you use to compare the expected bookworms and the returned ones?

The simple answer would be to write an equal function to compare the contents of two lists of bookworms. We'll name that function equalBookworms; let's see what it looks like in detail. First, the signature should take two lists of bookworms; we'll name one of them target, against which we compare what we got. Because it is a utility function whose line information we want the test output to skip, we mark it as a helper by adding t.Helper() at the beginning of our helper function. To do so, we need to pass *testing.T as a parameter as well. We'll do the same for all the helpers in this chapter.

The content of the function goes over the bookworms and compares each field: first the Name, which is quite straightforward, and then the Books.

Listing 3.9 bookworms_internal_test.go: Helper to compare bookworms

// equalBookworms is a helper to test the equality of two lists of bookworms.
func equalBookworms(t *testing.T, bookworms, target []Bookworm) bool {
	t.Helper()

	if len(bookworms) != len(target) {
		return false #A
	}

	for i := range bookworms {

		if bookworms[i].Name != target[i].Name { #B
			return false
		}

		if !equalBooks(t, bookworms[i].Books, target[i].Books) { #C
			return false
		}
	}

	return true #D
}

To compare the lists of books, we can write a subfunction, equalBooks, that encapsulates only the comparison of the lists of books, which makes it easier to read and to reuse.

Regarding the implementation, don't forget that we can exit early by comparing the lengths of the two lists. Then we can range over the books, compare the two lists, and return false if they differ.

Listing 3.10 bookworms_internal_test.go: Helper to compare Books

// equalBooks is a helper to test the equality of two lists of Books.
func equalBooks(t *testing.T, books, target []Book) bool {
	t.Helper()

	if len(books) != len(target) {
		return false #A
	}

	for i := range books {

		if books[i] != target[i] { #B
			return false
		}
	}

	return true #C
}

Another way to do it is to use the standard reflect package, which provides a simple but inelegant function to compare interfaces: reflect.DeepEqual, which we'll explore later in the book. It isn't recommended in production code, because it wasn't designed for performance, but in our case it would do the trick: less code to write is always a good thing.

if !reflect.DeepEqual(got, testCase.want) {
	t.Fatalf("different result: got %v, expected %v", got, testCase.want)
}

Now that we've read and parsed the input file into a Go structure, we can find the books that appear in more than one collection.
3.2 Finding common books

Remember that the whole purpose of our tool is to find the books that have been read by both Fadi and Peggy, or by other bookworms. In this section we'll go through all the bookworms' shelves, register the books we find there, and then filter on those that appear more than once.

We'll write a function for that: findCommonBooks. Let's write its signature first. It takes the data we have, which is a list of bookworms and their collections, and returns the books in common in the shape of a slice of Book.

Listing 3.11 bookworms.go: findCommonBooks() signature

// findCommonBooks returns books that are on more than one bookworm's shelf.
func findCommonBooks(bookworms []Bookworm) []Book {
	return nil
}

How do we know that a book appears several times on the shelves? Well, we need to count the occurrences of each registered book across all the bookworms' shelves.

But is that enough, really? What should we do if a single person has the same book more than once on their shelf? We authors had a conversation about that: does it ever happen? Who has several copies of the same book? It turns out one of us has the same novel series in three different languages. Another one has different editions of the same book. The third one is amazed.

Anyway, what do we do? Let's assume, for the time being, that each person has only one copy of each book. It will slightly simplify the algorithm.

3.2.1 Counting the books

How do we count all the books? We have access to a slice of bookworms, so we'll start there: look at each bookworm's collection, and "register" each book we find there. To "register" a book, for now, we can use a counter for the number of times that book has been seen on a shelf so far.

Maps in Go

Go provides several built-in collection types. Arrays, slices, maps and, to a lesser extent, channels are the core bricks allowing data collections. A map in Go is an unordered associative array that holds pairs of keys and values. Each key is associated with one value (but two keys can have the same value). Maps are Go's idiomatic way of creating collections of unique keys, as we'll see in this chapter. A map's keys can be anything that is "comparable". Think of it as "can we write key1 == key2?". Although, at first glance, it might be easy to think that everything is comparable in Go, the hard truth is that not everything is. Slices, maps, and function types are not, and this means that any structure that contains a slice, a map, or a function type isn't either. We'll have to face this soon enough.
Writing to a map

We'll store our data inside a map. In Go, associating a value v to a key k in a map is done with the following line:

myMap[k] = v

It is as simple as that.
Reading from a map

In the same way that retrieving an item from a slice at index 3 is done with square brackets, retrieving an item from a map at key 3 looks exactly the same:

var slice []string
...
v := slice[3]

mapping := map[int]string{}
...
v := mapping[3]

The only difference is that a map also returns a boolean, telling you whether it found the key. In the case of the slice, you know that there is a value at index 3 as long as the length of the slice is at least 4 (yes, we still count from zero) - otherwise, you're facing an error leading to a panic. In the case of the map, if the key isn't found, the returned value is simply the zero value of the value type - the empty string "" here, in the case of a map[int]string - and the boolean is set to false.

v, ok := mapping[3]
if ok {...

Or, in a more compact version that reduces the scope of the v variable to the inside of the if:

if v, ok := mapping[3]; ok {
	// do something with v
}

Now that we know how to access the elements in a map, it is time to count books.
Initialising the counter

The counts will be stored in a map, whose keys are the books and whose values are a uint, an unsigned integer. While it might seem strange to have several copies of the same book (apparently not!), having fewer than zero copies is utterly impossible.

The make built-in function creates a map (or a slice, or a channel) by allocating memory for it. If we knew in advance the total number of distinct books, say 451, we could be explicit about the size of the map we need. This would slightly optimise the execution.

count := make(map[Book]uint, 451)

The following two lines have identical behaviour; you can choose to be explicit or concise:

count := make(map[Book]uint)
count := make(map[Book]uint, 0)

Now let's fill this counter.

First we need to iterate over our bookworms. We'll use the for ... range keyword that we talked about earlier. In this case, we'll make use of the value of the iteration, and we don't need the index. Let's give our iteration value a slightly more descriptive name, though.

for _, bookworm := range bookworms {
}

Inside this loop, we can iterate with exactly the same syntax over the books this person has read.

for _, book := range bookworm.Books {
}

The key of our map is a Book, a structure. Go maps can have a structure as their key. What matters here is that we need the structure to be hashable. As opposed to some other languages, Go doesn't require the type to provide a Hash function - instead, the compiler knows how to hash a structure in order to turn it into a valid key. Any type can be a map key as long as it is hashable. Types that aren't comparable aren't hashable either. A slice, for example, isn't hashable, and this means any structure that contains a slice can't be a map key - if you ever try to use a struct with a slice as a key, Go will tell you, with the message "invalid map key type".

Finally, we can increment the counter for this book.

count[book]++

But wait, we never set it to 0 in the first place. Shouldn't we set it to 1 if it is absent and only use ++ if it already exists? Well, no, we don't have to. This is the beauty of zero values in Go.

See, count[book] returns the value in the map at index book or, if there is none, the zero value of the value type. Here the value type is uint, so it means 0.

As in most C-like languages, a++ is just syntactic sugar for a = a + 1. If you replace a with count[book], we first retrieve the value from the map, or get a zero if it doesn't exist, then add one and write it back to the map in the same place.

This little counting logic is atomic enough that it would benefit from living in its own function. Let's call it booksCount and call it in findCommonBooks(). It should now look like this:

Listing 3.12 bookworms.go: booksCount() function

// booksCount registers all the books and their occurrences from the bookworms' shelves.
func booksCount(bookworms []Bookworm) map[Book]uint {
	count := make(map[Book]uint) #A

	for _, bookworm := range bookworms { #B
		for _, book := range bookworm.Books {
			count[book]++ #C
		}
	}

	return count
}

Can we test it? It should be pretty straightforward.

Testing it

Writing the test for this small function isn't particularly difficult.

First, we can write a helper to compare the equality of two maps of book counts by verifying that every key in the want map is present in what we got. Let's range over want and check the keys in got.

Listing 3.13 bookworms_internal_test.go: Helper to compare book counts

// equalBooksCount is a helper to test the equality of two maps of book counts.
func equalBooksCount(t *testing.T, got, want map[Book]uint) bool {
	t.Helper()

	if len(got) != len(want) { #A
		return false
	}

	for book, targetCount := range want { #B

		count, ok := got[book] #C
		if !ok || targetCount != count { #D
			return false #E
		}
	}

	return true #F
}

Note that in this version, nil and empty maps are considered equal.

With the helper written, let's move on to the longest part: thinking of all the test cases. The first test case we can think of is the nominal use case, and the second one is having no bookworms at all. We could also have a bookworm without any books - perhaps not much of a bookworm, or an unhappy person. Once again, write the test on your own and compare it to our solution.
Listing 3.14 bookworms_internal_test.go: Testing booksCount

func TestBooksCount(t *testing.T) {

	tt := map[string]struct {
		input []Bookworm
		want  map[Book]uint
	}{
		"nominal use case": {
			input: []Bookworm{
				{Name: "Fadi", Books: []Book{handmaidsTale, theBellJar}},
				{Name: "Peggy", Books: []Book{oryxAndCrake, handmaidsTale, janeEyre}}, #A
			},
			want: map[Book]uint{handmaidsTale: 2, theBellJar: 1, oryxAndCrake: 1, janeEyre: 1},
		},
		"no bookworms": {
			input: []Bookworm{},
			want:  map[Book]uint{}, #B
		},
		"bookworm without books":            {...},
		"bookworm with twice the same book": {...},
	}

	for name, tc := range tt {

		t.Run(name, func(t *testing.T) {
			got := booksCount(tc.input)
			if !equalBooksCount(t, tc.want, got) { #C
				t.Fatalf("got a different list of books: %v, expected %v", got, tc.want)
			}
		})
	}
}

Launch the tests. Is everything passing? Good, this means we can use the booksCount function.

Now we can add the call in the findCommonBooks function. For now, you should have something like this:

Listing 3.15 bookworms.go: findCommonBooks() with the booksCount() call

// findCommonBooks returns books that are on more than one bookworm's shelf.
func findCommonBooks(bookworms []Bookworm) []Book {
	booksOnShelves := booksCount(bookworms) #A

	return nil
}

Note that it shouldn't compile, because we aren't using the booksOnShelves variable for the moment. But it is time to use it!

3.2.2 Keeping the books that appear more than once

Now that we have counted the number of copies of each book on every bookshelf, the next step is to loop over all of them and keep those that have more than 1 copy. Let's declare a slice that will hold all the books that were found several times across the collections of all the bookworms.

var commonBooks []Book

return commonBooks

We could use the make built-in function again. How?

What is a slice, and what is an array?

We keep using the word slice for a list of values of the same type. Many languages would simply call this an array, so what's the deal - is it just a fancy new word? You already know how to range over a slice, but a bit of theory is required at this point.

The type [n]T is an array of n values of type T. For example, var a [5]string declares a variable a as an array of five strings. An array's length is part of its type, so arrays cannot be resized. Very restrictive. In real life, we practically never use arrays directly.

A slice, on the contrary, is a dynamically-sized, flexible view into the elements of an array, as described by the official Go website. The type []T is a slice of elements of type T, built on top of an array. As you can see, we don't specify its size.

Slices have 3 fields any developer needs to know about: the underlying array, stored as a pointer, the length of the slice, and the capacity. The length is the number of elements present in the slice, while the capacity is the number of elements that can be stored before a resize is needed. You can access them through the len and cap functions, and set them when you initialise a slice with make. Note that keeping the length as a field makes accessing this information an O(1) operation.

Finally, the most useful thing you can do with a slice is add an item to it by using the append built-in function. Let's look at some examples.

var books []Book
books = append(books, Book{...})

At first, the capacity and the length are both 0, the slice is nil, and the underlying array isn't initialised. After we append, the number of items in the slice is one, so len(books) is 1 - easy. The slice isn't nil any more. More complex: the new array that we point to has a capacity of 1 element.

Note that append returns a slice. We cover what happens internally in Appendix E, but, for now, the important message is that, when appending to a slice, it is always safe to overwrite the extended slice with append's output.

Another example of a slice initialisation, where we create a slice with a length of 5, a capacity of 5, and an underlying array of 5 zero-value books:

books := make([]Book, 5)
books[1].Author = "bell hooks"

All five books are created and addressable, which means we can directly access them and write into them.

Also, if we append a Book to this slice, it will appear in the 6th position, after the 5 zero-value books already there.

Finally, if we know the final size of the slice but want to use append, we can specify both the initial length and the required capacity.

books := make([]Book, 0, 5)

Now that we know everything about creating slices, let's look at the code we've written so far. Earlier, in section 3.1.3, we decoded the JSON message describing the books into a slice variable declared with the var syntax above. We didn't make a call to make, so we didn't cause any allocation. The point is that we passed the address of the slice to the Decode function, which could then fill it with values. We explore the mysteries of passing a slice by address, or as a copy, in Appendix E.
Nhiều hơn một bản đồ
Để điền vào lát, chúng ta cần phải lặp trên bản đồ
booksOnShelves trả lại bởi
booksCount() và kiểm tra giá trị của cuộc phản đối với mỗi cuốn sách. Sách với
một truy cập lớn hơn so với 1 đã được đọc ít nhất hai con mọt sách - trong trường hợp
của chúng tôi
đều miễn vận chuyển đa và Peggy.
cho cuốn sách, đếm := phạm vi booksOnShelves {
nếu đếm > 1 {
commonBooks = thêm(commonBooks, cuốn sách)
}
}

Here is the full code of the findCommonBooks function:

Listing 3.16 bookworms.go: findCommonBooks() implementation

// findCommonBooks returns books that are on more than one bookworm's shelf.
func findCommonBooks(bookworms []Bookworm) []Book {
    booksOnShelves := booksCount(bookworms) #A

    var commonBooks []Book

    for book, count := range booksOnShelves { #B
        if count > 1 { #C
            commonBooks = append(commonBooks, book)
        }
    }

    return commonBooks
}

Test it

This test should be fairly easy. We have a number of bookworms in and a number of books out. Some test cases are easy, and you can write them without our help:

Everyone has read the same books
People have completely different lists
More than 2 bookworms have a book in common
One bookworm has no books (oh, the sadness!)
Nobody has any books (oh, the pain!)

Here is our version of the test.

Listing 3.17 bookworms_internal_test.go: Testing findCommonBooks()

func TestFindCommonBooks(t *testing.T) {
    tt := map[string]struct {
        input []Bookworm
        want  []Book
    }{
        "no common book": {
            input: []Bookworm{
                {Name: "Fadi", Books: []Book{handmaidsTale, theBellJar}},
                {Name: "Peggy", Books: []Book{oryxAndCrake, janeEyre}},
            },
            want: nil,
        },
        "one common book": {...},
        "three bookworms have the same book on their shelves": {...},
    }

    for name, tc := range tt {
        t.Run(name, func(t *testing.T) {
            got := findCommonBooks(tc.input)
            if !equalBooks(t, tc.want, got) { #A
                t.Fatalf("got a different list of books: %v, expected %v", got, tc.want)
            }
        })
    }
}

Run the tests. Now run them again a few times. Do you see something strange?

3.2.3 Determinism

If you run your code a few times, you will see that the order of the output keeps changing. The output may not always be in the same order as the one we described as the expected result in the want []Book slice.

When we iterate over a map, there is no guarantee from the Go language that the keys and values will be returned in any particular order. Depending on the situation, it can be better to be deterministic, and always return the same result in the same order. It simplifies testing, for one thing. In this case, sorting the slice of books will make our upcoming lives easier.

The sort package has a Slice function especially designed for this situation: it takes a slice and a function. The function must return whether the item at the first of the 2 indexes should appear before the item at the second index. We can use an anonymous function defined right there; calling it is easier for the developer than naming it somewhere else. For now, we'll sort the books by author first, and then by title. If at some point later you want to sort by title first, and then author, this is the place. As the sorting logic is not part of the algorithm that finds the common books, we want to put it in a different function that wraps the call to sort.Slice.
Listing 3.18 bookworms.go: Sorting books

// sortBooks sorts the books by Author and then Title.
func sortBooks(books []Book) []Book {
    sort.Slice(books, func(i, j int) bool { #A
        if books[i].Author != books[j].Author {
            return books[i].Author < books[j].Author #B
        }
        return books[i].Title < books[j].Title #B
    })

    return books
}

Note that the original slice is modified: the sort.Slice function does not create a sorted copy of the array. That can be a good thing, depending on the situation. This function's signature could have been sortBooks([]Book) with no return value. We cover these details in Appendix E.

As we mentioned runes in the previous chapter, here is a caveat: using < to compare strings looks at the byte encoding of the titles. Greek titles, for example, would therefore always appear after those written in Latin characters.
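
For instance, here is a tiny comparison of our own (not part of the chapter's code) showing that byte-wise ordering puts Greek after Latin:

package main

import "fmt"

func main() {
    // < compares strings byte by byte, in UTF-8 encoding order.
    fmt.Println("Zebra" < "Αλφα") // true: Latin 'Z' (0x5A) sorts before the first byte of any Greek letter
    fmt.Println("Ζωή" < "Alpha")  // false: the Greek title ends up after the Latin one, whatever a librarian would prefer
}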

To use our sorting function, we just need to wrap the return value of findCommonBooks:

return sortBooks(commonBooks)

Testing now becomes way easier. You might need to fix the order of the expected results, but now you can automate your test and rely on it. Our code is now tested, and we can go back to main.go to print out the results.

3.3 Printing the results

Back in the main function, we have loaded the data, and nothing more. Let's call findCommonBooks. Now we have a slice of books. How can we display it properly? fmt.Println works, but we need to iterate over the collection of books.

Let's write a function that prints out a list of books. You can do it in five lines.

Listing 3.19 bookworms.go: displayBooks

// displayBooks prints out the titles and authors of a list of books
func displayBooks(books []Book) {
    for _, book := range books {
        fmt.Println("-", book.Title, "by", book.Author)
    }
}

If you want to test this one, you'd have to either write an Example test or provide an io.Writer. We will see how to provide a writer in the next chapter.

Listing 3.20 main.go: The main function

func main() {
    bookworms, err := loadBookworms("testdata/bookworms.json")
    if err != nil {
        _, _ = fmt.Fprintf(os.Stderr, "could not load bookworms: %s\n", err)
        os.Exit(1)
    }

    commonBooks := findCommonBooks(bookworms)

    fmt.Println("Here are the books in common:")
    displayBooks(commonBooks)
}

Before automating a test, let's run the program manually.

go run .

Exercise 3.1: Use the flag from chapter 2 to pass the file path as a parameter to your program.

You already know how to write an Example test.

Listing 3.21 main_internal_test.go: Testing main

package main

func Example_main() {
    main()
    // Output:
    // Here are the books in common:
    // - The Handmaid's Tale by Margaret Atwood
}

Play around with other book lists!

3.4 Refinements

3.4.1 Exercise: reading suggestions

Now that you have gathered all your data, you can do some cheap data analysis. Knowing which books you and your friends have both read is a conversation starter, but we can go deeper and write a program that suggests a reading list from what other people read, and hopefully liked. Think of those sections on online bookstores: "other readers bought", even though buying, reading and liking a book are three very different actions.

In this section, we'll go with a very simple approach: consider that our bookworms only keep on their shelves the books they appreciated. We don't know what happens to the other books - but we hope they are traded for even more books, or given away to charity. We can assume, from now on, that books on a shelf are loved and cherished, and for this reason we'll take the shortcut of considering that, if Fadi has kept the same books on her shelf as Peggy did, she might be interested in what other books Peggy has kept on hers.

For a given target reader, we go through all the other readers and compute a like-mindedness score, or similarity. Then, if the similarity is more than 0, we can add that score to every book that wasn't read by the target reader but was read by the similar person. Code is sometimes clearer than words.

Listing 3.22 recommendation.go: A possible implementation
type Recommendation struct {
    Book  Book
    Score float64
}

func recommend(allReaders []Bookworm, target Bookworm, n int) []Recommendation {
    read := newSet(target.Books...) #A

    recommendations := map[Book]float64{}

    for _, reader := range allReaders {
        if reader.Name == target.Name {
            continue
        }

        var similarity float64 #B

        for _, book := range reader.Books {
            if read.Contains(book) {
                similarity++
            }
            // you could also later extend this to like and dislike scores
        }
        if similarity == 0 {
            continue
        }

        score := math.Log(similarity) + 1 #C

        for _, book := range reader.Books {
            if !read.Contains(book) {
                recommendations[book] += score
            }
        }
    }

    // TODO: sort by score
    // TODO: return only a number of suggestions (n)
}
We're using math.Log, the natural logarithm function, so that the score doesn't overpower all the other suggestions when one person has too many points of similarity.
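
To get a feel for how the logarithm dampens the score, here is a small snippet of our own (not part of the chapter's code) printing a few values of that formula:

package main

import (
    "fmt"
    "math"
)

func main() {
    // score = ln(similarity) + 1 grows slowly, so one very similar reader
    // cannot drown out every other suggestion.
    for _, similarity := range []float64{1, 2, 5, 10, 50} {
        fmt.Printf("similarity %2.0f -> score %.2f\n", similarity, math.Log(similarity)+1)
    }
    // similarity  1 -> score 1.00
    // similarity 10 -> score 3.30
}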

We need a type for the books read by our target. It must be able to quickly tell us whether it contains a book, and make sure each book is only present once.

type set map[Book]struct{}

func (s set) Contains(b Book) bool {
    _, ok := s[b]
    return ok
}

Storing them in a map, as we saw, is the best (understand: fastest) way to tell whether a list (of keys) contains a given value. We're using the empty struct rather than a boolean, because a boolean takes up a bit of memory, and the empty struct takes zero. It means the map won't take much more memory than a slice of the same size.

Take the time to write the rest of the code, test it, and play with it.

3.4.2 Implementing sort.Interface

To sort the books at the output of the main function, we used sort.Slice, which takes a function as the sorting strategy. There is a second option that you might prefer.

The sort package provides the sort.Interface interface, which can be implemented to sort slices or custom collections. It comes in very handy when implementing custom sorting, in our case by author and then title. The sort.Interface interface exposes 3 methods where elements are referred to by an integer index. Note that you need to fully implement the interface, meaning all three methods, even if you don't use all of them.

// Len is the number of elements in the collection.
Len() int

// Less reports whether the element with index i
// must sort before the element with index j.
Less(i, j int) bool

// Swap swaps the elements with indexes i and j.
Swap(i, j int)

As this only applies to collections, we add an intermediate custom type representing a collection of Books and implement the methods on it. The type is named after the way it sorts.
Listing 3.23 Implementing sort.Interface on books

// byAuthor is a list of Books. Defining a custom type to implement sort.Interface.
type byAuthor []Book

// Len implements sort.Interface by returning the length of the collection.
func (b byAuthor) Len() int { return len(b) } #A

// Swap implements sort.Interface and swaps two books.
func (b byAuthor) Swap(i, j int) {
    b[i], b[j] = b[j], b[i] #B
}

// Less implements sort.Interface and returns books sorted by Author and then Title.
func (b byAuthor) Less(i, j int) bool {
    if b[i].Author != b[j].Author { #C
        return b[i].Author < b[j].Author
    }
    return b[i].Title < b[j].Title #D
}

The new sortBooks function can now call sort.Sort directly with our implementation. It is important to notice that sort.Sort does not return anything. Instead, it updates the slice's contents with the same elements, but in a sorted order.

// sortBooks sorts the books by Author and then Title in alphabetical order.
func sortBooks(books []Book) []Book {
    sort.Sort(byAuthor(books))
    return books
}

You can find the full code in the repository.


3.4.3 Using bufio to open a file

The first step we achieved in this chapter was to read the contents of a file. Let's have a closer look at how we did it.

First, we opened the file, which returned a file descriptor, which implements the io.Reader interface.

f, err := os.Open(filePath)
if err != nil {...}
defer f.Close()

Then we provided this reader to the json.NewDecoder function and, from there, the magic happens in the Decode method of the json package.

var bookworms []Bookworm

err = json.NewDecoder(f).Decode(&bookworms)
if err != nil {...}

But do we really know how the actual reading happens?

Accessing files, either for reading or writing, makes system calls. System calls sit between our program and the operating system. System calls are expensive, and we usually want to reduce their number, which, fortunately, can be controlled when reading and writing a file.

First, we need to understand the problem: how many system calls do we go through when reading a file of size 10 MiB with our current implementation? The answer is that we don't know - the buffer size of the file descriptor returned by os.Open is hidden in OS-specific definitions, and we didn't set its value. Even worse, the os.File type is OS-specific. So how can we improve this?

The answer lies in the bufio package. This package provides a NewReaderSize function that has the following description: NewReaderSize returns a new Reader whose buffer has at least the specified size. If the argument io.Reader is already a Reader with large enough size, it returns the underlying Reader. This means that, if we call NewReaderSize with any io.Reader and give it a size of 1 MiB, we are guaranteed that the reads will happen by making system calls with blocks of 1 MiB. This way, we can fix our program to have it behave exactly as we want. Note that finding the best size for a reader's buffer is not an easy job - giving it 1 GiB would obviously make system calls nice and infrequent, but it would also consume 1 GiB of memory that would - probably - not be entirely used.

Let's use this feature and decide that our buffer should be "an average size" for a file, something in the megabyte range. Here is an example:

Listing 3.24 Reading a file with a buffered reader

f, err := os.Open(...)
if err != nil { ... }
defer f.Close()

buffedReader := bufio.NewReaderSize(f, 1024*1024) #A

// bufio.Reader does not implement Closer
decoder := json.NewDecoder(buffedReader)
err = decoder.Decode(...)

Finally, the bufio package also provides an implementation of io.Writer. It is very useful when writing big files, as it drastically reduces the number of system calls. Instead of writing each line as it comes, we can work in batches.

But there is a very important point to keep in mind here: the bufio Write method will only write the data out when its internal buffer is full. Most of the time, the last call to Write won't exactly fill the rest of its buffer, and this last chunk could be lost! But don't panic, there is a way to flush the remaining contents from the buffer to its destination, and it consists of a simple call to writer.Flush().

f, err := os.Create(...)
if err != nil { ... }
defer f.Close()

buffedWriter := bufio.NewWriterSize(f, 1024*1024) #A

for _, data := range contents {
    _, err = buffedWriter.Write(data)
    if err != nil { ... }
}

err = buffedWriter.Flush() #B
if err != nil { ... }

3.5 Summary

The JSON format is commonly used to represent structured data. It can hold slices, maps, and objects.
The encoding/json package exposes two ways of decoding a JSON message: the json.Decoder type and the json.Unmarshal function. Most of the time, json.Decoder should be used, as it can read from an io.Reader rather than from an already complete slice of bytes. The encoding/json package also has similar functions to encode a JSON message: json.Encoder and json.Marshal.
Fields of Go structs must be exported if something has to be decoded into them, so that the decoder can access them.
Test data should live in a directory named testdata, which is automatically ignored by the go tool when compiling code.
All loops use the for keyword, which can be followed by a variety of different constructs. The most common are for i := min; i < limit; i++ { and for condition() {.
To loop over the values stored in a slice or an array, use the for index, value := range mySlice { syntax. The index starts at 0 and its last value will be len(values)-1. If you don't actually need the index, then you can iterate over the values with for _, value := range mySlice {. From time to time, you will see the following syntax: for index := range mySlice {. In this case, only the index is retrieved. This can sometimes be useful, as it doesn't cause each value of the slice to be copied.
To loop over all the key-value pairs stored in a map, we again use for key, value := range myMap {. If you are only interested in the values, you can, just like we did with slices, use for _, value := range myMap {. If, on the other hand, you are only interested in the keys of the map, then it is even shorter: for key := range myMap {.
Slices are an abstraction over arrays. When in doubt, use slices.
Sort slices with the sort package, either by giving a sorting function as a parameter to sort.Slice, or by implementing the package's Interface, which consists of a Len function returning the length of the slice, a Swap function that permutes two items, and a Less function that returns the comparison of two items.
A map's keys can be strings, ints, or even structs. When the key is a struct, it cannot contain a slice, a map, a function, or any other non-comparable field. Indeed, those types can't be compared to other values.
If you want to drop repeated values from a slice - for example, you want the list of bands that appear in your collection of CDs - you can use a map[X]struct{}. Iterate over the list and write into the map the elements you want to keep. In the music example, we would write myMap[cd.Band] = struct{}{}. At the end of the loop, the keys of the map will be the unique values you had in your original list.
When you want to know whether an element appears in a slice, and you
know you’ll do the operation so many times that you can’t afford to
iterate through the slice every time, you can use map[X]bool. Start by
adding keys to this map by iterating over your list once, and set their
value to true . After that, you can check whether an element is in the
map by running found := myMap[element] . If it was initially in the
slice, then the value returned is true - what we set it to - and if the
element wasn't in the slice, the value returned is the zero-value of a
boolean - false .
Remember to call Close() when you Open or Create a file. The defer
keyword is your friend here.
Use defer to keep together in the code lines that make sense together
but need to be executed at different moments.
Using a bufio.Reader or a bufio.Writer will reduce the amount of
system calls your program executes. It will also make it simpler to count
such system calls.
When using a bufio.Writer , always keep in mind that the buffer needs
to be Flush ed before the data can be considered fully written.
4 A log story: create a logging
library
This chapter covers

Understanding the need for a logger


Implementing a 3-level logger
Using an integer-based new type to create an enum
Publishing a library with a stable exported API
Implementing external and internal testing
Understanding package-level exposition

The night is dark. Your colleague Susan and you have been working on
trying to fix this bug for 2 hours straight. You don't understand what's
happening with that count variable that should have the value 1, but the result
of the program seems to indicate that the value there is 2 instead. It's late.
You try to read the code, but the problem isn't obvious. Is count really 2?
You decide to add a small line in the code, and relaunch everything, to get a
better insight as to what's going on. The line you add will help you, at least,
understand what the variable’s value is. You use:

fmt.Printf("counting entries, current value is %d\n", count)

We’ve all been there. Having the code say something we can understand at
specific steps is our easiest way of following the program as it executes.

Then Susan notes - wouldn’t it have been nice to have this information from
the initial run, without needing to redeploy an updated code? But then, would
you also want to deploy this unconditional fmt.Printf (hint: the answer is an
absolute no)? Were there other options that could have made your life
simpler?

Debugging isn’t the only time when we want to know what’s happening in
the entrails of our program. It is also valuable to inform the user that
"Everything is going extremely well". Or that something bad has happened,
but the system recovered. Any trace of what’s happening might be useful -
but that’s also a lot of messages, some aren’t as important as others.

Keeping track of the current state or events via readable messages is called
logging. Every piece of tracked information is a log, and to log is the
associated action.

What is a logger?

In computer science, a logger is in charge of noting down log messages (an


activity called logging). Historically, a log relates to a ship's logbook, a
document in which were written records of the speed and progress of the
ship. The logbook's name derives, itself, from a chip log, a piece of wood
attached to a string that was tossed into the water to measure the speed of the
vessel. The string had knots, at regular intervals, and measuring speed
consisted in counting the number of knots that were unrolled during a given
amount of time.

Every application needs a logger, whose task is to write messages at specific


moments in its execution so that they could be read and analysed later, if
need be. Sometimes, we want these messages to be written to a file.
Sometimes, to the standard output. Sometimes, to a printer, or streamed
through the network to an aggregating tool, such as a database.

However, not all messages carry the same amount of information.


"Everything is going extremely well" is very different from "I just picked up a
fault in the AE-35 unit". We might want to emphasise critical messages, or
discard those of lesser matter. Acknowledging that there are different degrees
of importance was already performed by scribes in Ancient Egypt, when
they'd highlight specific sections of the text by writing them in red (this is
where the word rubric originates).

This chapter will cover a specific need: write a piece of code that other
projects can use. It is extremely common, as a programmer, to use existing
code that we didn’t write. Think of it - it would be painfully tiresome to write
over and over again simple functions, such as cosine or ToUpper, when
they’ve already been written, thoroughly tested, and documented. Instead of
copy-pasting code from other people, developers came up with the notion of
“libraries” : code that one uses, but didn’t write. In Go, libraries come in the
shape of packages that we import. Go libraries are, of course, written in Go,
and are made of (always) exported and (almost always) unexported types and
functions. The exported part of the library (both functions and types) is called
its application programming interface, which is always shortened down to
API.

Now, let’s write a library that anyone can use, and that you can reuse in any
of your future projects. Susan will take care of the user code, and she will be
interacting with our code via its API. First, we want to define the API -
exported functions and types - and agree with any already identified user that
it covers their needs. Then we will be able to publish the API, even before
implementing the logic. Indeed, the sooner your library is out in the world,
the sooner you can get feedback and improve it. Finally, we’ll write the
logging functions and test them.

Requirements

A library that enables the user to log information of any type


A library that makes available functions with signatures resembling that
of fmt.Printf
The user can set the threshold of importance for logging messages from
their code
The user can choose where logs are written

4.1 Define the API


Defining the way a caller interacts with a library is essential in making it
stable and easy to use.

In order to make our users happy, the exported types and functions should be:

easy to grasp - people don’t want to spend hours trying to figure out how
to use it. For this, making it small and simple is usually the better
option: there should only be a single function to achieve each specific
functionality
stable - if you make evolutions, fix bugs or add functionalities, users
should be able to take the latest version without changing their own
code.

What we will export is an object that provides different methods to address


criticality. For this, we will first define the different importance levels of
logging. Then, our library will provide an object and a function to create it.

These tools will be implemented in a package.

Package summary

Go applications are organised in packages, i.e. collections of source files


located in the same directory, each declaring the same package name. Go’s
convention states that the name of the package is the name of the directory.
As an example, we've used the "fmt" package in the previous chapter - it is
located in Go's sources, in a directory named fmt.

In Go, packages are the way we isolate the scope of functions and types. As
previously mentioned, when a symbol’s name starts with a capital, it is
visible outside the package. Exported symbols are available to users, the rest
remains inside the package. If you know Java, the package has roughly the
same level of importance as a Java class when it comes to what you can do
with it, but it is close to a Java package in that it gathers together related
types. What is public or exported should change as little as possible from
version to version. This is especially important for packages intended to be
consumed by other people. We want to preserve backward compatibility and
avoid breaking our users’ code. If you want to improve performance or fix
bugs, what is private or unexported can change.

Rules of a Go package

· A package is a collection of files located in the same folder that all share
the same package name. Each Go file starts with the package declaration.

· It is customary to name the package after the name of the directory.

· Avoid overly long names if possible. Prefer a lowercase single word. If


you compress the word, avoid abbreviations that make the package name
more ambiguous.

· Any symbol (functions, types, variables, constants) starting with a capital


letter will be exported outside the package and can be used by other
packages, while those starting with a lowercase will not.

· camelCase and PascalCase are the conventions for functions, variables,


constants and types. Package names stay lowercase.

Before you can start coding, don't forget to run go mod init learngo-pockets/logger for your module (see Appendix A.4 if you forgot how).

Go modules

A module is a collection of Go packages stored in a file tree with a go.mod


file at its root. The go.mod file defines the module’s module path, which is
also the import path used for the root directory, and its dependency
requirements, which are the other modules needed for a successful build.
Each dependency requirement is written as a module path and a specific
version.

Create a folder named pocketlog. The name of the package reflects its
purpose. In there, create a file named logger.go. The name of the file should
be explicit about its contents.

We add the Logger struct in there.

Listing 4.1 Logger empty struct

// Logger is used to log information.


type Logger struct {
}

Before we can start adding fields or methods to it, we need to think about the
logging levels we want to support.

4.1.1 Exporting the supported levels


It is mandatory, when using a logger, to assign an importance level to a
message. This is the task of the user, who has to think about the criticalness
of the information that is about to be recorded. Loggers around the world
have a wide variety of levels, which always follow the same pattern: they
start with those of the lowest importance (usually "Trace" or "Debug"), and
then they go up to (usually) "Error" or "Fatal". The number of different log
levels varies from project to project, but these three are quite common:

Debug: used by developers to help monitor any information - which way


did a message go, how long did it take to process a request, what was the
url of the request, etc. They’re usually used to print the contents of a
single variable. In a production environment, we don’t print Debug
messages;
Info: used to track meaningful information, for instance “Payment of
amount X from account Y received”;
Error: used when something goes wrong, before we try to recover. Error
logs are useful to help the maintainers investigate the source of
problems, with messages such as “Database not responding”,
“Processing request would cause a division by 0”.

We can declare these levels as an enumeration: they are a finite and defined
list of possible values.

A matter of file size

Don’t be afraid to keep your files small. When a type is starting to support
more and more methods, think about splitting them into multiple files: the
scope for declaring methods on a type or accessing its unexported fields is the
package in which this type is declared. You can split by usage, and business
logic or keep exported methods together for example. Make sure reading
your file does not get overwhelming. Incidentally, it reduces conflicts in your
version control.

Create a file named level.go. Again, the first line of this file will be the
package we’re developing: package pocketlog . Considering the targeted
size of this new file, we could very well keep everything in the logger.go
file, but we find it easier to open a file named level when looking for levels.
In this file, we declare a named type Level, of the underlying type byte. We
could use int32 as an underlying type as all we want is a number, but this
would take 4 times more memory for no good reason. Other packages can use
the type Level , as it is exported.

type Level byte

Wait! Experience and code reviews will tell you something is wrong. Any
exported symbol requires a line of documentation - a commented sentence
that starts with the name of the type, function or constant that you are
documenting. Let’s fix this.

// Level represents an available logging level.


type Level byte

Once we have a type for our levels, we can export them. Logging levels are
constants that we declare as an enumeration - a finite list of entities of the
same kind. This list of Levels belongs in the level.go file.

const (
// LevelDebug represents the lowest level of log, mostly used for debugging purposes.
LevelDebug Level = iota
// LevelInfo represents a logging level that contains information deemed valuable.
LevelInfo
// LevelError represents the highest logging level, only to be used to trace errors.
LevelError
)

Enumerations

The syntax here is to use = iota to let the compiler know that we are starting
an enumeration. iota allows us to create a sequence of numbers incremented
on each line. We don't need to assign explicit values to these constants, the
compiler does it automatically for us thanks to the iota syntax. iota can be
used on any type that is based on an integer. The behaviour of iota is to
increase on every line, which means we need to sort our levels by order of
importance.

We now have 3 log levels; each one will have its own purpose. If we decide
to add a level later, we will only need to add a line and not worry about
renumbering everything. Feel free to add more such as Warn (between Info
and Error) or Fatal (guess where).
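
For instance, slotting in a hypothetical LevelWarn (our own addition, not part of this chapter's API) only takes one extra line, and iota renumbers the constants for us:

const (
    LevelDebug Level = iota // 0
    LevelInfo               // 1
    LevelWarn               // 2 - hypothetical extra level between Info and Error
    LevelError              // 3 - shifted automatically, no manual renumbering
)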

4.1.2 Object-oriented Go ("GoOOP"?)

Object-oriented programming is a paradigm based on entities (the “objects”)


that usually contain data (the “fields”, or “attributes”). OOP (Object-oriented
programming) is a common paradigm amongst back-end languages - Java,
C++, Python are amongst its most popular examples. But how about Go? The
official documentation reads “Although Go has types and methods and allows
an object-oriented style of programming, there is no type hierarchy”. Go is
indeed not natively an object-oriented language, but it has all of the necessary
functionalities. Most of the principles that apply to object-oriented languages
can apply to Go. Go has no inheritance, but don’t worry, it has other features
that let you achieve similar goals, such as composition (more on that in a later
chapter).

Let’s head back to thelogger.go file. What we have created here for our
logger is the definition of a structure. We want it to log lines of text at
different levels. The user will then be able to pass this object as a dependency
to any function that needs to log something.

Let’s take a look at two approaches that would fulfil the expectation of
exposing methods on a variable l of type Logger, each with a signature
similar to that of fmt.Printf :

one method with a level parameter: l.Log(pocketlog.Info,


"message") - in this case, the caller passes the level of log as the first
parameter;
as many methods as there are levels: l.Info("message") - in this case,
the caller decides which function to call.

We picked the second option as we consider it clearer and simpler than the
former, which requires a lot of dots and text before we reach the interesting
part of the line. Remember that code needs to be easy to read.

To declare a method that can be called on an object, we use receiver methods:


these functions are attached to an instance of a struct. A receiver method is a
function that operates on the structure specified in parentheses before the
function’s name. Receiver methods can accept a copy or a reference - a
pointer - to the structure. For the difference between these, check Appendix
E. Since these methods operate on a Logger structure, the most intuitive place
for them is in the logger.go file.

Listing 4.2 logger.go: Receiver methods

// Debugf formats and prints a message if the log level is debug or higher.
func (l *Logger) Debugf(format string, args ...any) {
// implement me
}

// Infof formats and prints a message if the log level is info or higher.
func (l *Logger) Infof(format string, args ...any) {
// implement me
}

As required, we’ve used the same signature as the


Printf method of the fmt
package, a signature that developers are already accustomed to and that they
know and understand. This is why we end the function name with the letter f.

Variadic functions

Sometimes, you want to pass a variable number of parameters to your


function - none, exactly one, or more. The best approach in this case is
offered by Go’s variadic function syntax. The last argument of a function can
be of the type ...{some type} . When calling such a function, the user can
provide any number of parameters - from 0 to too many. Inside the function,
we can access the parameters as we would access elements from a slice, using
the [2] notation. The most used variadic functions are the fmt.Printf ones,
but we will see more examples before the end of this chapter!
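
As a quick, self-contained illustration of the syntax (the sum function below is our own example, not part of the logger):

package main

import "fmt"

// sum accepts any number of ints; inside the function, values behaves like a slice.
func sum(values ...int) int {
    total := 0
    for _, v := range values {
        total += v
    }
    return total
}

func main() {
    fmt.Println(sum())           // 0 - calling it with no argument is fine
    fmt.Println(sum(1, 2, 3))    // 6
    numbers := []int{4, 5, 6}
    fmt.Println(sum(numbers...)) // 15 - an existing slice can be expanded with ...
}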

Exercise 4.1: write the signature of the Errorf method on your logger. It


should be a lot of copy pasting, but make sure you understand what you are
writing.

Your Logger does nothing, but it can already be called from Susan’s code.
Before you publish it to her, who is jumping on her chair, impatient to use it
in her service, there is one thing we want to add.
4.1.3 The New() function

At the moment, creating a new logger can be done in these two completely
equivalent lines of code:

var log pocketlog.Logger
log := pocketlog.Logger{}

The former is explicitly defining a zero-value logger, the latter leaves room
for initialisation, if we later want to add exported fields and give them a
specific value. Picking one over the other is a question of how you think the
code might need to change.

But as your logger evolves, there will be mandatory parameters, such as the
threshold where it should start caring about messages. To gently convince
users to stay up to date with evolutions, Go does not provide any constructor
mechanism, but we can write a New() function that builds a new instance.
People can still use the above syntax (you must make sure it is safe, as it will
set every field of the structure to its zero value) but they should preferably
not. We don’t need to specify Logger in the name of that function, because
users will be calling the name of thepocketlog package first, making it clear
that we are creating a new pocket logger. This way we avoid the stuttering
pocketlog.Logger where log appears twice.

We add the threshold of the logger to the struct and define the New()
function.

Listing 4.3 logger.go: Define a new object

// Logger is used to log information.


type Logger struct {
threshold Level
}

// New returns you a logger, ready to log at the required threshold.


func New(threshold Level) *Logger {
return &Logger{
threshold: threshold, #A
}
}
You’ve probably noticed that the threshold field of the Logger is not
exported. This is a decision we must make, whenever we declare a new field
in a structure. When in doubt, don’t export: it is a lot safer than exposing
everything. In this case, the user needs to define a logging threshold, and this
is done via the New function. The internal structure of the logger is none of the
business of our library’s callers.
We also made a second decision with this New function - we returned a
pointer to a Logger , rather than aLogger itself. It is generally more useful
that New should return a pointer. Think of it - the new built-in function in Go
also returns a pointer, the bottom-line being that returning a pointer makes it
easier for the caller to share the resource - the Logger - in their program.

Our logger still does nothing, but it can be used on Susan’s development
branch and she won’t need to change anything while you make it work. You
can commit.

Go’s zero-values

Every type in Go has a zero-value. This includes basic data types, struct
types, functions, channels, interfaces, pointers, slices, and maps. Basically,
any type for which you can declare a variable has a zero-value. The zero-
value of a type is the value held by a non-initialised variable of that type. You
can refer to Appendix C for details.

Exercise 4.2: What is the logging level of a logger defined with the var log
pocketlog.Logger syntax?

4.1.4 And what about testing?

We just committed, but we have no test. This is subpar! Early is always the
best moment to write a unit test.

How can we test this? We have a very clear definition of how the Logger
should behave from the point of view of the user, but we don’t know much
yet about how it will work internally. This is the perfect situation for closed-
box testing, where we test a system from the outside. “Outside”, here, means
“from another package”. We could test it from the same package, but we’d be
able to access fields or functions that an external user won’t be able to access.

Here, we will start by creating a logger_test.go file: contrary to the


previous chapter’s open-box tests, this one is not an internal test. As we want
to test from the outside, the file will have to be part of another package, one
different from the rest of the code, but still in the same directory, for
consistency.

+ pocketlog/
|  + -- level.go #A
|  + -- logger.go #A
|  + -- logger_test.go #B
+ -- go.mod
+ -- main.go

Go will complain if we write two packages in the same directory, but there is
an exception to this rule that allows for tests to be written close to the source
code: we can have a foo_test package alongside a foo package. This is what
we’ll use here:

package pocketlog_test

To access pocketlog functions, we need to import it:

import "learngo-prockets/logger/pocketlog"

From this pocketlog_test package, we only have access to what the package
pocketlog exports - hidden functions, variables, constants, types, and fields
of exported types aren’t accessible. As the logger is currently writing to the
standard output, we can start with an ExampleXxx function to test it. We are
testing the Debugf method of the Logger struct, so the signature of the testing
function is ExampleLogger_Debugf . We can optionally add details about the
expected output or the test scenario after yet another underscore, i.e.
ExampleLogger_Debugf_runes or ExampleLogger_Debugf_quotes .

Listing 4.4 logger_test.go: Test the standard output

func ExampleLogger_Debugf() {
debugLogger := pocketlog.New(pocketlog.LevelDebug)
debugLogger.Debugf("Hello, %s", "world")
// Output: Hello, world
}

Run the test. It should be returning an error, because our Logger still does
nothing. Fixing this error will be our next task. Then we can add test cases,
because this one is not covering enough of the use cases.

4.1.5 Documenting code

An important part of exposing a library is to document it so that other people


understand how to use it. Comments on exported functions, methods, structs
or interfaces are extremely important - some IDEs will automatically show
them as we hover over them. Tests are the second place where other users
might look for advice as to how to use your library - sometimes, it’s even
comments in tests that resolve the biggest mysteries.

We’ve already seen previously that a comment on an exported type or


function should be a line starting with the type or function name as the first
word of the line. It is also good practice to end the sentence with a period.

// New returns you a logger, ready to log at the required threshold.


func New(level Level) *Logger {

doc.go : a special file

There is an unofficial convention to write a special file, in each Go package,


that will describe the purpose of this package. Almost like a README, but
intended for developers only. This file is named doc.go , and is called a
package header. You’ll find one in most packages you use, if you venture in
there.

The doc.go file contains no Go code, only one uncommented line: the
package declaration. And before that line, a verbose description of what the package is
about. This is where we can tell how to properly use the package, in which
order to call functions, and what we shouldn’t forget to defer , if need be.

The comments prior to the package name should be a multiline comment,


where the first line starts with Package pocketlog , in our example. The
capital matters, for code linters. Write this file, with every piece of
information you deem important for the callers of our library. Here is our
doc.go file:

Listing 4.5 doc.go: Documenting a package

/*
Package pocketlog exposes an API to log your work.

First, instantiate a logger with pocketlog.New, and giving it a threshold level.


Messages of lesser criticality won't be logged.

Sharing the logger is the responsibility of the caller.

The logger can be called to log messages on three levels:


- Debug: mostly used to debug code, follow step-by-step processes
- Info: valuable messages providing insights to the milestones of a process
- Error: error messages to understand what went wrong
*/
package pocketlog

The go doc command

One of the tools Go is shipped with is the go doc command. We've already
mentioned it earlier to inspect the contents of standard packages. This
command will give you the documentation of a package or symbol that the go
command can find in the subdirectories. There is a minor limitation: go doc
won't go looking on the internet - it's a local tool. This means that, in order
to use it, you need to be working inside a project (with a go.mod file) for
which the dependencies will have been downloaded - something that is
achieved silently by some IDEs, but that can always be done manually with a
go mod download command. In our case, we retrieve the documentation of
the pocketlog package, and of the New function in the pocketlog package by
running the following commands:

> go doc path/to/repo/pocketlog


> go doc path/to/repo/pocketlog.Logger
> go doc path/to/repo/pocketlog.New
> go doc path/to/repo/pocketlog.Logger.Debugf

In any case, documentation should always be part of what you deliver. It can
take the form of comments, examples or of package headers. See more about
it here: https://go.dev/doc/comment.

Now that we’ve explained how to use our library, it’s high time to make it
usable!

4.2 Implement the exported methods


In the previous section, we’ve published the API to Susan, and we’ve written
some failing tests. The next step, to comply with the expectations, is to
decide where the logger is going to do its deed and finally log. Should it
always write to the standard output, and never the error one? How will it send
a message to a ticket printer? Write in a file? Send via the network to a
different aggregating mechanism? Should we implement this functionality for
every possible use case?

As we want this package to be usable in any situation, we will leave the


implementation of having a bespoke writer to the user’s discretion. But Susan
wants a default implementation that just spits out on the console (the standard
output), so that she can focus on her business logic before improving her
logging and monitoring (we may disagree on the priorities here, but the
Products team insisted). So, stdout it is. We will improve it shortly after.

4.2.1 Default implementation

The first implementation is really the easy part. Think about how you would
write the Debugf() method before spoiling your pleasure with the following
solution. Remember that Debugf() should only log if the threshold level is
Debug or lower.

Listing 4.6 logger.go: Debugf’s default implementation for the console

// Debugf formats and prints a message if the log level is debug or higher.
func (l *Logger) Debugf(format string, args ...any) {
if l.threshold > LevelDebug { #A
return
}
_, _ = fmt.Printf(format+"\n", args...) #B
}

When calling the Debugf function, the user expects the message to be printed
if the level of the logger allows for it. This means the first thing to do in this
function is to make sure that we should be logging a message. The enum we
declared for the levels allows us to compare two levels together, since the
underlying type of the Level type is an integer.

This method could be just 3 lines if we chose to log inside the if and invert
the condition, but always prefer to align the happy path unindented. Deal
with errors and early exits inside your if blocks and keep real business logic
as left as possible. This helps a lot when reading the code and makes
extending it way easier.

Once we’re sure we need to handle this message, let’s log it. For now, we’ll
use the fmt.Printf function. This whole library might look like a verbose
wrapping of this Printf call, but rest assured, there’s more to it than meets
the eye.

Exercise 4.3: Implement the Infof and Errorf methods for the Info and Error levels.


Also, implement all the other levels that you chose to add.

Now your previous test should be green. Let’s discreetly postpone the testing
of the other methods: we want to write TestXxx methods, which give more
flexibility, so we need to write to non-standard outputs.

4.2.2 Interfacing

Writing bytes in various places is an extremely common use case in all


computer programs. Writing json to an HTTP output, ones and zeros to a
network router, bits into a digital port to turn a light on, encoded pixels to a
printer or letters on a console, everything we do has to output its result
somewhere in order to be useful at all.

Go has a set of standard interfaces for the most common uses, so that
everyone who produces code that writes can match the same format and
leverage intercompatibility.
io.Writer

Among the most commonly cited interfaces in the standard library, the io
package holds the two famous io.Writer to write to any destination and
io.Reader to read from any source (e.g. an array of bytes, a file, a json
stream).

Here are the declarations of these interfaces :

Listing 4.7 io interfaces

type Reader interface {


Read(p []byte) (n int, err error)
}

type Writer interface {


Write(p []byte) (n int, err error)
}

We want the user of our logger to define the destination. We can ask for an
io.Writer and simply write into it. They will be responsible for providing an
implementation of their choice.

Implicit interfaces

One major difference between Go and the most-known Object languages


(such as Java and C++) is that, in Go, interfaces are implicit. In order to
implement an interface, just add the methods that your interface defines to an
object and the compiler will recognise it.

We can already add the output to the structure, and the standard
Writer to
our New() builder.

Listing 4.8 logger.go: Add output field to the struct

// Logger is used to log information.


type Logger struct {
threshold Level
output io.Writer
}
// New returns you a logger, ready to log at the required threshold.
// The default output is Stdout.
func New(threshold Level, output io.Writer) *Logger {
return &Logger{
threshold: threshold,
output: output,
}
}

Note the slightly enhanced documentation. Alternatively, it could default to


os.Stderr, which represents the default error output.

Last but not least, each of the methods will need to use this new field. And
let’s make sure that users who don’t follow our recommendation of usingNew
to create aLogger don’t get nil pointer exceptions!

Listing 4.9 logger.go: Use the output field

// Debugf formats and prints a message if the log level is debug or higher
func (l *Logger) Debugf(format string, args ...any) {
// making sure we can safely write to the output
if l.output == nil {
l.output = os.Stdout
}
if l.threshold <= LevelDebug {
_, _ = fmt.Fprintf(l.output, format, args...)
}
}

In Go, the underscore symbol represents the Void. In other words, assigning a
value to it will just discard the result.

What is the point? Go likes to be explicit. Here we explicitly say to the next
developer, including future-us: I know there are values returned by this
function, but I do not need them.

Here, the function returns anint , the number of written characters, and
sometimes an error. There is nothing we want to do about that error at the
moment, so we explicitly ignore it.

4.2.3 Refactoring
You might have noticed when implementing the Info and Error methods,
that we’re calling the same function fmt.Fprintf as our writing function.
You might also have noticed that, as opposed to its sibling fmt.Fprintf,
fmt.Fprintln appends an end-of-line character at the end of the string. In a
way, the printf function allows you to do very fine craftsmanship, while the
println function won’t let you format exactly everything as you’d like it: no
left-padding before numbers or strings, no hexadecimal representation of
numbers, etc.

A new line is the guarantee that your log messages will be easily
distinguishable in a console. As we want to export the Printf toolbox, we
must add an explicit \n when we write the messages.

In our case, we need to add that new line three times, once in each of the
functions. And whenever we want to make a change to the log message - for
instance, add the logging level - we need to write the same lines three times.
Should Warn or Fatal also be implemented, the count goes even higher. This
(loudly) calls for a minor refactoring - we don’t want to maintain the same
code more than twice. Let’s group all of them together and alter a single line
every time we want to adapt the logger.

Create a logf() method on the Logger. For now, it will have the same
arguments as Debugf, Infof and Errorf. These three will call it: logf is now the
one method responsible for formatting and printing. The other three, the
exported ones, are responsible for their log level and nothing more. There is
zero good reason to export this one.

Listing 4.10 logger.go: Refactoring the logging methods

// Debugf formats and prints a message if the log level is debug or higher.
func (l *Logger) Debugf(format string, args ...any) {
if l.threshold > LevelDebug {
return
}

l.logf(format, args...)
}

// logf prints the message to the output.


// Add decorations here, if any. #A
func (l *Logger) logf(format string, args ...any) {
_, _ = fmt.Fprintf(l.output, format+"\n", args...)
}

Run the tests. They should fail by now, as we haven’t updated them with the
os.Stdout parameter for the New function. Once this is done, you can
commit, and inform your colleague that the code is ready to be used.
However, Susan tells you that her logs should be written to a specific file
rather than on the standard output, because she already makes use of the
standard output. Can the Logger achieve that by itself?

4.3 The functional options pattern


Default values are a subject of high debate among the development theorists
and theologists. Some people love them because it makes everything
discoverable and therefore easier to bootstrap. Some people hate them
because they don’t force you to know what you are doing. We believe default
values are a good thing if used sparingly.

When we start developing, say, a new web service, we want to focus on the
business logic. We want to make sure our logic works locally before making
your service production-ready. In order to decrease the cognitive load, we
start with the default version of the logger - architecturing a better logger can
come later. Before writing the deployment code, we don’t need anything but
the standard output.

But very quickly, we deploy to a cheap cloud provider so that we can pitch
our prototype and show it to the world. Reading the standard output is not so
trivial anymore. We pick a tool, like an aggregating database, that happens to
publish a Go driver. The developers of this driver were smart enough to have
a structure in their library that implements the io.Writer interface.

We still want to keep this default implementation, writing to the standard


output, but we want to provide the option of writing somewhere else. One
common way of doing this is by using the functional options pattern.

4.3.1 Create configurations


Create a new file, options.go , where our configuration functions will be
written. Define a type of functions that can be passed to the New() function
and that will be applied one after the other.

// Option defines a functional option to our logger.


type Option func(*Logger)

This function takes a pointer on our logger so that it can change it directly: in
our case, change the default output to whatever the user gave us.

Listing 4.11 options.go: Optional function to change the output

// WithOutput returns a configuration function that sets the output of logs.


func WithOutput(output io.Writer) Option {
return func(lgr *Logger) {
lgr.output = output
}
}

This type of function can be passed to the New() function as variadic


parameters: a list of zero or more arguments of the same type.

Listing 4.12 logger.go: Apply options

// New returns you a logger, ready to log at the required threshold.


// Give it a list of configuration functions to tune it at your will.
// The default output is Stdout.
func New(threshold Level, opts ...Option) *Logger {
lgr := &Logger{threshold: threshold, output: os.Stdout}

for _, configFunc := range opts {


configFunc(lgr)
}

return lgr
}

Next time you want to add an option to your logger (e.g. a date formatter),
just create a newOption and you’re set. There is an important point to notice
here: adding configuration functions is quite easy, and lets the user set
specific behaviours without altering the API of our library. Our New function
accepts as many configuration functions as the user needs, from the list we
implement in this package.
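
As a sketch of how little it takes, here is a hypothetical WithThreshold option of our own (not one of the chapter's listings) that would override the threshold passed to New:

// WithThreshold returns a configuration function that overrides the threshold.
// This is only an illustration of the pattern; in the chapter, the threshold
// stays a mandatory parameter of New.
func WithThreshold(threshold Level) Option {
    return func(lgr *Logger) {
        lgr.threshold = threshold
    }
}

A caller could then write pocketlog.New(pocketlog.LevelInfo, pocketlog.WithThreshold(pocketlog.LevelDebug)) without the signature of New ever changing.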

Usage example

Susan wants to know how to use your library. There is a documentation file,
but human interaction is always so much more efficient. You write a small
example and send it to her.

Outside of the library, init a new module and create a main.go file. Define a
func main() , as you did in the previous chapter. In this function, instantiate a
new logger and call a few methods to showcase your work.

Listing 4.13 main.go: Usage example.

package main

import (
"os"
"time"

"learngo-pockets/logger/pocketlog"
)

func main() {
lgr := pocketlog.New(pocketlog.LevelInfo, pocketlog.WithOutput(os.Stdout))

lgr.Infof("A little copying is better than a little dependency.")


lgr.Errorf("Errors are values. Documentation is for %s.", "users")
lgr.Debugf("Make the zero (%d) value useful.", 0)

lgr.Infof("Hallo, %d %v", 2022, time.Now())


}

4.3.2 How to test that thing

We are already using the logger, but it is not fully tested! How
unprofessional! Susan can use our library, but we don’t want her to come
back with possible bugs.

The magic of interfaces means we can write a test helper that implements
io.Writer , and give it to our Logger under test.
Test helper implementation

At the end of logger_test.go, write a new testWriter struct. Make it
implement io.Writer, but instead of writing to a destination, it will record the
output string. For example, you can keep a field in the struct where you
concatenate the output, and you can validate that against the expected result.

Listing 4.14 logger_test.go: test helper implementation

// testWriter is a struct that implements io.Writer.


// We use it to validate that we can write to a specific output.
type testWriter struct {
contents string
}

// Write implements the io.Writer interface.


func (tw *testWriter) Write(p []byte) (n int, err error) { #A
tw.contents = tw.contents + string(p) #B
return len(p), nil
}

An instance of this structure can be passed to the functional option earlier in our
test. At the end of the test, we can then check that the writer’s contents are what
we expect.

In practice, strings.Builder or bytes.Buffer can be used instead of the
testWriter. Now you know how to write a mock in case the interface you need is
not standard.
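As a quick illustration, here is a minimal sketch of a test that relies on bytes.Buffer instead of the testWriter above. It assumes an external test package (pocketlog_test) and that the logger writes each message followed by a newline, as in the expectations of the table-driven test below; adjust it to your actual implementation.

package pocketlog_test

import (
	"bytes"
	"testing"

	"learngo-pockets/logger/pocketlog"
)

func TestLogger_BufferOutput(t *testing.T) {
	// bytes.Buffer implements io.Writer, so it can be used as the logger's output.
	buf := &bytes.Buffer{}

	lgr := pocketlog.New(pocketlog.LevelDebug, pocketlog.WithOutput(buf))
	lgr.Infof("hello")

	if got := buf.String(); got != "hello\n" {
		t.Errorf("unexpected output: %q", got)
	}
}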

Update the test

Now that we are no longer forced to check the standard output, we can
write a TestXxx function, one that will test all of the logging methods
together, sequentially. We can have one test case per required logging level
and check that the outputs are different and that the Debugf() call is ignored
where appropriate.

Listing 4.15 logger_test.go: Test function

const ( #A
debugMessage = "Why write I still all one, ever the same,"
infoMessage = "And keep invention in a noted weed,"
errorMessage = "That every word doth almost tell my name,"
)

func TestLogger_DebugfInfofErrorf(t *testing.T) {


type testCase struct {
level pocketlog.Level
expected string
}

tt := map[string]testCase{
"debug": {
level: pocketlog.LevelDebug,
expected: debugMessage + "\n" + infoMessage + "\n" + errorMessage + "\n",
},
"info": {...}, #B
"error": {...,
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
tw := &testWriter{}

testedLogger := pocketlog.New(tc.level, pocketlog.WithOutput(tw))

testedLogger.Debugf(debugMessage)
testedLogger.Infof(infoMessage)
testedLogger.Errorf(errorMessage)

if tw.contents != tc.expected {
t.Errorf("invalid contents, expected %q, got %q", tc.expected, tw.contents) #C
}
})
}
}

Exercise 4.4: The test, as we have written it here, only tests calls to the
functions in one order: Debugf, then Infof, then Errorf. What if we decide
to add a buffer, and only write everything out in the Errorf()
method? We would not see it in this situation, and Debug and Info messages
might stay stuck.

Your logger is ready, fully functional, documented and tested. The rest of the
company starts using it. Yet you keep dreaming up new functionalities for it.
Let’s explore a few and see where they lead.

4.4 Further functionalities


This tool offers endless possibilities for optimisations. The only limit, as
always, will be the amount of time we’re ready to spend on it. For example,
this library is not thread-safe: when multiple goroutines use the same Writer
without any protection, the outcome can be unexpected - we will explore
solutions to that in later chapters. In the meantime, let’s have a look at a few
interesting enhancements we can add to our Logger .

4.4.1 Log the log level

Your service runs locally, with the lowest possible level of logs. You know
everything that happens just by looking at your console. But now you would
like to see the errors in red. You add an old awk command to your log tailing,
but how do you know what to colour?

How can we know while reading the logs which message has which level?
Well, let’s add that as an exercise.

Exercise 4.5: Add the log level to your output. Hint: change the contents of
the format variable before printing, as we did when we added the current
time.
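As a hedged sketch only (the exact logf implementation appears earlier in the chapter, and the Level type may or may not implement fmt.Stringer in your version), the change could look roughly like this:

// logf prints the message to the output, prefixed with its level.
// This sketch assumes an output field of type io.Writer and that Level
// implements fmt.Stringer, so that %s prints a readable name such as "info".
func (l *Logger) logf(lvl Level, format string, args ...any) {
	format = fmt.Sprintf("%s: %s\n", lvl, format)
	_, _ = fmt.Fprintf(l.output, format, args...)
}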

4.4.2 Exposing the generic logging function

We chose from the start to export as many functions as there are logging
levels, for it makes the user’s code easier to read.

Now imagine yourself in a situation where you only know what level to pick
at runtime. Imagine you are logging the email address of your app’s user, but
on one platform all the users are internal and the admins need to know who
did what, and on another platform email addresses are covered by data
protection laws and should not appear in logs, even in case of errors. You
choose to have this information in your app’s configuration and would like to
pass it directly to the logger. But you can’t change the logging level for the
whole application, as this might discard some of your unrelated and
important messages.

For this, we can add an exported Logf() function that takes a logging level as
its first parameter.

Listing 4.16 logger.go: Exported Logf function

// Logf formats and prints a message if the log level is high enough
func (l *Logger) Logf(lvl Level, format string, args ...any) {
if l.threshold > lvl {
return
}

l.logf(lvl, format, args...)


}

From there, why not refactor so that all the other exported methods simply
call this one? Of course, having both options will make your APIs more
cluttered and harder to understand and maintain. And the user can always
write her own function with a simple switch, using a variable defined in her
own domain.
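For illustration, a possible refactor could look like the following sketch. It assumes the level constants are named LevelDebug, LevelInfo and LevelError, in line with the usage example earlier in this chapter (LevelError is our assumption).

// Debugf formats and prints a message at the debug level.
func (l *Logger) Debugf(format string, args ...any) {
	l.Logf(LevelDebug, format, args...)
}

// Infof formats and prints a message at the info level.
func (l *Logger) Infof(format string, args ...any) {
	l.Logf(LevelInfo, format, args...)
}

// Errorf formats and prints a message at the error level.
func (l *Logger) Errorf(format string, args ...any) {
	l.Logf(LevelError, format, args...)
}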

Exercise 4.6: Add a test for this method.

4.5 Logging: good practices


Now we’ve completed this library that you’ll hopefully use and share, some
recommendations might be necessary. What kind of information do we want
to log? At what frequency do we want to log? There are two factors to take
into account here: this is a story of compromise.

Logs are the trace of the past execution of a program, and as soon as the need
for logs arises (usually “what was the value of count at this moment?”, “how
long did this request take to process?”), the need to keep these logs also
appears. Logs, when stored in a file, a database, a bucket in the
cloud, or anywhere persistent, bluntly cost money. The more logs you have,
the easier it will be to understand what went wrong and quickly fix it - but all
the more expensive it will be.
Every company will have its policy regarding what should - and shouldn’t -
be logged, and the level at which they should be. However, here are a few
recommendations we can share.

Write clear messages

Although it might be very tempting to log “Step 1”, “Step 2”, etc. inside a
function, these messages won’t help you the next day. Think about what
happened at step 1 - was the document inserted in the database? Was the
email sent? Help your future self with clarity in messages. When a function
has only one possible execution, the only value of a log message is the
comforting reassurance that we’ve finished this or that piece of logic. Some
valuable information here would be to know how long it took to process it, or
something similar.
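Here is a small, self-contained sketch of that idea; processOrder and the order id are placeholders of ours, not part of the book’s project.

package main

import (
	"time"

	"learngo-pockets/logger/pocketlog"
)

// processOrder stands in for some real business logic.
func processOrder(id string) error {
	time.Sleep(10 * time.Millisecond)
	return nil
}

func main() {
	lgr := pocketlog.New(pocketlog.LevelInfo)

	start := time.Now()
	if err := processOrder("42"); err != nil {
		lgr.Errorf("processing order 42: %v", err)
		return
	}

	// Say what happened and how long it took, rather than "step 1 done".
	lgr.Infof("processed order 42 in %s", time.Since(start))
}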

Avoid long messages

The amount of data written to the logs is directly related to the amount of
money that will be spent to keep these messages. If your variable is a map
with potentially thousands of keys, printing the map will be costly. Instead,
wouldn’t having its size, or whether a specific key is present, be as valuable?
If your data is a piece of an image or a song recording, the logger is not the
place to keep a copy of the bytes that are being processed. Instead, write a
function to save the image or the song.

Exercise 4.7: Ensure the logged message doesn’t exceed 1000 characters (or
1000 bytes, up to you), or, better, a value that can optionally be set. If the
message exceeds the limit and the limitation is activated, trim the end off to
make sure all logged messages have a reasonable size.

Log at milestones, give heed to loops and recursion

Functions can be complex and span over hundreds of lines. When this
happens, the most important question is to identify which sections really
deserve a log and which simply don’t. If we return early because we found no
item to process, should we say so? Maybe not, but we could always log the
number of items found instead.

When retrieving items from a database and processing each one in a loop, do
we want to know that we’re at item 5.567 out of 81.543? And 5.568? If the
normal flow doesn’t require this level of detail, maybe we can simply log a
message every 10.000 items, to get a rough idea of how much of this big list
of data has already been processed.
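A minimal sketch of this idea, with placeholder data standing in for the real items:

package main

import "learngo-pockets/logger/pocketlog"

func main() {
	lgr := pocketlog.New(pocketlog.LevelInfo)

	items := make([]int, 81_543) // placeholder for the data being processed

	for i := range items {
		// ... process items[i] here ...

		// One progress line every 10,000 items instead of one per item.
		if i > 0 && i%10_000 == 0 {
			lgr.Infof("processed %d/%d items", i, len(items))
		}
	}

	lgr.Infof("processed all %d items", len(items))
}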

Get to trust the code

Logging shouldn’t be a tool to debug the code. Most of the time, when you
scratch your head wondering what’s going on when the input of this or that
function has this or that value, it’s because the code is unclear. There are
three ways of addressing this:

Writing clearer code - splitting logical blocks into smaller functions;
Writing better documentation - comments, variable names, etc.;
Writing more tests - errors don’t happen on the happy path, they happen
when something is wrong. Make sure you cover as many edge-case
scenarios as possible.

Structured messages

These days, most logs are, in fact, not processed by humans. They are
mostly read by programs that use the logs to generate information displayed
in dashboards - for instance, representing the number of errors that happen
per minute over the course of time, or the time it took to process a request.

For this, we need to tell these computers how to parse the logs - which piece
of information is valuable, which is not to be taken into account, etc. And the
simplest solution, here, is to format the log messages into structured entities.
A common structured log message format is JSON (displayed here on several
lines for readability by humans):

{
"time": "2022-31-10 23:06:30.148845Z",
"level": "warning",
"message": "platform not scaled up for request"
}

Some existing logging libraries offer several additional features, such as


letting the user include their own set of keys and values to the logged
message.

Exercise 4.8: (The concepts here are explained in Chapter 5.) Update the log
function to print logs in the format above (you can ignore the "time" part for
now, as we need a bit more than what we’ve covered to validate it in the
tests). This will require making use of the encoding/json package and the
Marshal function it contains. This function will have to be called on a new
type, which we’ll define as a structure that contains a Level and a Message.
One of the most common traps, when using the Marshal (or Unmarshal)
function, is to forget that the json package needs to access the fields of the
structure. It is tempting, for newcomers, to keep these fields unexported, but
this makes them invisible precisely to the functions in charge of reading /
writing them. We’ll have more opportunities to cover this when we start
implementing services.
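As an illustration of that last point, here is a minimal, standalone sketch; the logEntry type and its field names are ours, not the book’s. The fields are exported so that encoding/json can access them, and the struct tags produce the lowercase keys of the example above.

package main

import (
	"encoding/json"
	"fmt"
)

// logEntry is an illustrative structure with exported fields and json tags.
type logEntry struct {
	Level   string `json:"level"`
	Message string `json:"message"`
}

func main() {
	entry := logEntry{Level: "warning", Message: "platform not scaled up for request"}

	b, err := json.Marshal(entry)
	if err != nil {
		fmt.Println("marshalling entry:", err)
		return
	}

	fmt.Println(string(b))
	// {"level":"warning","message":"platform not scaled up for request"}
}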

4.6 Summary
A library is a list of exported types and functions, the API, in a package,
which a client can use out of the box.
A library must only export what the user needs.
Use explicit names, and reproduce existing signatures, to help the
caller of your library.
Define domain types over primitive ones for readability and
maintainability of the code.
Enumerations using iota are perfect for a type that has a small and
finite number of possible values.
Creating a New() method enables you to force the necessary parameters
at object initialisation and guarantee the client of your library will use it
properly, as it forces a clean initialisation of your object.
Use the functional options pattern to set fields that have a default value.
Implement and test the library using closed-box testing.
Smaller files make your code easier to read.
Be mindful of what you want to log and what not.
5 Gordle: play a word game in your
terminal
This chapter covers

Building a game that runs in a terminal


Retrieving runes from the standard input
Getting a random number in a slice
Propagating errors
Reading the contents of a text file

CLAUDIO. One word, good friend. Lucio, a word with you.

LUCIO. A hundred, if they'll do you any good.

This chapter is about a love story. During the 2020 pandemic, Mr Wardle, a
passionate software developer, created a new game named Wordle for his
partner Ms Shah, a word-game addict. After introducing the game to his
relatives and seeing how well it was welcome, he decided to publish it. This
is how this famous game began its journey before going public and rising like
a rocket. There is now a daily release of a new word to find, mostly
referenced throughout the world as “today’s wordle”. Since then, there have
been lots of variations, based on geography, maths, terminology from
Shakespeare, Tolkien, or Taylor Swift, and even more adaptations in different
languages throughout the world (beyond time and space - a list offers ancient
Greek, Quenya, and Klingon).

The game is pretty basic: you must guess a word of 5 characters in 6
attempts. After each attempt, the game tells you, for every character, whether
it belongs to the solution, and whether it has the correct position.

The goal of this chapter is to create our own game named Gordle (did you get
the pun?). It will be a configurable version of Wordle - the official version
has 5 characters per word, but we can imagine passing longer or shorter
words, and changing the number of attempts before a player’s game is over.
Lucio will be our developer, while Claudio, the player, will execute a
command that will start the game. In our code, we will progress step by step,
starting with a simple function reading from the input and printing Claudio’s
attempt. Then, we will iterate and have it evolve to give feedback to the
player. Adding a corpus (a list of words) from where to pick a random
solution will make the game more replayable. Finally, we will have the
opportunity to support more languages and tweak the parameters as we want.

For the sake of simplicity, this version will only support writing systems
where one character never needs more than one code point in Unicode.
Supporting other writing systems is out of the scope of this chapter, but you
can find extending ideas in the extras at the end.

Requirements

Write a program that picks a random word in a list
Read the guesses of the player from the standard input
Give feedback on whether the characters are correctly placed or not
The player wins if they find the word, or loses after the maximum number of
unsuccessful attempts

5.1 Basic main version


The default approach to any coding exercise is always to simplify the
problem to its absolute simplest version. We’ll have time to improve it later.
First, let’s start with a basic version of the main function that will have a
hardcoded solution. We’ll allow only one guess, and the program will answer
ok or not ok.

As we did in previous chapters, we first initialise our module.

$ go mod init learngo-pockets/gordle

Then we create a new package named after the game gordle next to the
main.go file. This package is where we’ll implement the game. Our project
will have the following structure:
.
├── go.mod
├── gordle
│ └── files in package gordle
└── main.go

The package gordle will expose a structure Game to which we will
progressively add the needed methods to build a full game.

5.1.1 Mini main

Lucio begins simply, with an empty structure named Game in the gordle
package. We know that this program will need more than 50 lines of code, so
it’s a good idea to split responsibilities over several files. Anything that
relates to the game will be in the gordle package. We know, for instance,
that at some point we’ll have to keep the secret word somewhere. This leads
us to the creation of a structure that will contain game information.

Listing 5.1 game.go: Game structure

// Game holds all the information we need to play a game of gordle.


type Game struct{}

As we’ve seen in Chapter 4 with the logger, there are several ways to create
an object. Here, we expose aNew() method, which will be the recommended
entry point into the library, guaranteeing the creation of theGame object with
all its dependencies. Note that it is a good habit to ensure proper behaviour of
your library.

By convention, New() will return a pointer to Game, similarly to Go’s built-in
function new, which returns a pointer too.

Listing 5.2 game.go: New to create a Game structure

// New returns a Game, which can be used to Play!


func New() *Game {
g := &Game{}

return g
}
We deliberately did not write return &Game{} because we will add some code
before this return g line to configure our game.

Then, we attach a Play method to the Game type. Play will run the game. In
our first implementation, let’s simply print the instructions. Creating a
method on an object, in Go, is achieved by writing a pointer receiver on the
Game structure.

Listing 5.3 game.go: Play to run the game

// Play runs the game.


func (g *Game) Play() {
fmt.Println("Welcome to Gordle!")

fmt.Printf("Enter a guess:\n")
}

This is enough for a very first version. Let's call these new methods in the
main function. For this, we need to import the gordle package in the main.go
file.

import (
"learngo-pockets/gordle/gordle"
)

In the main function, we need two steps to start the game: create a new
Gordle game and launch it!

func main() {
g := gordle.New()
g.Play()
}

The aggregated main file looks like this:

Listing 5.4 main.go: main function and package

package main

import (
"learngo-pockets/gordle/gordle"
)

func main() {
g := gordle.New()
g.Play()
}

After these initial steps, we can run our program and verify that it behaves as
expected. Now is also a good time to commit these files to your favourite
version control system, before you add some contents into the Game structure.
We are now ready to wait for Claudio’s guess of a secret word!

5.1.2 Read player’s input

Since this is a game, we have a player, Claudio. Let’s ask him for a
suggestion. Claudio has access to the keyboard, and we’ll be reading his
attempts through the standard input. After reading it, we can check it against
the solution.

Game structure

There are several ways of reading from the standard input, depending mostly
on what we want to read. Some functions read a slice of bytes. Some read
strings. In this case, the player will type characters and then press the Return
key. We, therefore, want to read a line until we hit the first end of the line
character.

The bufio package has a useful method to achieve this on its Reader
structure: “ReadLine tries to return a single line, not including the end-of-line
bytes”, reads the documentation. It also states that it’s not the best reader in
the world for most reading use cases, but for one word from the standard
input, it is perfect. The good thing is that the bufio.Reader implements the
io.Reader interface! We don’t want to over-engineer our solution.

Our Game object will hold a pointer to a bufio.Reader. Why a pointer and not
a simple object? Simply because the bufio package exposes a NewReader
function that returns a pointer to a bufio.Reader. Also, since we’ll be calling
ReadLine a lot, it’s useful to immediately have a variable of the type of that
method’s receiver - a pointer.

Listing 5.5 game.go: Game with a reader

// Game holds all the information we need to play a game of gordle.


type Game struct {
reader *bufio.Reader
}

But wait, how do we initialise this reader? Should we do it as part of the
New() function? Although this is a valid option, we soon realise that
NewReader itself requires an io.Reader parameter - where should that
parameter come from? Since io.Reader is a very simple interface, we can
pass a variable implementing it to our New function, and create the
bufio.Reader inside New:

Listing 5.6 game.go: New with the reader as parameter

// New returns a Game variable, which can be used to Play!


func New(playerInput io.Reader) *Game {
g := &Game{
reader: bufio.NewReader(playerInput),
}

return g
}

Before we dive into reading a player’s input, we need to answer a question:


how does Go deal with characters?

If we want to play using another language, or even play in English with


words that come from another language, we need Unicode for a full support
of writable characters. Would it be fair to repudiate words such as canapé,
façade, dürüm or even the old-fashioned rememberèd?

Go natively uses Unicode. All the source files need to be encoded in UTF-8, and
it even has a specific primitive type called rune that serves to encode a
Unicode code point.

If we take for example the default line that appears on the Go playground and
look at the length of the string (including the comma and the whitespace):

fmt.Println(len("Hello, 世界"))

This prints out 13. Indeed, UTF-8 requires 3 bytes to encode each of these
non-latin characters. On the other hand,

fmt.Println(len([]rune("Hello, 世界")))

This outputs 9. We are measuring the number of runes and not the number of
bytes necessary to encode them. Keep that in mind whenever iterating over a
string’s elements: you can either access its byte representation with
[]byte(str), or access its rune representation with []rune(str), which is
the default behaviour when ranging over a string.

Listing 5.7 Print the string and rune lengths

package main

import "fmt"

func main() {
fmt.Println("Hello, 世界")
fmt.Println(len("Hello, 世界"))
fmt.Println(len([]rune("Hello, 世界")))
}

Console Output:

Hello, 世界
13
9

The ask method

We have a variable that allows us to read from the standard input once the
game is set. Let’s politely ask Claudio for his next word. Since the feature of
retrieving an attempt provided by the player through the reader is something
we can summarise in a sentence without having to explain how it works, it’s
a great candidate for a function! We’ll call it ask , for clarity and simplicity.
This method will accept a Game receiver, since it needs to read from its
reader, and will return a slice of runes - the word proposed by the player. It
will guarantee we have a valid suggestion.

Listing 5.8 game.go: ask method signature

// ask reads input until a valid suggestion is made (and returned).


func (g *Game) ask() []rune {
// ...
}

The experienced reader will have noticed that we use a pointer receiver here.
There are two reasons for this. The first is simple: we’ll be modifying the
state of our Game structure via many of its methods, so they will all require a
pointer receiver. It is good Go practice to avoid having both pointer and non-
pointer receiver methods on a type, for consistency. The second is a bit more
complex, and is motivated by the fact that the Game structure has a field that is
a pointer: the reader field. Appendix E covers the issues that can happen
when using copy-receivers with pointer fields.

Inside this ask method, we read the line using the reader. Should an error
occur, we’ll print it using Fprintf, but we decide to continue anyway and
wait for a new attempt. That is: a jammed line won’t cause the game to crash,
but will merely cause it to ask for another word. Fprintf will allow us to write
to the standard error.

We’ll see, when completing the Play method, how to better deal with errors.
The hard truth is that they shouldn’t be ignored, most of the time. However,
deciding that an error is not blocking is a good moment to leave a note for
future-self explaining this decision, in the form of a comment.

We can add an easy check on the length of the word. For the moment, we
play with the same parameters as the original Wordle, with 5-character long
words. We can define a constant at package level and use it everywhere we
need it.

A constant serves two purposes. The first one is to address a developer’s
laziness: by re-using a constant, we make sure that we don’t have to update
several lines of code should the value change. The second is to make the code
clearer by giving a name and a purpose to a value (just like we do with
variables): connectionTimeout is more explicit than 5*time.Minute. So
please don’t call your constant time5minutes.

Making it a constant is an unambiguous way of telling the reader of the code


that this value isn’t expected to change.

Listing 5.9 game.go: ask method with the reader

const solutionLength = 5

// ask reads input until a valid suggestion is made (and returned).


func (g *Game) ask() []rune {
fmt.Printf("Enter a %d-character guess:\n", solutionLength)

for {
playerInput, _, err := g.reader.ReadLine() #A
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "Gordle failed to read your guess: %s\n", err.Error())
continue #B
}

guess := []rune(string(playerInput))

// TODO Verify the suggestion has a valid length.


}
}

The ReadLine method will give us the user’s input as a slice of bytes. We
will then need to convert this byte slice into a rune slice. Converting each
byte into the rune representing that byte would be a very bad mistake.
Everything not ASCII would break. To properly convert a slice of bytes that
we know represents a string to a slice of runes, we need to first convert the
byte slice into a string and then into a rune slice.

The built-in function len() returns the length of a slice (or array). We can use
it to compare the length of Claudio’s word against the solutionLength
constant. We should be polite and return a message if it fails. However, in
order to make it clear that we aren’t returning “happy path” information,
we’ll use the standard error output - available via os.Stderr.

Listing 5.10 game.go: ask method with the reader


guess := []rune(string(playerInput)) #A

if len(guess) != solutionLength { #B
_, _ = fmt.Fprintf(os.Stderr, "Your attempt is invalid with Gordle's solution! Expected %d characters, got %d.\n", solutionLength, len(guess))
} else {
return guess
}

Let’s test the ask method

Did you think we’d forget?

The ask method uses its receiver's reader and returns a slice of runes. Let’s
declare these in the test case definition.

As we are using the standard library’s bufio.Reader, we can use a reader over
any stub mimicking the player’s input. A stub is a very simple way of
implementing a dependency over a third party (in our case, Claudio).

Think of a few original test cases that use your favourite alphabet, abjad,
syllabary, or even emoji from the Unicode list of supported characters.

Listing 5.11 game_internal_test.go

package gordle

import (
"errors"
"strings"
"testing"

"golang.org/x/exp/slices"
)

func TestGameAsk(t *testing.T) {


tt := map[string]struct {
input string
want []rune
}{
"5 characters in english": {
input: "HELLO",
want: []rune("HELLO"),
},
"5 characters in arabic": {
input: "مرحبا",
want: []rune("مرحبا"),
},
"5 characters in japanese": {
input: "こんにちは",
want: []rune("こんにちは"),
},
"3 characters in japanese": {
input: "こんに\nこんにちは",
want: []rune("こんにちは"),
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
g := New(strings.NewReader(tc.input))

got := g.ask()
if !slices.Equal(got, tc.want) {
t.Errorf("got = %v, want %v", string(got), string(tc.want))
}
})
}
}

You might have noticed that the first line of our test function is somewhat
different from those in the previous chapters. Indeed, we used to declare a
testCase structure, that would encapsulate all the fields we needed - and,
now, it’s gone! Or, rather, it’s been replaced with what is called an
anonymous structure. This implementation is very common in Go tests, and
we’ll be using it from now on. However, if you prefer the previous way of
declaring the testCase structure, that’s also perfectly valid. For comparison,
here are both:

// With a named testCase structure:
type testCase struct {
    input string
    want  []rune
}
testCases := map[string]testCase

// With an anonymous structure:
testCases := map[string]struct {
    input string
    want  []rune
}

Note that with the current implementation of ask, if we have an input of only
3 runes, the ReadLine method waits forever, after ignoring the invalid 3-
character-long line and waiting for more. What’s happening here is that
ReadLine, when hitting the end of the input, will return a specific error to let
the caller know that there is nothing to be read.
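That specific error is io.EOF. As a sketch only, the reading part of the ask loop from Listing 5.9 could detect it like this (it requires importing "errors" and "io"); what to do once the input is exhausted is a design decision we leave open here.

playerInput, _, err := g.reader.ReadLine()
if err != nil {
	if errors.Is(err, io.EOF) {
		// The input is exhausted: asking again would loop forever.
		// One option is to stop the game here instead of continuing.
	}
	_, _ = fmt.Fprintf(os.Stderr, "Gordle failed to read your guess: %s\n", err.Error())
	continue
}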

Why are we not using == to compare slices? We can for arrays, after all!
Remember that slices hold a pointer to their underlying array. Array values
are comparable if values of the array element type are comparable. Two array
values are equal if their corresponding elements are equal. But when it comes
to slices, maps and functions, == will simply not work. It’s not that it will
produce random results - no! Instead, Go will simply not let you compare two
slices. Not even a slice with itself. The only entity that we can compare with
a slice is the nil keyword.

This might sound a bit harsh, but it should be considered a safeguard rather
than a restriction. It is possible, in tests, to use the method
reflect.DeepEqual , but it was not designed for performance; you should
avoid it in production code. Instead, write the simple loop.
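For reference, the “simple loop” for two rune slices could look like this small helper (the name runesEqual is ours):

// runesEqual reports whether two rune slices hold the same runes in the same order.
func runesEqual(a, b []rune) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}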

Now, that does not compile. We added a dependency on an external library.


The developers of the Go language will typically write their new libraries in
golang.org/x/ in order to let the community test them out. Once they are
stable, they can be moved to the standard library.

Remember your module? To add the dependency, let the go tool look for the
latest version with the following command:

go get golang.org/x/exp/slices

You can see that a go.sum file has appeared. It records the checksums of your
dependencies and, like go.mod, it should typically be committed to your version
control. You can also notice that your go.mod has changed, and it now refers to
the new dependency.
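For reference, after running go get, the go.mod file looks roughly like the sketch below; the Go version and the pseudo-version of the dependency are placeholders here, since go get fills in the real ones for you.

module learngo-pockets/gordle

go 1.21 // placeholder: whatever version your toolchain wrote

require golang.org/x/exp v0.0.0-00010101000000-000000000000 // placeholder pseudo-version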

Is your code compiling and your test passing now? We can move on.

Play

We are now able to read Claudio’s guess. Let’s make great use of this ability,
and plug it in the Play() method which looks like this:
Listing 5.12 game.go: Play method

// Play runs the game.


func (g *Game) Play() {
fmt.Println("Welcome to Gordle!")

// ask for a valid word


guess := g.ask()

fmt.Printf("Your guess is: %s\n", string(guess))


}

There is one missing update in the main - can you spot it? Since the New()
method now takes a reader as a parameter, we need to pass os.Stdin to wait for
the player’s input.

Listing 5.13 main.go: main function updated with os.Stdin

package main

import (
"os"

"learngo-pockets/gordle/gordle"
)

func main() {
g := gordle.New(os.Stdin)
g.Play()
}

You can test it manually in your console. Here is an example of a game:

$ go run main.go
Welcome to Gordle!
Enter a 5-character guess:
four #A
Your attempt is invalid with Gordle's solution! Expected 5 characters, got 4. #B
apple #C
Your guess: apple #D

You may have noticed the ask() method is now responsible for reading
the input, standardising it, and validating the guess. It is best to separate these
concerns, so let’s refactor!
5.1.3 Isolate the check

There is no specific rule concerning the responsibilities of a method, but


when you start having multiple operations of different natures, it might be
best to have one function for one action. Small functions are also easier to
test and maintain, while making sure our code is robust!

Let’s move the word length validation to another method, adequately named
validateGuess . Notice that we did say method, and not function. This
validateGuess will have a receiver over the Game type. The reason for this
won’t be visible here, but in the next pages, we’ll want to get rid of that
solutionLength constant, in favour of a test against the real secret word’s
length, which will be part of the Game structure. This validateGuess method
is in charge of the validation, it takes the guess as a parameter and returns
whether the word is valid.

There are two common ways of informing of the success of a check - either
we can return a boolean value, or an error, which is in Go a value
representing the issue we found (or nil, if everything was fine). Returning a
boolean is simple, but it doesn’t allow for fine behaviour. What if we need to
specify that we faced an unrecoverable (at least for this validateGuess’s
concern) error? While there is no granularity with booleans, errors offer a lot
more variations that will allow the caller - in our case, the ask method - to
decide the behaviour if there is any error. We will also see how it makes the
code easier to test.

Error propagation

Unfortunately, most programs will face errors. A file could be missing. A


connection could be closed. A value could be an unexpected zero. Go’s take
on error handling is to use functions that return one or more values, the last
one (if any) being an error. The first line that follows the call to such a
function is the most common line of any Go source code: if err != nil { .
It’s so common we have a macro to paste it in our IDEs.

When we retrieve an error, the best thing we can do is to handle it as much as


we can, and, if there’s nothing this layer can do about it, then propagate it to
the upper layer, nicely wrapped. Wrapping it will provide context to the layer
that can finally decide to handle the error. The simplest way of wrapping an
error is to call fmt.Errorf("... %w …", …, err, …), where the %w stands
for “wrap the error”.

fmt.Errorf returns the wrapped error.

The following snippet of code holds the new method and its attached error. It
is declared outside of the validateGuess method to enlarge its scope and use
it in unit tests later, validating that we retrieve the proper error.

Listing 5.14 game.go: validateGuess method

// errInvalidWordLength is returned when the guess has the wrong number of characters.
var errInvalidWordLength = fmt.Errorf("invalid guess, word doesn't have the same number of characters
as the solution") #A
// validateGuess ensures the guess is valid enough.
func (g *Game) validateGuess(guess []rune) error {
if len(guess) != solutionLength {
return fmt.Errorf("expected %d, got %d, %w", solutionLength, len(guess),
} errInvalidWordLength)

return nil
}

We decided to keep the validating function simple. Every implementation of


Wordle will feature its own validator - some will ensure that the attempt has
the right length, others that the attempt is a word that exists in a dictionary, or
in a list provided by the developers… There are as many implementations of
this as one’s mind can think of.

Make sure to replace the validation with the call to validateGuess in the ask
method, like below:

Listing 5.15 game.go: Call to validateGuess in ask method

err = g.validateGuess(guess)
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "Your attempt is invalid with Gordle's solution: %s.\n", err.Error()) #A
} else {
return guess
}

Testing validateGuess()

Extracting the validation into a dedicated method is one way to test unitary
behaviour. As we did previously, we will use Table-Driven Tests to cover
several cases without too much repetition. Let’s test this new function!

First, we need in our test a new Game object to be able to call the
validateGuess method. Then, we build the structure holding all the
parameters needed for our execution and validation phase, in this case, the
attempted word and the expected error. Then it’ll be time to add test scenarios
to our table. Finally, in the execution phase, we callvalidateGuess with the
test case word and verify the error is as expected.

The errors package provides an important function, errors.Is(err,
target error) bool, which reports whether any error in err's chain matches
the target error. errors.Is is very handy when dealing with wrapped errors,
as it will unwrap all the errors in the chain to verify the presence of a
specific error. Wrapped errors are similar to matryoshkas, and errors.Is lets
you know if a layer is wearing a blue dress.

Now that you are familiar with Table-Driven Tests, you should be able to write
the first test scenario without checking the solution, which we provide
anyway:

Listing 5.16 game_internal_test.go: Test validateGuess method

package gordle

import (
"errors"
"testing"
)

func TestGameValidateGuess(t *testing.T) {


tt := map[string]struct { #A
word []rune
expected error
}{
"nominal": { #B
word: []rune("GUESS"),
expected: nil,
},
"too long": {
word: []rune("POCKET"),
expected: errInvalidWordLength,
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
g := New(nil) #C

err := g.validateGuess(tc.word) #D
if !errors.Is(err, tc.expected) { #E
t.Errorf("%c, expected %q, got %q", tc.word, tc.expected, err) #F
}
})
}
}

Exercise 5.1: Here we cover the happy path case with a five-character word
and one unhappy path when the word is too long. Add new test cases to cover
more invalid paths. What happens if the attempt has fewer characters, is
empty, or is nil ?

Input normalisation

There is a line left in this ask method that could be reused later.

guess := []rune(string(suggestion))

We accept all kinds of upper and lowercase mixes and it will later be simple
to take care of the not-yet-supported writing systems if we put this into a
small function.

Listing 5.17 game.go: Split characters

// splitToUppercaseCharacters is a naive implementation to turn a string into a list of characters.


func splitToUppercaseCharacters(input string) []rune {
return []rune(strings.ToUpper(input))
}
Replace the long line above with a call to your new function. You can also
check that the input was correctly normalised by writing a test over the
function.
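Such a test could be as small as the following sketch, placed in game_internal_test.go; it reuses the testing and slices imports we already added for Listing 5.11.

func TestSplitToUppercaseCharacters(t *testing.T) {
	got := splitToUppercaseCharacters("hello")
	want := []rune("HELLO")

	if !slices.Equal(got, want) {
		t.Errorf("got %q, want %q", string(got), string(want))
	}
}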

5.1.4 Check for victory

We have built the foundations of your game. The next step is to verify if
Claudio’s attempt is the solution. If it isn’t, he gets to try again. We will also
limit the number of attempts to make the game more challenging. Indeed,
there are two ways to end a game of Gordle: either the word was found, or
the maximum number of attempts was reached.

In order to give Claudio more attempts at finding the solution, we need to


enrich the Game structure, as it holds all the information required to play a
game. We keep the solution in the same type as the attempt, a slice of runes,
in order to make the comparison and the manipulation easy. We can either
have the gordle package select that solution, or, for now, have it as a
parameter of theNew function. To make sure the player can still try new
words, we need to store the maximum number of attempts in a variable
somewhere. We could have it a constant of the package, but having it
embedded within the Game structure will avoid creating unnecessary
constants, and will eventually keep the door open if we want to allow for
more attempts.

Listing 5.18 game.go: Add solution and max attempts

// Game holds all the information we need to play a game of Gordle.


type Game struct {
reader *bufio.Reader
solution []rune #A
maxAttempts int #B
}

Let’s update the New function by passing both the solution and the maximum
number of attempts as parameters, for the time being.

// New returns a Game variable, which can be used to Play!


func New(playerInput io.Reader, solution string, maxAttempts int) *Game {
g := &Game{
reader: bufio.NewReader(playerInput),
solution: splitToUppercaseCharacters(solution),
maxAttempts: maxAttempts,
}

return g
}

We take the solution as a string, which is easier to use, and reuse the function
we just wrote before. We are also normalising the solution given to our
package by setting all letters to uppercase, something which again only
makes sense in a limited number of alphabets.

In the Play method, we can add a loop to let Claudio suggest a second word,
a third word, and so on. The criterion to end the loop will be that Gordle has
received a number of attempts equal to the maximum allowed. That loop
starts by asking for a word, ensures its validity, and then checks if the attempt
is equal to the solution.

Here is the new version of the Play() method:

Listing 5.19 game.go: Add check on victory

// Play runs the game.


func (g *Game) Play() {
fmt.Println("Welcome to Gordle!")

for currentAttempt := 1; currentAttempt <= g.maxAttempts; currentAttempt++ {#A


guess := g.ask() #B

if slices.Equal(guess, g.solution) {
fmt.Printf(" You won! You found it in %d guess(es)! The word was: %s.\n", currentAttempt, string(g.solution))
return
}
}

fmt.Printf(" You've lost! The solution was: %s. \n", string(g.solution)) #C


}

Using emojis

To insert emojis, use Ctrl-Cmd-Space on Mac, Win-period on Windows.


Typing Ctrl-Shift-U on Linux will let you enter unicode typing mode; write
the hexadecimal code value and press enter to see it appear. 1F984 is the code
of a unicorn. You can find the list of all available codes here:
https://unicode.org/emoji/charts/full-emoji-list.html .

With the solution now embedded in a Game object, we can remove the
constant solutionLength everywhere and replace it with the length of the
solution - len(g.solution) .

Listing 5.20 game.go: Example of replacement

// Before the replacement


fmt.Printf("Enter a %d-character guess:\n", solutionLength)
// After the replacement
fmt.Printf("Enter a %d-character guess:\n", len(g.solution))

Are our tests still passing?

It’s been a long time… are our tests still passing?

Well, for now, they don’t even compile, because we changed the signature of
New. As you can see, it forces the user to provide the mandatory fields:

g := New(strings.NewReader(tc.input), string(tc.want), 0)

The ask method does not use the max number of attempts, so we can give the
zero value as parameter and tell the next maintainer that it is useless in this
context. It makes our call a bit wacky, but this weird zero will be fixed in part
4 when we make it optional.

This should do it! We can continue to update the rest of the code, starting
with the main. Indeed, as we mentioned earlier, we need a solution word to
play. For now, we’ll hardcode this in the main like the following snippet of
code

Listing 5.21 main.go: Main with hardcoded solution and updated New()

package main

import (
"os"
"learngo-pockets/gordle/gordle"
)

const maxAttempts = 6

func main() {
solution := "hello"

g := gordle.New(os.Stdin, solution, maxAttempts)

g.Play()
}

Let’s have a round of Gordle!

Here is an example of the game when the player finds the solution. This
illustrates the game when Lucio plays his game - an easy win on the first
attempt! Remembering what he wrote and hardcoded inmain a few minutes
earlier did help here…

$ go run main.go
Welcome to Gordle!
Enter a 5-character guess:
hello
You won! You found it in 1 attempt(s)! The word was: HELLO.

However, if Lucio lets his friend play the game, it’s a lot more difficult to
win! With no hints to guide Claudio towards the solution, this game is almost
impossible to win (unless one plays it twice, but changing the solution every
time the game is played is work for later).

$ go run main.go
Welcome to Gordle!
Enter a 5-character guess:
sauna
Enter a 5-character guess:
pocket
Your attempt is invalid with Gordle's solution: expected 5, got 6, invalid guess, word doesn't have the
same number of characters as the solution.
[...]
Enter a 5-character guess:
phone
You've lost! The solution was: HELLO.
The game would be quite a bore if it didn’t give the player some information
about how close they are to the solution, in the form of hints as to which
characters are properly located, and which are misplaced. It’s time to give
Claudio some feedback!

5.2 Providing feedback


Claudio just submitted a word. And our task is now to let him know which
characters of that word are in the correct position, which are in the wrong
position, and which simply don’t appear in the solution. This will help him
find the secret word that Gordle initially chose.

A good feedback should return a clear hint for every character of the input
word, explicit about the correctness of the character in this or that position.
The initial Wordle uses background colour, behind each character of the
player’s input. While this was great for most of us, an application that
provides feedback to the user should take into account user accessibility. A
common impairment is colour vision deficiency, where making a difference
between green and orange isn’t as obvious as it would seem. An option was
added to Wordle that would allow players to use colours with high contrast
instead of the default ones. Let’s see what we can do here!

5.2.1 Define character status

We’ve determined that a feedback will be a list of indications that can have
three values - correct, misplaced, and absent. In order to easily manipulate the
feedback for a character, we create the type hint to represent these hints, of
the type byte - the smallest type Go offers, regarding memory usage. The
iota keyword allows us to automatically number them from 0 to 2. Using
underlying numbers will make it easier for us when it comes to finding the
best hint we can provide the player. Define this hint type in a new file,
hint.go, in the package gordle.

Listing 5.22 hint.go: Hint character type and enum

// hint describes the validity of a character in a word.


type hint byte
const (
absentCharacter hint = iota #A
wrongPosition #B
correctPosition #C
)

Ordering values in an enum

In our example, we have 3 values that we want to list in an enum. There are
3! (“factorial 3”, equal to 3 * 2 * 1) overall possible permutations of 3
elements, which is 6 ways of ordering them. In Go, the best practice is always
to make best use of the zero-value, and to sort the elements of the enum in a
logical way - in our case, from worst to best. We could have had an
unknownStatus as the zero-value of our enum, but as we’ll see later, using
the zero-value for absentCharacter will come in handy.

These hints will be printed on the screen to help Claudio make the best guess
he can on his next attempt. We need to find a representation of these hints
that is both simple and explicit. Since this is the 21st century, what better
than emojis to convey a message that we can all understand and agree upon?
We want to attach one emoji to each hint, and the Go way of implementing
this is through a switch statement.

Let’s now think about how this method that will provide a string
representation of ahint is to be called. Both literally and practically: how do
we want to name it, and how do we want to make calls to it.

The Stringer interface

One of the important interfaces to keep in mind while writing Go code is the
Stringer interface defined in the fmt package. Its definition is simple:
String() string . This means any type that exposes a parameterless method
named String that returns astring implements this interface. So far, so
good - but there is a key aspect that still has to be mentioned here. If we have
a look at the fmt.Printf functions, we can read that “Types that implement
Stringer are printed the same as strings”. This means, in order to print a
variable of a type that implements Stringer, we only need to use %s, %q, or %v
in a Printf call, and this will, itself, call the String() method.

Implementing the Stringer interface will save a lot of time - reusing a well-
known convention is better than trying to be smart, and it won’t require an
extra layer of knowledge from future developers who will later work on this
code.

Listing 5.23 hint.go: String() method

// String implements the Stringer interface.


func (h hint) String() string {
switch h {
case absentCharacter:
return " ⬜" // grey square
case wrongPosition:
return " " // yellow circle
case correctPosition:
return " " // green heart
default:
// This should never happen.
return " " // red broken heart
}
}

Note that if your terminal does not display emojis properly, you can replace
them with numbers or regular characters, such as "." for absent, "x" for
misplaced, and "O" for correctly placed characters. It is less fun, but, at least,
more readable than squares.

Providing a hint for a single character is good, but we’ll need to do so for
every character of the word. We’ll represent the feedback of a word as a
structure. It will hold the hint for each attempted character compared to its
position in the solution. We name this new type feedback. We could place
the definition of a feedback in a feedback.go file, but since it’ll be very
tightly linked to a hint, and since these two types won’t have more than one
method over them, we can place them in the same file.

Listing 5.24 hint.go: feedback type

// feedback is a list of hints, one per character of the word.


type feedback []hint
One can wonder what the benefit of defining a named type for a slice of hints really
is. This is an interesting question, and its answer is simple: we can define
methods over that type. In particular, here, we’ll want to print the feedback
so Claudio can next make an informed guess. And, as we’ve seen a few lines
earlier, the best way to provide a nice string from a structure is to have its
type implement the Stringer interface. All we have to do is write a small
function that will print the feedback of each character.

Benefits of a strings builder

Our first and naive implementation of the String method on the feedback
type is to create a string, and append each hint’s representation as we go
through the feedback’s hints.

Listing 5.25 Naive implementation of building a string

// StringConcat is a naive implementation to build feedback as a string.


// It is used only to benchmark it against the strings.Builder version.
func (fb feedback) StringConcat() string {
var output string
for _, h := range fb {
output += h.String()
}
return output
}

However, there is an important lesson here: one should never concatenate Go
strings in a loop. We’ve written this function here only for teaching purposes.

In Go, strings are immutable. Constant. We cannot alter them. We can’t even
replace a character in a string without casting something to a slice, and
something back to a string. This makes string manipulation quite painful,
especially for what would seem the simplest task - sticking two strings
together. When we use the + operator on two strings, Go will allocate
memory for a new string of the correct size and copy the bytes of each
operand into that new string.

While this is simple and clear when concatenating two strings together, it
becomes slower as soon as we have several strings to merge. Keep in mind
that, when the number of strings to connect exceeds two, there are two quite
common alternatives that are worth checking:

strings.Join(elems []string, sep string) string : returns a string of the


elements separated by the separator (usually a whitespace or a comma).
Works only if you already have a slice of strings, which is not our case
here.
strings.Builder: Slightly more complex, but also a lot more versatile.
Under the hood, a strings.Builder stores the characters in a slice of
runes, which is a lot easier to grow than a rock-solid string. This is the
option we use in our example.

The strings package provides the type Builder that lets you build a string
by appending pieces of the final string, while minimising the number of
memory allocations and reallocations every time we add some characters.

In order to use the Builder, we declare a new variable and use it to fill the string.
This type exposes several methods that can be used to append characters to
the string being built: WriteString, WriteRune, WriteByte and the basic
Write, which takes a slice of bytes. In our case the WriteString method is
the most appropriate, since we know how to make a string from a hint.
Once we’re done feeding data to the builder, calling String() on it will
return the final string.

Listing 5.26 hint.go: String() on feedback type

// String implements the Stringer interface for a slice of hints.


func (fb feedback) String() string {
sb := strings.Builder{}
for _, h := range fb {
sb.WriteString(h.String())
}
return sb.String()
}

Want to check the difference? See Appendix D.1 for how to benchmark your
code. Once we’ve selected which implementation we’d rather use, let’s not
forget to test this method. Testing feedback.String() will cover
hint.String(), which will be enough - therefore, no need to also test the
hint.String() method.

We are now ready to send feedback to Claudio - but we are missing a small
piece of information here. We don’t know yet which characters are correctly -
or incorrectly - located! This will be our next task before the game can be
enjoyed.

5.2.2 Checking a guess against the solution

This section is about approaching a new problem. Whatever the language you
use, there will be times when you need to roll away from the screen, take a
piece of paper and pen, and think about the best way to solve your problem.
In our case, we want to make sure the hints we give are accurate. A letter in
the correct position should always be marked as in the correct position. A
letter in the wrong position should only be marked as such if it appears
unmatched elsewhere in the word. We need to make sure we cover double
letters properly - for instance, what should be the feedback to the word
“SMALL” if the solution is “HELLO” ?

As this book is not about algorithms, we’ll start with the pseudo-code of the
check function that implements our solution. Feel free to think about it
yourself before jumping to our solution.

Pseudo-code is an intermediate representation of the code’s logic with


sentences and words rather than instructions. Pseudo-code doesn’t have an
official grammar - sometimes, the loops end with END FOR, sometimes with a
curly brace. It’s up to you to decide how you want to write it. Reading your
own pseudo-code should be at least as clear as reading code, and you’ll have
access to operators that might not exist for a given programming language.
Pseudo-code magically offers any function that you can dream of (although
you might have to implement them later on). Here, we want to highlight the
use of a “fake” operator such as . mark

Our pseudo-code’s syntax will be close to that of Go, with curly braces,
because we think it makes more sense in this book. We had previous drafts of
pseudo-code that used boxes, arrows and loops.

Listing 5.27 Pseudo-code of computeFeedback()


func computeFeedback(guess, solution) feedback {

for all characters of the guess { #A


mark character absent
}

for each character of the attempt { #B
if the characters at this position in the solution and the guess are the same {
mark character as seen in the solution
mark character with correct position status
}
}

for each character of the guess { #C


if current character already has a hint {
skip to the next character
}

if character is in the solution and not yet seen {
mark character as seen in the solution
mark character with wrong position status
}
}

return the hints


}

Once we’ve written the pseudo-code, we can shoot some examples at it and
see how it behaves. By first iterating over correctly placed characters, and
then over those that are misplaced, we get the expected output for “SMALL”
vs “HELLO”:

SMALL
HELLO
⬜⬜⬜💚🟡

We need to think about how to implement the different parts that are still
“pseudo-code magic”. How do we mark a character with a hint? How do we
mark a character as seen in the solution? There are lots of ways of
implementing this, we’ll go with a simple approach here: we’ll use a slice of
hints to mark characters of the guess with their appropriate hint, and we’ll use
a slice of booleans to mark characters of the solution as either seen or not yet
seen.
We recommend you give it a try before checking the solution.

Listing 5.28 game.go: computeFeedback() method

// computeFeedback verifies every character of the guess against the solution.


func computeFeedback(guess, solution []rune) feedback {
// initialise holders for marks
result := make(feedback, len(guess)) #A
used := make([]bool, len(solution)) #B

if len(guess) != len(solution) {
_, _ = fmt.Fprintf(os.Stderr, "Internal error! Guess and solution have different lengths: %d vs %d", len(guess), len(solution))
return result #C
}

// check for correct letters


for posInGuess, character := range guess {
if character == solution[posInGuess] {
result[posInGuess] = correctPosition
used[posInGuess] = true
}
}

// look for letters in the wrong position


for posInGuess, character := range guess {
if result[posInGuess] != absentCharacter {
// The character has already been marked, ignore it.
continue
}

for posInSolution, target := range solution {


if used[posInSolution] {
// The letter of the solution is already assigned to a letter of the guess.
// Skip to the next letter of the solution.
continue #D
}
if character == target {
result[posInGuess] = wrongPosition
used[posInSolution] = true
// Skip to the next letter of the guess.
break #E
}
}
}

return result
}

A tricky part here is handling the case where the guess and the solution have
different lengths. Since this is our code, we know this can't happen - it's been
checked earlier. But if somebody changes the code later (including future
Lucio, who forgot everything he wrote and why), it will end in an out-of-range
panic at runtime; we can't even warn him with a unit test. For this
reason, we decide to re-check the length of the guess against the length of the
solution here.

Another option would have been to return both a feedback and an error from the
computeFeedback function. Such assumptions are tolerable in internal
functions, but they would absolutely not be accepted in exposed functions,
because we don't control the range of values that can be passed to functions
available to the rest of the world.
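
For illustration, here is roughly what that error-returning variant could look like. This is only a sketch of ours, not the book's code; it reuses the marking logic by delegating to computeFeedback once the lengths have been validated.

func computeFeedbackChecked(guess, solution []rune) (feedback, error) {
	// Reject mismatched lengths instead of silently returning a partial result.
	if len(guess) != len(solution) {
		return nil, fmt.Errorf("guess and solution have different lengths: %d vs %d", len(guess), len(solution))
	}
	return computeFeedback(guess, solution), nil
}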

Congratulations, you implemented the most difficult part! Now, let’s add
some tests.

Testing computeFeedback()

To properly test the computeFeedback method, we need to provide a guess, a
solution, and the expected feedback. Once we have these, we can call
computeFeedback and verify that the received feedback is the expected one.

To easily compare two feedbacks, we write a helper method next to the
feedback definition. The package github.com/google/go-cmp/cmp from
Google provides some insight into how we should name this method:
“Types with an Equal method may use that method to determine equality”.
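
For reference, if we pulled that go-cmp dependency into the module, the comparison in the test could look like the following sketch; cmp.Diff returns an empty string when the two values are equal, and it relies on an Equal method when the type defines one. In this chapter we stick to a tiny, dependency-free helper instead.

// A sketch using github.com/google/go-cmp/cmp (it would need to be added to go.mod).
if diff := cmp.Diff(tc.expectedFeedback, fb); diff != "" {
	t.Errorf("guess: %q, feedback mismatch (-want +got):\n%s", tc.guess, diff)
}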

Listing 5.29 hint.go: Equal() helper

// Equal determines equality of two feedbacks.


func (fb feedback) Equal(other feedback) bool {
if len(fb) != len(other) {
return false
}

for index, value := range fb {


if value != other[index] {
return false
}
}

return true
}

Earlier, to compare two slices, we used the golang.org/x/exp/slices


package. As always, when writing code, there is more than one way of doing
things. Here, we offer a different take, which is as valid as the previous one.
You will find arguments for both over the internet. Our recommendation is to
use whichever is clearer to you. If you’re curious, checking the
implementation of slices.Equal is worth the time, but it requires some
understanding of generics, which is a topic for later.

You now know how to write a test in a table-driven way. First, we define our
structure holding the required elements for our test case, inputs and expected
outputs. Then, we write our use cases, and finally we call the method and
check the solution. Here, for the sake of clarity and to avoid unnecessary
clumsiness, we’ve decided to use strings instead of slices of runes in the
structure of our test case for the guess and the solution. The conversion from
a string to a slice of runes is simple and safe enough to be performed during
the execution of the test. On the other hand, we want to explicitly check the
contents of the returned feedback, and, for this reason, we have a
feedback
field in the test case.

Listing 5.30 game_internal_test.go: computeFeedback() tests

package gordle

import "testing"

func TestComputeFeedback(t *testing.T) {


tt := map[string]struct {
guess string
solution string
expectedFeedback feedback
}{
"nominal": {...},
"double character": {...},
"double character with wrong answer": {...},
"two identical, but not in the right position (from left to right)": {
guess: "hlleo",
solution: "hello",
expectedFeedback: feedback{correctPosition, wrongPosition, correctPosition, wrongPosition, correctPosition},
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
fb := computeFeedback([]rune(tc.guess), []rune(tc.solution))
if !tc.expectedFeedback.Equal(fb) {
t.Errorf("guess: %q, got the wrong feedback, expected %v, got %v", tc.guess,
} tc.expectedFeedback, fb)
})
}
}

Exercise: Part of the fun of a project is to come up with some edge case
scenarios. Try and find some that would push the logic to its limits.

Finally, we need to integrate the computeFeedback call in the Play function.


This isn’t too difficult, especially as we only want to print the feedback to
Claudio.

Listing 5.31 game.go: Update the Play() function to display the feedback

[...]
for currentAttempt := 1; currentAttempt <= g.maxAttempts; currentAttempt++ {
guess := g.ask()

fb := computeFeedback(guess, g.solution) #A

fmt.Println(fb.String()) #B

if slices.Equal(guess, g.solution) {
fmt.Printf(" You won! You found it in %d guess(es)! The word was: %s.\n",
return currentAttempt, string(g.solution))
}
}
[...]

And finally this is what playing the game looks like!

$ go run main.go
Welcome to Gordle!
Enter a 5-character guess:
hairy
⬜⬜⬜⬜ #A
Enter a 5-character guess:
holly
⬜ #B
Enter a 5-character guess:
hello
#C
You won! You found it in 3 attempt(s)! The word was: hello.

We now have a solution checker and we are able to give Claudio some well-
deserved feedback! Feedback makes it a lot easier for the player to find the
solution. However, there is a small final detail we still need to address that
will provide even more fun: how about Claudio getting a different word
every time he plays Gordle? We proved that our implementation works with
a hardcoded solution; it is time to add a corpus and add randomisation to our
game.

5.3 Corpus
In linguistics, a corpus is a collection of sentences or words assumed to be
representative of and used for lexical, grammatical, or other linguistic
analysis. Our corpus will be a list of words with the same number of
characters.

Until now, we have been using a hardcoded solution and ensured our
algorithm was working as expected. In this section, we will focus on adding
randomisation to our game by picking a word from a given list. Let’s first
retrieve a list of words and then pick a random word in it as the solution of
the game.

5.3.1 Create a list of words

First, we create a corpus directory with a file named english.txt. This file
contains a list of uppercase English words, one per line. Our corpus was built
while playing other versions of the game. Feel free to use the adequate list for
your own game. Adding a new corpus for a different language, or for a
different list of words (6-character long, for instance) is now simple: all we
have to do is add a file here and have the program load it.

5.3.2 Read the corpus

Parsing a file is a very common task that most programs face. It could be a
configuration file with default values to load, an input file as we have here, a
database query, an image, a video file, or anything that comes to your mind.
If it exists on a disk, a program is going to read it. In our case, we want to
read the corpus file as a list of words that we will store in a slice of strings.

Start by creating a new file, corpus.go , where all methods related to the
corpus will live.

How can we read the corpus? As it happens, the os package provides a


ReadFile method which takes the path to a file on disk as a parameter, reads
it and returns its contents as a slice of bytes. It reads the whole file or returns
an error if something bad happened.

The signature of os.ReadFile is:

func ReadFile(name string) ([]byte, error)

It’s good to keep in mind that files, when written on disk, are nothing but a
chunk of bytes. Nice characters, spaces, tabulation, tables, etc. are rendered
by file editors. This book was saved as some 0’s and 1’s. This is why we
don’t immediately have a list of lines out of the ReadFile function. That
logic has to be implemented by us, at reading time. We know that some of
these bytes are the new line character - but let’s not rush to an easy solution
that would be to split this slice of bytes on \n. Indeed, what if the byte
representation of \n (0x0a) was in fact part of the multi-byte representation of
a longer, non-ASCII character? Or what if the file was encoded differently,
with a new line represented not by \n alone, but by \r\n?

Manipulating an array of bytes in our case is not very practical, so we will


convert it to a string in order to split on any whitespace, including the new
line characters. The strings package exposes Split and its siblings
SplitAfterN, SplitN, Cut, and Fields. These functions come in handy when
the need to split strings arises. In our case, the basic Fields is enough, as it
will split the string into a slice of its substrings delimited by all default
whitespaces, which relieves us from the trouble of knowing them. This slice
of substrings is our list of words eligible to become a solution.

The signature of strings.Fields is:

func Fields(s string) []string
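
To give a concrete feel for it, here is a tiny example of Fields on a corpus-like string (the words are made up for the illustration); it splits on newlines and spaces alike and discards the empty leftovers:

words := strings.Fields("HELLO\nWORLD\nGOPHER\n")
fmt.Println(len(words), words) // 3 [HELLO WORLD GOPHER]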

The full code of the ReadCorpus method looks like this:

Listing 5.32 corpus.go: ReadCorpus method

const ErrCorpusIsEmpty = corpusError("corpus is empty")

// ReadCorpus reads the file located at the given path


// and returns a list of words.
func ReadCorpus(path string) ([]string, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("unable to open %q for reading: %w", path, err)
}

if len(data) == 0 {
return nil, ErrCorpusIsEmpty
}

// we expect the corpus to be a line- or space-separated list of words


words := strings.Fields(string(data))

return words, nil


}

Sentinel errors

Error management is at the heart of software development, whatever your


chosen language and whatever application you are making. Say you try to
read a file line by line: the file might not exist, you might lack the adequate
rights to read it, it could be empty or incomplete, or it could be readable all
the way to the end. In all these cases, you get
an error back; your program’s reaction will be different depending on which
error case you fall into.
You would like to check the error that was returned, with a line of code such
as err == ErrNoSuchFile or err == EOF .

Sentinel errors are a kind of recognisable error. In Go, “errors are values”,
meaning that they carry meaning. Sentinel errors must behave like
constants, but Go will only accept primitive types as constants, and not
method calls. Unfortunately for us, the two default ways to build an error are
by calling fmt.Errorf or errors.New . And these don’t produce constant
values - they produce the output of a function, which isn’t known at compile
time, only at execution time. This implies that errors generated by
fmt.Errorf or errors.New will always be variable. So, how do we get the
constant errors we’d like? We declare our own type and have it implement
the error interface:

Listing 5.33 errors.go: Sentinel error corpusError

package gordle

// corpusError defines a sentinel error.


type corpusError string

// Error is the implementation of the error interface by corpusError.


func (e corpusError) Error() string {
return string(e)
}

Here, we can declare a corpusError that is a constant (it is as primitive as a


string) and still implements the error interface. Yes, we wish this type were
in the standard library. Maybe in a future version of Go.

Small note: if you look at io.EOF in the code, you’ll realise it is a global and
exposed variable - it was generated at execution time by a call to errors.New.
Don’t do that at home. Imagine a pesky colleague were to do this:

io.EOF = nil
...
if err == io.EOF { // oops

Test the reading


Now, we can test if we can actually read a file full of words into a slice of
string. Let’s add a nominal case reading from the corpus we created for the
English list of words, verifying the length and the associated error, if any.

Listing 5.34 corpus_test.go: Test ReadCorpus()

package gordle_test

import (
"testing"

"learngo-pockets/gordle/gordle"
)

func TestReadCorpus(t *testing.T) {


tt := map[string]struct { #A
file string
length int
err error
}{
"English corpus": { #B
file: "../corpus/english.txt",
length: 35,
err: nil,
},
"empty corpus": {
file: "../corpus/empty.txt",
length: 0,
err: gordle.ErrCorpusIsEmpty,
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
words, err := gordle.ReadCorpus(tc.file) #C
if tc.err != err {
t.Errorf("expected err %v, got %v", tc.err, err)
}

if tc.length != len(words) {
t.Errorf("expected %d, got %d", tc.length, len(words))
}
})
}
}

We are now happy: we have our corpus in a handy form, read from a
file that can be updated in the simplest way possible - just add a new word to
it as a new line. Gordle now knows a list of words. If we pick one - and try to
make it different every time - Claudio will face a different challenge every
time he plays the game!

5.3.3 Pick a word

Every game of Gordle needs a random word for the player to guess. We have
a corpus, all that's left is to select one word from our list.

Libraries implementing random number generators are under a lot of


pressure, as they need to comply with very strict requirements. One would
expect, for instance, a random number generator to produce every possible
number with the same probability, and to take the same amount of time doing so.

Go’s math/rand package provides a random number generator - but there is


another package that also achieves this in Go’s standard packages - the
crypto/rand package. The main difference is that the crypto package
guarantees truly random numbers, while themath package generates pseudo-
random numbers. Oh, and the crypto package is a lot more expensive. As a
rule of thumb, for small non-critical applications, using the math package is
perfectly fine. When it comes to passwords, tokens, or security-related
objects, using the crypto package is recommended.
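
As a rough, side-by-side sketch of the two APIs (note that crypto/rand works with *big.Int and returns an error, unlike math/rand):

package main

import (
	cryptorand "crypto/rand"
	"fmt"
	"math/big"
	mathrand "math/rand"
)

func main() {
	// math/rand: cheap pseudo-random numbers, fine for picking a word.
	fmt.Println(mathrand.Intn(10)) // an int in [0, 10)

	// crypto/rand: cryptographically secure, more expensive, returns an error.
	n, err := cryptorand.Int(cryptorand.Reader, big.NewInt(10))
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // a *big.Int in [0, 10)
}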

Both of Go’s rand packages expose, amongst others, an Intn(n int)


function that returns a number between 0 (included) and n (not included).
These packages are built on algorithms using a base value called source that
can be overridden. Overriding it with something that changes every time will
ensure we get a random number out of the library. Earlier versions of Go
(prior to 1.20) required the rand package to be seeded, with a call to
rand.Seed(seed) . The random number generator is now seeded randomly
when the program starts - there is no real point in calling it. If you aren't
using a recent version of Go, remember to add a rand.Seed call yourself.

Now that we know how to get a random number, picking a random word in a
list is straightforward - simply get the word at the random index.

index := rand.Intn(len(corpus))

The pickWord function is implemented as follows:


Listing 5.35 corpus.go: pickWord() method

// pickWord returns a random word from the corpus


func pickWord(corpus []string) string {
index := rand.Intn(len(corpus))

return corpus[index]
}

How to test a random func?

You know the importance of testing the core methods to make sure they are
working properly before calling them into higher methods. pickWord will
follow that trend, but there is a minor issue. When we execute tests, usually,
we want to compare an output to a reference. pickWord, by design, has a non-
deterministic output. When this happens, we have two solutions. We can
change the behaviour of the random number generator from the test (but then
we’re not testing anything). Or we can assert a truth about the output: what
we really want to test is whether the method returns a word from the list, or
the results we get when calling “a lot” of times the random function follows
the expected distribution. So, we will go for the second approach, and ensure
that the word
pickWord returns was indeed in the initial list. For this, we
won’t use Table-Driven Test, as we won’t have a wide variety of cases.

Let’s first write a helper function to verify a word is present in a list of words.
There is no special trick here, we have to range over the list and, if the word
corresponds to the input, immediately return true. Otherwise, we return false.
This function, similarly to the previous two that compared slices, is also a
very common one that we can make more generic.

Listing 5.36 corpus_internal_test.go: Helper inCorpus()

func inCorpus(corpus []string, word string) bool {


for _, corpusWord := range corpus {
if corpusWord == word {
return true
}
}
return false
}
Earlier, we shyly ventured into the world of the golang.org/x/exp/slices
package to use the Equal function. That slices package also offers a Contains
function with a very similar signature to our inCorpus. We believe that
practice makes perfect and that it doesn't hurt to have your own tiny
implementation close by.
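
For reference, this is roughly what the test check would look like with the experimental package instead (assuming golang.org/x/exp/slices is added to go.mod); in this chapter we keep our own inCorpus helper:

if !slices.Contains(corpus, word) {
	t.Errorf("expected a word in the corpus, got %q", word)
}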

With the help of this small function, we can now add a test that will ensure
pickWord returns a word from the input slice.

Listing 5.37 corpus_internal_test.go: Test pickWord

func TestPickWord(t *testing.T) {


corpus := []string{"HELLO", "SALUT", "ПРИВЕТ", "ΧΑΙΡΕ"}
word := pickWord(corpus)

if !inCorpus(corpus, word) { #A
t.Errorf("expected a word in the corpus, got %q", word)
}
}

Now we have done the implementation and covered the testing, we are ready
to wrap it up! Do you remember that nasty hardcoded solution in the Game
structure creation? It’s time to replace it by calling the pickWord method and
passing the corpus as a parameter of New() .

We want Gordle to be independent and reusable by anyone with a list of


words. For this reason, we'll pick the solution in New rather than have it
provided by the rest of the world. Even the main function doesn't know the
hidden word! However, we must now ensure that the corpus is valid - it
should have at least one word. If the corpus is empty, the New function won't
be able to create a playable game, and we need to return an error. This will
change the New() function's signature.

We are now also reaching the moment where New() does a lot. Not only does
it create a Game, but it also initialises it. We won’t push it any further, and
instead consider that it might be time to split it into two distinct functions,
each with its responsibilities. For now, let’s just add that final cherry on top
of the New() function:

Listing 5.38 game.go: Update New() with a corpus


// New returns a Game variable, which can be used to Play!
func New(reader io.Reader, corpus []string, maxAttempts int) (*Game, error) {
if len(corpus) == 0 {
return nil, ErrCorpusIsEmpty
}
g := &Game{
reader: bufio.NewReader(reader),
solution: []rune(strings.ToUpper(pickWord(corpus))), // pick a random word from the corpus
maxAttempts: maxAttempts,
}

return g, nil
}

Now, we have everything ready. Claudio's been waiting a long time to play,
let’s adjust the call in the main function and give him the keyboard!

5.3.4 Let’s play!

There is very little left to do before the game is complete. Only a few changes
remain in the main function - we've got a corpus, and we need to
parse it and feed it to Gordle's New() function. Since this New() function now
returns an error, we should take care of it. Let's write a message on the error
output and leave the main() function with a return.

Listing 5.39 main.go: Calling ReadCorpus() in main

package main

import (
"bufio"
"fmt"
"os"

"learngo-pockets/gordle/gordle"
)

const maxAttempts = 6

func main() {
corpus, err := gordle.ReadCorpus("corpus/english.txt") #A
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "unable to read corpus: %s", err)
return
}

// Create the game.


g, err := gordle.New(bufio.NewReader(os.Stdin), corpus, maxAttempts) #B
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "unable to start game: %s", err)
return
}

// Run the game ! It will end when it's over.


g.Play()
}

That’s enough typing from our side, time to let Claudio smash these keys
frenetically, in search of one of Gordle’s secret words.

$ go run main.go
Welcome to Gordle!
Enter a 5-character guess:
sauna
⬜⬜⬜⬜⬜
Enter a 5-character guess:
waste
⬜⬜⬜⬜
Enter a 5-character guess:
hello

⬜⬜⬜
Enter a 5-character guess:
terse
⬜ ⬜
Enter a 5-character guess:
crept
⬜ ⬜⬜
Enter a 5-character guess:
freed

You won! You found it in 6 attempt(s)! The word was: FREED.

5.4 The limit of runes


Claudio enjoyed this so much he wants to submit his list of words! He wants
to share Gordle with his friend Mithali, who lives in India. He writes a small
list of words, to make sure the program behaves as expected, using the
Devanagari characters - which are used to write in Hindi, just like the Latin
characters are used to write in English. The first word he writes is नमस्ते -
“Namaste”, meaning “Hello”. It’s composed of four characters, but after
updating the main function and reading from this new hindi.txt file,
Claudio gets prompted “Enter a 6-character guess: ”. He comes back to
you, unhappy with the program. What’s happening?

Devanagari, as opposed to Latin, isn’t an alphabet. Instead, it is an abugida -


a system in which (simply put) vowels alter consonants. If we look at the
word नमस्ते, we can split it into its different sections: न is pronounced “na”, म
is pronounced “ma”, and स्ते is pronounced “ste”. This last section is actually
the combination of the “sa” letter, written स, without its “a” part, and the “ta”
section, written त, which is here written ते, because the sound “e” must be
present, as represented by the matra - a descending bar above the shirorekhā,
the horizontal line.

The combination of characters here is also important - even though the spelling न +
म + स् + ते would produce the same sound, the rules of Devanagari
combine the last two symbols, “s” and “te”, into one: स्ते, “ste”.

Now, let’s see how Go deals with the “नम ◌े” string:

Listing 5.40 Understanding the नमस्ते case

package main

import "fmt"

func main() {
	s := "नमस्ते"
	for _, r := range []rune(s) {
		fmt.Print(string(r) + " ")
	}
}

Running this small program produces the following output:

न म स ◌् त ◌े

We can see that Go did indeed split the string into six runes. We’ve already
seen four of them, those with the shirorekhā: they are called swars in Hindi.
The other two are a bit cryptic - they represent a dotted circle with a
decoration - something called a diacritic. In Devanagari, this is one way of
representing matras (which include, but aren’t restricted to, vowels).
Diacritics are alterations to existing characters, they don’t have an existence
on their own. English has some diacritics, mostly in borrowed words such as
déjà-vu or señor: the accents on the first word’s vowels can’t be written
without their supporting vowel, and the same goes for the tilde, which needs
a letter to sit on.

As we can see, Go won’t merge the diacritic ◌् with the character स, when
splitting the string into runes. This character remains two different runes for
Go. So, what can Lucio do to help Claudio? Unfortunately, this is the limit of
what the native rune type of the Go language can support. But this is
precisely what the golang.org/x packages are for - extending the limits of
what Go natively accepts. In our case, the package
golang.org/x/text/unicode/norm provides a type Iter that can be used for
these strings. With a bit more work on the code, Mithali will be able to play
Gordle too!

5.5 Conclusion
Finally, we’ve completed our objective! We’ve written a command-line game
that lets a user interact with it via the standard input. Our game reads data
from a file containing words, selects one at random, and has the player guess
the word. After each attempt, we provide visual feedback to help the player
towards the solution. Whatever happens, we’ve tried to print clear messages
to the player so they don’t get lost with what to do next.

5.6 Summary
A switch / case statement is a lot more readable than a long sequence of
if / else if / else statements. A switch can even be used instead of an
if statement. We think that, if you need an else statement, you're better
off with a switch block.
A command-line tool often needs to read from the console input. Go
offers different ways of doing it; in this chapter, we used bufio.Reader's
ReadLine method, which reads the input line by line.
Sentinel errors are a simple way of creating domain errors that can be
exposed for other packages to check. It is a cleaner implementation than
creating exposed errors with errors.New() or fmt.Errorf(). To declare
a new sentinel error type, declare a new type that is defined as a string
(this makes creating new errors simple).
Propagating an error to the caller is the way Go handles anything that
steps out of the happy path. Functions that propagate errors have their
last return value of their signature be an error. In the implementation of
these functions, fmt.Errorf("... %w", … err) is the default way of
wrapping errors. The w in %w stands for “wrap”.
Any structure with a method with the following signature: String()
string implements the fmt.Stringer interface. Any structure that
implements the Stringer interface will be nicely printed by fmt.Print*
functions.
The os package provides a ReadFile function that loads a file's contents
as a slice of bytes. This function can be used for plain-text files, media
files, files in XML or HTML format, etc.
The golang.org/x/exp/slices package contains useful tools such as
the Equal function or the Contains function. However, the
documentation mentions that they could move out of /x/exp at any
point. As we’ve seen, implementing the function for a specific use case
isn’t too complex.
A Go string can be parsed as either a slice of bytes, or as a slice of
runes. The latter is recommended when iterating through the characters
that compose it. Use []rune(str) to convert the str string to a slice of
runes. However, even this solution isn’t perfect and won’t always work.
Best to first check the language you’re dealing with to select the best
libraries to parse any text.
All receivers of a specific type should be either pointer or value
receivers. Using value receivers is only interesting if the structure is
small in memory, as it will copy it. When in doubt, use pointer-receiver
declarations.
When writing table-driven tests, it is a very common practice to use a
map[string]struct{...} . The key of the map, thestring , describes
the test case, and thestruct is an anonymous structure that contains the
fields necessary for your test case.
Getting a random number can be achieved by both the math/rand and
the crypto/rand packages. Anything related to security, cyphering, or
cryptographic data should use the crypto/rand package, while the
math/rand is cheaper to use.
When working with random numbers, make sure you’re using Go 1.20
or more. Otherwise, be explicit about setting the seed with a call to
rand.Seed(). A usual value for the seed used to be the current
nanosecond, retrieved with time.Now().Nanosecond().
Taking a step back, away from a screen, and writing pseudo-code with
potatoes and arrows is valuable, and helps to see the bigger picture and
imagine tricky scenarios that might prove or disprove an algorithm.
6 Money converter: CLI around an
HTTP call
This chapter covers

Writing a CLI
Making an HTTP call to an external URL
Mocking an HTTP call for unit tests
Grasping floating-point precision errors
Parsing an XML-structured string
Inspecting error types

A long list of websites nowadays exposes useful APIs that can be called via
HTTP. Common examples are the famous open-source system for
automating deployment Kubernetes, weather forecast services, international
clocks, social networks, online databases such as BoardGameGeek or the
Internet Movie Database, content managers like WordPress, the list is long. A
small number of them also provide a command-line tool that calls these APIs.
Why? Even though nice and clickable interfaces are wonderful, they are still
very slow. Here's an example: when we look up a sentence in our favourite
search engine, it still takes an extra click to access the first link that isn't an
advert, or the first one we haven't opened yet. The terminal shell, on the other
hand, allows us to manipulate inputs and outputs of programs - and even to
combine them, which reduces the number of command lines and helps
automate more of our work.

In this chapter, we will create a CLI tool that can convert amounts of money.
Starting with a broad view of what we want our tool to achieve, we will begin
by defining the main concepts: what is a currency, and how do we represent
it? What does it mean to convert money? We’ll need a change rate, how do
we get it? What should our input and our output be? How do we parse the
input? As we’ll see, some precaution is required when manipulating floating-
point precision numbers. An early disclaimer is required here: this project is a
tutorial project and shouldn’t be used for real-life transactions.
Requirements

Write a CLI tool


Takes 2 currencies and an amount
Returns the converted amount
Safely rounds to the precision of the given currencies
Currency should be provided according to the ISO-4217 standard

Limitations

The input amount must be defined with digits only, and one optional dot
as a decimal separator. We could extend later with spaces, underscores,
or apostrophes.
We only support decimal currencies. Sorry, ariaries and ouguiyas.

Usage example: change -from EUR -to USD 413.9

6.1 Business definitions


One approach to building software is to start with the business definitions.
Indeed, understanding what you are trying to achieve is a good way to avoid
solving a different problem. In our case, the big picture is that we want a
command-line tool that takes an amount of money expressed as a quantity
and its currency, and another currency as the target, and that will present the
converted amount as its output. Our business words include “amount”,
“quantity” (a decimal value), “currency”, and “convert”.

As we want to convert a certain amount of money between two currencies,


we will start by defining what a currency is, what an amount is, and export a
Convert function that takes these as input.

New project, new module

By now, you know how to initialise a new module. Create your folder,
initialise it:

go mod init learngo-pockets/moneyconverter


Let’s start simple and create amain.go file at the root of the project, with the
package namemain and the usualfunc main() .

As long as we're only having a single binary, it's fine to have the main.go file
at the root directory. For tools that expose several binaries, the common place
for the main function is in cmd/{binary_name}/main.go.
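
For instance, a tool that ships two binaries (the binary names below are hypothetical, purely to illustrate the layout) could be organised like this:

$ tree
.
├── cmd
│   ├── converter
│   │   └── main.go
│   └── server
│       └── main.go
├── go.mod
└── money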

6.1.1 money.Convert converts money

While the main function is responsible for running the executable in a


terminal, reading input and writing output, most of the logic will reside inside
a subpackage: themoney package’s scope will be the heart of our domain
logic. The main package will be in charge of calling this money package. As a
general rule, imagine that the package can be reused, for example if you start
writing a fancy user interface, but don’t over-engineer it until you know what
you actually need.

We now create a folder named money containing one file that will expose the
package's entrypoint, the Convert function. We can start
writing the contents of convert.go: it has to be an exposed function.

Listing 6.1 convert.go: signature of the converter’s entrypoint

package money

// Convert applies the change rate to convert an amount to a target currency.


func Convert(amount Amount, to Currency) (Amount, error) {
return Amount{}, nil #A
}

The two parameters are hopefully self-explanatory. We want to convert a


given amount into the currency to . The function will return an amount of
money or, if something goes wrong, an error.

In order to make our project compile, we need to define the two custom
types: Amount and Currency . We can already anticipate that they will hold a
few methods, e.g. String() to print them out. This calls for a file for each of
the types, ready to hold their future methods.
Currency

The ISO-4217 standard associates a three-letter code to every currency used


out there in the real world, for example USD or EUR. As this will be our
input, we can start by using that three-letter code to define our Currency type
that will represent the currency code with a field of type string .

Create a currency.go file in the same package and add the following
structure.

Listing 6.2 currency.go: Currency definition

// Currency defines the code of a currency.


type Currency struct {
code string #A
}

Immutability

The code string is hidden inside the struct for any external user. We will
continue building all of our types so that they stay immutable, meaning that
once they are constructed, they cannot be changed.

We do that to make the code more secure for the package’s users (that is, us):
if we have 10 euros, they will not suddenly become 19.56 Deutsche Mark
because we called a function on them. Immutability also makes the objects
inherently thread-safe.
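
If consumers of the package ever need to read the code, one conventional approach is a read-only accessor; this is only a sketch of ours, and the final package may expose something different (a String method, for instance):

// Code returns the ISO-4217 code of the currency.
// A value receiver and the absence of a setter keep Currency immutable from the outside.
func (c Currency) Code() string {
	return c.code
}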

Amount and Decimal

As our tool will convert money, we need to be able to represent a quantity of


money in a given currency. This is what the Amount type needs: a
decimal value and a currency. Let's create the amount.go file in the package
money and add the following structure.

Listing 6.3 amount.go: Amount struct

// Amount defines a quantity of money in a given Currency.


type Amount struct {
quantity Decimal #A
currency Currency #B
}

Why is quantity not simply a float? For a start, if we want to attach some
methods to it, it needs to be a custom type. Second, we will see in the next
part that floats are dangerous - there are many possible ways to save this
number and if we want to make room for later optimisation, we need to hide
the entrails behind a custom type.

As we don’t know yet how we’ll write these internal details, let’s leave it
empty for now. It is not necessary to have one struct per file, or to name files
after the main struct they contain, but it is a good way for maintainers to find
what they are looking for.

Listing 6.4 decimal.go: Decimal struct

// Decimal is capable of storing a floating-point value.


type Decimal struct {
}

At this point your project’s tree should have one directory and a total of 5 go
files:

$ tree
.
├── go.mod
├── main.go
└── money
├── amount.go
├── convert.go
├── currency.go
└── decimal.go

Everything should now compile with the go build -o convert main.go


command. Congratulations, we’ve defined the business entities of our library.

Before we start filling it up, let’s write a test.

Testing Convert
Testing a function that does nothing is pretty preposterous, you might think.
We would like to argue that if you can’t write a test that is easy to understand
and to maintain, your architectural choices are on the wrong path. If writing
the test is a mess, rethink your code organisation even before starting to work
on the business logic. Unfortunately, easy testing is not a guarantee of a good
architecture, or the world would be a better place.

More for learning reasons than anything else, we chose to use a validation
function here.

Validation function

In Chapter 2, we learned about writing Table-Driven Tests, where you define


a list of test cases, each in its instance of a custom structure. The expected
return value is directly in the structure in each case. Usually, you will see a
function returning a pair of a value and an error. A validation function can
ensure that a returned result is valid, on a case-by-case basis. Sometimes, you
want to dig deep into the returned value. Sometimes, you want to ensure the
error is the expected one. Most of the time, it's pointless to fully check both
the returned value AND the error - only one of them will be set.

A validation function is a field of the test case structure; it takes as
parameters a *testing.T and all the values necessary for the check. It does not
return an error but fails directly if something wrong happens. For our
Convert function, we will need the value we got and the error.

tt := map[string]struct {
// input fields
validate func(t *testing.T, got money.Amount, err error)
}{

Now wait a second. Is that field actually a function? Yes! Go allows for the
definition of variables of many types, including functions of specific
signatures. You can see examples of what it looks like in test cases below.

Listing 6.5 convert_test.go: Check the testability of the design choices

package money_test
import (
"testing"

"learngo-pockets/moneyconverter/money"
)

func TestConvert(t *testing.T) {


tt := map[string]struct {
amount money.Amount
to money.Currency
validate func(t *testing.T, got money.Amount, err error) #A
}{
"34.98 USD to EUR": {
amount: money.Amount{}, #B
to: money.Currency{}, #B
validate: func(t *testing.T, got money.Amount, err error) { #C
if err != nil {
t.Errorf("expected no error, got %s", err.Error())
}
expected := money.Amount{} #D
if !reflect.DeepEqual(got, expected) {
t.Errorf("expected %v, got %v", expected, got)
}
},
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
got, err := money.Convert(tc.amount, tc.to)
tc.validate(t, got, err) #E
})
}
}

Enough suspense, let’s code thisDecimal .

6.2 How to represent money


How should we represent a given amount of money? Say, 86.33 Canadian
dollars. Or, when using the ISO-4217 standard, 86.33 CAD.

A first idea could be to simply use a float. Unfortunately, there are two
problems in this naive approach.
First, we are not preventing anyone from declaring 86.32456 CAD, which
bears no real-world meaning. The smallest subunit of this Canadian dollar is
the cent, a hundredth of a dollar. Anything smaller than 0.01 CAD must be
rounded one way or another. We want to prevent this nonsense from
happening by design. This means that the way we build this
Decimal struct should prevent it from ever happening, not because of
safeguards that we may accidentally remove, but because it should simply be
impossible.

Second, the precision of the floating point numbers is worth diving into.

Floating-point numbers

Using integers in computer programming is straightforward - all you need to


pay attention to is whether they’re not too big. Using floating-point numbers
is a very different story, and, whenever using floating-point numbers one
should always assume one won’t get exactly what one expects.

In computer science, the IEEE-754 standard, adopted by Go, defines an


implementation of floating-point numbers arithmetic. Go offers two flavours
of floating-point numbers: float32 (encoded as 4 bytes) andfloat64
(encoded as 8 bytes). Due to the implementation of IEEE-754, these two
types have a precision - a number of guaranteed correct digits - when written
in base 10.

float32 guarantees a precision of only about seven significant digits. This means
that anything farther than seven digits down the line from the first non-zero digit,
in a float32 variable, can safely be considered gibberish. Here are some examples:

123_456_789 (around a hundred million) - the first non-zero digit is the


leading 1, the seventh digit is the 7. If we write fmt.Printf("%.f",
float32(123_456_789)), we get the output 123456792. As we can see, we've
lost the correct digits after the 7 .

0.0123456789 - the first non-zero digit is the 1 in the hundredth (second after
the decimal separator) position. If we write fmt.Printf("%.10f",
float32(0.0123456789)) , we get the output 0.0123456791 . Again, only the
first seven non-zero digits were safely encoded, the rest is lost.

When using float64 variables, the precision is 15 guaranteed digits, which


would be an error of around a few seconds on a float64 clock counting the
seconds since the dinosaurs went extinct, 66 million years ago. You might think this is way too
accurate to ever be imprecise - sometimes, it simply isn't: using a float64 to
represent the mass of Earth would make the weight of all the gold on Earth a
negligible part of these gibberish numbers.

Some numbers will have an exact representation in IEEE 754 - numbers that
are combinations of inverses of powers of 2 - up to a certain point. For
instance, 0.625, which is ½+⅛, prints as 0.625000… - and all digits after the
5 are zeroes. But most fractions can’t be written as sums of inverses of
powers of two, and thus, most decimal numbers will be incorrectly
represented, when usingfloat32 or float64 .

We can reach the limits of float32 rather early: the following line doesn’t
print the expected 1.00000000. Even though the first 7 digits are correct
(0.9999… is equal to 1), the eighth isn’t.

fmt.Printf("%.8f", float32(1)/float32(41)*float32(41))

Sometimes, a precision of seven significant digits will be enough. When


averaging grades, using float32 works perfectly. Similarly, an error of a
millionth of a dollar would seem tolerable, if we were to use float32s in our
project, wouldn't it? But what if the amount to convert is not 1 dollar, but ten
million dollars? In this case, the error we introduce by using float32 would
already be a few dollars. Would that still be acceptable?

Back to money

In order to represent an amount of money, it is therefore always preferable to


default to fixed precision, unless you know for certain that the floating point
will not cause any harm - arithmetic operations on integers are correct to the
unit - as long as they’re “not too big”. If we want to represent billions, which
is close to the maximum value of a uint32 - 2^32, or around 4 billion - the
big package has a few types to represent really big numbers, for example
big.Int or big.Rat, respectively for integers and rationals. In our case, let's
keep it simple and use regular integers: we’ll accept the fact that we are not
decillionaires as a limitation.

When it comes to operations, there are a few things we can take for granted,
and others that we should not.

Floating-point number operations

Multiplying or dividing floating-point numbers is usually fine. The problems


start happening when adding and subtracting one to another. Here’s a simple
scenario: a bank has encoded the money in their customers’ accounts with
float32 . A very, very rich customer decides to buy ice cream with their
credit card. The ice cream costs 1 euro. At that moment, the customer had a
hundred million euros in their account. On the bank’s side, the operation
float32(100_000_000) - float32(1) is executed. To their surprise, the
customer realises they still have a hundred million euros, as if the payment of
1 euro had never happened. This is due to the fact that we have 8 significant
digits in the customer’s bank account before we reach the unit, and the
subtraction of 1 is lost in the noise of float32 ’s precision, which guarantees
only 7 correct digits.
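
Here is a minimal way to observe the ice-cream scenario at home; the comparison genuinely prints true, because subtracting 1 falls below float32's precision at this magnitude:

package main

import "fmt"

func main() {
	balance := float32(100_000_000) // a hundred million euros
	iceCream := float32(1)

	// The 1 euro is lost in the rounding: the nearest float32 to 99_999_999 is 100_000_000.
	fmt.Println(balance-iceCream == balance) // true
}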

One of the most common mistakes programmers make when using floating-
point numbers is using the == operator as a comparator. Since a floating-point
number isn’t properly represented, how can we hope it will be equal to
another badly represented number? The safe way of comparing floating-point
numbers is to take into account the precision: if two floating-point numbers
are within the precision range of the largest, they should be considered equal,
and, otherwise, they should be considered different. Here’s a quick
trigonometric example. Don’t worry, we won’t go too far. The sine function
returns 0 when evaluated on any multiple of π. Let's see how this looks
when calling the math.Sin function (it returns a float64) in Go:

fmt.Println(math.Sin(math.Pi))

This returns a very small, but clearly not null, value - 1.2246467991473515e-
16. Any mathematician would be offended by this result. However, as
computer scientists, we know that, instead of comparing this number to the
exact 0, we should compare it to 0 within the range of the precision of a
float64 . This is how we could check if sin(π) is close enough (to the
precision of 15 digits) to 0 that we can consider them non-distinguishable:

fmt.Println(math.Abs(math.Sin(math.Pi)-0) < math.Pow(10, -15))

6.2.1 Decimal implementation

That was a lot of theory. Knowing all this, there are a number of different
possibilities for implementing this Decimal struct. What we chose to do here
was to store the quantity as a number of subunits together with a precision. Fortunately, as the contents of the
struct are private to the package, users don’t depend on our implementation
and it should be possible to come back on this decision anytime without
breaking our exposed API.

This is another reason why hiding the internal details behind a custom type is
generally a good idea. In practice, we could start with imprecise floats and
refactor later. Let’s not, though - we already know that floats can introduce
imprecision, and we wouldn’t want that, right?

The subunits and the precision are two different numbers. But how do we know
what a subunit represents in the currency? The satoshi is currently the
smallest unit of the bitcoin currency recorded on the blockchain and it is one
hundred millionth of a single bitcoin (0.00000001 BTC), far from the
generally accepted hundredth of euros, francs, hryvni or rupees. We will keep
this precision of the decimal part as a power of ten. This precision is a
number that will range between 0 (we’ll always want to be able to represent
1.0) and a value that isn’t too big. Since we don’t need to represent numbers
bigger than 10^30, we don’t need to store an exponent of 10 that is bigger
than 30. For small numbers such as this, using a byte is a safe choice. A
byte's maximum value is 255, and we're definitely not going to need that
power of 10.

Listing 6.6 decimal.go: Decimal struct implementation

// Decimal can represent a floating-point number with a fixed precision.


// example: 1.52 = 152 * 10^(-2) will be stored as {152, 2} #A
type Decimal struct {
// subunits is the number of subunits. Multiply it by 10^(-precision) to get the real value.
subunits int64 #B
// Number of "subunits" in a unit, expressed as a power of 10.
precision byte #C
}

We should be able to update the test to add real quantity (as a Decimal) and
currency (as a Currency) values to it. But can we?

Constructing the Decimal

The fields of the structs are not exposed, and we really want to keep it that
way. We need a building function for Decimal and Amount .

Let’s start with the one with no dependency:Decimal .

What will it take as parameters, though? If we ask for three ints for integer
part, decimal part and precision, there will be no way of changing this int-
based implementation later. We can expect the amount to be expressed as a
string in the caller’s input, in order to avoid floating point imprecision from
the start.

One common way of creating a struct in Go is the New function, as seen in
Chapter 4. Another, when everyone carries strings around, is the Parse
prefix, as found, for example, in time.Parse or url.Parse. Our
Decimal is a good candidate for this pattern. Let's write a function that will
take a string as its parameter and return the Decimal that the string represents.

Parse a decimal number

Write a ParseDecimal function in the decimal.go file that returns a Decimal


or an error . Don’t forget to write a test. You will need strconv.ParseInt to
convert strings to integers, andstrings.Cut , to split a string on a separator.
In a terminal, you can run go doc strconv.ParseInt and go doc
strings.Cut for some inspiration. Remember: we want to use an int64 to
represent the value, as this is the largest of the basic integer types. Here's a short
description of the various steps we need to go through in ParseDecimal:
ParseDecimal(string) (Decimal, error) {
// 1 - find the position of the . and split on it.
// 2 - convert the string without the . to an integer. This could fail
// 3 - add some consistency check
// 4 - return the result

There are several ways of splitting the string “18.95” into “18” and “95”, and
the strings package offers two: Cut and Split . Why are we using
strings.Cut and not strings.Split ? We appreciate the simplicity of the
former, and it is a lot more convenient to use when the separator is not
present in the input string.

On one hand, Cut will break the string into two parts, right after the first
instance of the given separator, and return a boolean telling whether the
separator was found. If the string does not contain the separator, the function
returns the full string, an empty string, and false.

On the other hand, Split breaks the string into substrings, delimited by
separators, and returns a slice of these substrings. If the separator does not
appear, Split returns a slice containing only the original string. Finally, if the
separator is empty, Split splits after each character, and if both the string and
the separator are empty, it returns an empty slice.

Here is an example of the behaviour of strings.Cut and strings.Split on


three different strings: one where the separator doesn’t appear, one where the
separator appears once, and one where the separator appears more than once.

Table 6.1 Examples of strings.Cut and strings.Split

Value      strings.Cut(value, "p")       strings.Split(value, "p")

banana     "banana", "", false           []string{"banana"}

grape      "gra", "e", true              []string{"gra", "e"}

apple      "a", "ple", true              []string{"a", "", "le"}

You can see from the strings.Split examples that the length of the
resulting slice is the number of occurrences of "p" plus 1. It is also interesting
to notice that strings.Split will not discard empty strings, as you can see in
the "apple" example.

Since ParseDecimal can return an error, let’s take some time to go through
what error types are and how to check them.

Error types

It is nearly part of the definition of parsing: there might be problems. If the


user sends us letters, what can we do? Return an error.

The errors package exposes a useful method:

errors.As(err error, target any) bool

It reports whether err's concrete value is assignable to the value pointed to by


the target. It becomes very handy when you are using a library and want to
compare the type of error with the domain error, for example, is this error
coming from the money package? When you are the writer of the library, it is
polite to expose a domain error for the users so that they can adapt the
behaviour in their code if the error is coming from your library.

In our case, we are the writers of the library and as polite people, we will
expose a domain error type in the package money and implement the
standard error interface.

Let’s create a package-specific error type.

Listing 6.7 errors.go: Custom error type for the package money

package money
// Error defines an error.
type Error string

// Error implements the error interface.


func (e Error) Error() string {
return string(e)
}

Not a lot of effort, and worth the simplicity in usage. The consumer can now
check whether a returned error is from this package.

var moneyErr money.Error


if errors.As(err, &moneyErr) {
// ...
}

Finally, before we start writing code, let’s think of errors that consumers will
be able to understand. The first would be returned if the string to parse is not
a valid number. The second will be raised if we try to deal with values that
are too big. Having a limit is a good idea. It will help ensure that we don’t
exceed the maximum value of an int64, especially when multiplying two
Decimal variables together, which is bound to happen.

Listing 6.8 decimal.go: Expose errors

const (
// ErrInvalidDecimal is returned if the decimal is malformed.
ErrInvalidDecimal = Error("unable to convert the decimal")

// ErrTooLarge is returned if the quantity is too large - this would cause floating-point precision errors.
ErrTooLarge = Error("quantity over 10^12 is too large")
)

Now that we’ve exposed the errors we could think of, let’s write the function.

Listing 6.9 decimal.go: ParseDecimal function

// ParseDecimal converts a string into its Decimal representation.


// It assumes there is up to one decimal separator, and that the separator is '.' (full stop character).
func ParseDecimal(value string) (Decimal, error) {
intPart, fracPart, _ := strings.Cut(value, ".") #A

// maxDecimal is the maximum number of digits accepted for the integer part; 12 digits keeps us below a thousand billion (10^12).


const maxDecimal = 12

if len(intPart) > maxDecimal {


return Decimal{}, ErrTooLarge
}

subunits, err := strconv.ParseInt(intPart+fracPart, 10, 64) #B


if err != nil {
return Decimal{}, fmt.Errorf("%w: %s", ErrInvalidDecimal, err.Error()) #C
}

precision := byte(len(fracPart)) #D

return Decimal{subunits: subunits, precision: precision}, nil


}

How does our precision variable work? Let’s look at a few examples.

Table 6.2 Precision examples

value (string) precision

5.23 2

2.15497 5

1 0

As you can see, the precision of the parsed number is simply the number of
digits after the decimal separator. If the user gives us 1.1 dollars, it’s a bit
weird but it’s valid. Note that converting it to dollars will give $1.10 back,
with one more digit, because dollars are divided in hundredths.

Testing ParseDecimal
In order to check the results, we need to access the unexposed fields of the
Decimal structure. To achieve this, our test needs to reside in the same
package and be aware of the implementation.

The test can look like this.

Listing 6.10 decimal_internal_test.go: Test ParseDecimal

func TestParseDecimal(t *testing.T) {


tt := map[string]struct {
decimal string #A
expected Decimal #B
err error #B
}{
"2 decimal digits": {
decimal: "1.52",
expected: Decimal{
subunits: 152, precision: 2,
},
err: nil, #A
},
"no decimal digits": {...}, #B
"suffix 0 as decimal digits": {...}, #B
"prefix 0 as decimal digits": {...}, #B
"multiple of 10": {...}, #B
"invalid decimal part": {...},
"Not a number": { #C
decimal: "NaN",
err: ErrInvalidDecimal,
},
"empty string": { #C
decimal: "",
err: ErrInvalidDecimal,
},
"too large": { #C
decimal: "1234567890123",
err: ErrTooLarge,
},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
got, err := ParseDecimal(tc.decimal)
if !errors.Is(err, tc.err) {
t.Errorf("expected error %v, got %v", tc.err, err)
}
if got != tc.expected {
t.Errorf("expected %v, got %v", tc.expected, got)
}
})
}
}

Let’s have a look, in particular, at the test named “suffix 0 as decimal digits”.
In it, we parse the value 1.50, and it gets converted to a Decimal with 150
subunits, and a precision of 2. This is correct, but is it really the best we can
do? We’re dealing with a decimal number here, there is no point in keeping
these extra zeroes, as they don’t bring any information. Let’s simplify a
decimal, with the use of a new method for the Decimal type, called simplify.
This method will be tested via the tests on ParseDecimal. simplify will
remove zeroes in the rightmost position as long as this doesn’t affect the
value of the Decimal. 32.0 should be simplified to 32, but 320 should remain
320.

Listing 6.11 decimal.go: Method simplify

func (d *Decimal) simplify() {


// Using %10 returns the last digit in base 10 of a number.
// If the precision is positive, that digit belongs to the right side of the decimal separator.
for d.subunits%10 == 0 && d.precision > 0 {
d.precision--
d.subunits /= 10
}
}
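
For this simplification to actually be exercised by the ParseDecimal tests, ParseDecimal must call it before returning. One way of wiring it in (our assumption about where the call goes; the book's repository may do it slightly differently) is to replace the final return of Listing 6.9 with:

d := Decimal{subunits: subunits, precision: precision}
d.simplify()

return d, nil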

That was an important first step. Remember to run tests and commit - with
explicit messages - regularly, especially upon completion of a piece of the
deliverable. After this complex logic, writing the Currency builder is going to
be easy.

6.2.2 Currency value object

In order to understand what this input number means, we need a currency. As


mentioned before, each currency has a fixed precision and cannot express any
value smaller than this precision: 0.001 CAD isn’t an amount that we want to
represent, as it doesn’t exist in real life (it may be used in very particular use
cases such as statistics, but not in transactions). There is no way to guess each
currency’s precision. We could retrieve this information through a service,
read a database, read a file... or we could simply keep a hard-coded list of the
exceptional currencies and default to the hundredth, which most use.
Hopefully, currencies won’t start using new subunits too often. After all, this
is a pet project, we are not planning (yet) to support funky historical
currencies.

First, add the currency’s precision to the struct. It’s going to be a value
between 0 and 3. A byte is again a good choice here.

Listing 6.12 currency.go: Add precision

// Currency defines the code of a currency and its decimal precision. #A


type Currency struct {
code string
precision byte #B
}

We can use the Parse prefix, as we take a string in and return a valid object
or an error. Let's have a thought about the error(s) we might have to return. If
an invalid currency code is given, we should be able to return an error. Let's
create a new constant, ErrInvalidCurrencyCode, of our Error type. As
you can see, the proposed name of the error begins with a capital, meaning it
is exposed. This allows this package's consumers to check against it. Then, we
can create a function named ParseCurrency that will take the given currency
code, as a string, and return a Currency object and an error. The first
validation consists of checking whether the code is composed of 3 letters; if it isn't,
we can directly return our new error. Otherwise, we'll switch on the possible
currency codes and return the Currency object with their respective
precisions. We will assume that the default case is 2, as most of the
currencies have a precision of 2 digits.

Listing 6.13 currency.go: Function for supported currencies

// ErrInvalidCurrencyCode is returned when the currency to parse is not a standard 3-letter code.
const ErrInvalidCurrencyCode = moneyError("invalid currency code") #A

// ParseCurrency returns the currency associated to a name and may return ErrInvalidCurrencyCode.
func ParseCurrency(code string) (Currency, error) {
if len(code) != 3 { #B
return Currency{}, ErrInvalidCurrencyCode
}

switch code {
case "IRR":
return Currency{code: code, precision: 0}, nil
case "CNY", "VND": #C
return Currency{code: code, precision: 1}, nil
case "BHD", "IQD", "KWD", "LYD", "OMR", "TND": #C
return Currency{code: code, precision: 3}, nil
default:
return Currency{code: code, precision: 2}, nil #D
}
}

Again, don’t trust this tool in production. Validating the currency in real life
should be done against a list that can be updated without touching the code.
Something we could do without making this project too big to fit in a pocket
is to go further and make sure the letters are actually capitals of the English
alphabet. Actually, let’s make it an exercise.

Exercise 6.1 Make sure the currency code is made of 3 letters between A and
Z. You can use the regexp package if you want to make things complicated, or
check that each of the 3 bytes is between ‘A’ and ‘Z’ included.
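
One possible shape for that check, as a sketch (not the official solution), replacing the length test at the top of ParseCurrency:

// A valid code is exactly 3 capital letters of the English alphabet.
if len(code) != 3 {
	return Currency{}, ErrInvalidCurrencyCode
}
for _, letter := range code {
	if letter < 'A' || letter > 'Z' {
		return Currency{}, ErrInvalidCurrencyCode
	}
}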

Have we properly tested everything? Not yet. Try writing a test for the parser
yourself before looking at our version.

Listing 6.14 currency_internal_test.go: TestParseCurrency function

package money

import (
"errors"
"testing"
)

func TestParseCurrency_Success(t *testing.T) { #A


tt := map[string]struct {
in string
expected Currency
}{
"hundredth EUR": {in: "EUR", expected: Currency{code: "EUR", precision: 2}},
"thousandth BHD": {...},
"tenth VND": {...},
"integer IRR": {...},
}

for name, tc := range tt {


t.Run(name, func(t *testing.T) {
got, err := ParseCurrency(tc.in)
if err != nil {
t.Errorf("expected no error, got %s", err.Error())
}

if got != tc.expected {
t.Errorf("expected %v, got %v", tc.expected, got)
}
})
}
}

func TestParseCurrency_UnknownCurrency(t *testing.T) { #B


_, err := ParseCurrency("INVALID")
if !errors.Is(err, ErrInvalidCurrencyCode) {
t.Errorf("expected error %s, got %v", ErrInvalidCurrencyCode, err)
}
}

This time we are not using validation functions but separating the success
cases from the one error case. It is mostly a matter of taste - the only criterion,
as usual, is whether the next reader will understand what we are testing and
find it easy to add or change a test case. By calling t.Run we also make sure
that all test cases can be run separately.

If your tests pass, don’t forget to commit.

We have a number, we have a currency, we can put them together and make
an Amount of money.

6.2.3 NewAmount

An amount is a decimal quantity of a currency. As mentioned before, a


decimal can be incompatible with a currency if its precision is too large. For
instance, we shouldn’t allow for the creation of an Amount of decimal
quantity 19.8875 (a number with a precision of 4) Canadian dollars, since a
Canadian dollar’s subunit is a cent (a precision of 2). Building an object that
is not valid makes no sense: it is the role of the New function to return either
something valid or an error.

Listing 6.15 amount.go: NewAmount function

const (
// ErrTooPrecise is returned if the number is too precise for the currency.
ErrTooPrecise = moneyError("quantity is too precise")
)

// NewAmount returns an Amount of money.


func NewAmount(quantity Decimal, currency Currency) (Amount, error) {
if quantity.precision > currency.precision { #A
// In order to avoid converting 0.00001 cent, let's exit now.
return Amount{}, ErrTooPrecise
}
quantity.precision = currency.precision #B

return Amount{quantity: quantity, currency: currency}, nil


}

The test should be quite straightforward, so we won’t give you our version
here. Of course you can still find it in the book’s code repository. Don’t
forget to cover the error case and run the test with coverage to validate you
did not miss anything.

go test ./... -cover

Update the external test

Now, the last step before writing the actual conversion is to update the test of
Convert. As it tests Convert, not ParseDecimal, ParseCurrency,
or NewAmount (which are already covered by their own internal tests), what
will we do with the potential errors? It is not the role of the TestConvert
function to check the different values that ParseDecimal and friends can
return, but it needs to deal with the errors.

One option to avoid dealing with these errors would be to write a function
that builds the required structures without checking anything, because you
know that your test cases are valid. In order to build them, it would need to live in
the money package and be exposed to the money_test package. But then what
would prevent consumers from using that test utility function and sending you
invalid values? It would completely invalidate the actual builders that we wrote
earlier in this chapter. Let’s not choose this dangerous option.

An alternative is to call the function and ignore the error:

number, _ := money.ParseDecimal("23.52")

This is practical and doesn’t expose anything dangerous, but it raises a
problem: if we give our test an invalid value by mistake, there’s no way to
detect it. We would unknowingly be testing against the zero values returned by
the function alongside the error, resulting in flaky tests. Let’s, again,
not choose this option.

As we need to call a chain of different builders, we will instead write a test


helper function, and we will tell Go about it. The testing.T object that we
use for unit testing has a Helper method: according to the documentation
(which can be obtained with go doc testing.T.Helper), Helper marks the
calling function as a test helper function. When printing file and line
information, that function will be skipped. It means that you will be able to
see which test broke, rather than this helper function’s line number.

Listing 6.16 convert_test.go: Helpers to parse Currency and Amount

package money_test

import (
"testing"

"learngo-pockets/moneyconverter/money"
)

func mustParseCurrency(t *testing.T, code string) money.Currency {


t.Helper() #A

currency, err := money.ParseCurrency(code)


if err != nil {
t.Fatalf("cannot parse currency %s code", code) #B
}

return currency
}

func mustParseAmount(t *testing.T, value string, code string) money.Amount {


t.Helper() #A

n, err := money.ParseDecimal(value)
if err != nil {
t.Fatalf("invalid number: %s", value)
}

currency, err := money.ParseCurrency(code)


if err != nil {
t.Fatalf("invalid currency code: %s", code)
}

amount, err := money.NewAmount(n, currency)


if err != nil {
t.Fatalf("cannot create amount with value %v and currency code %s", n, code)
}

return amount
}

As you can see, we are not using t.Fail but t.Fatal, which stops the test run
immediately.

We can now give actual values to the Convert function’s test. The return
value is still nothing, though.

Listing 6.17 convert_test.go: Calling mustParse in the test case

"34.98 USD to EUR": {


amount: mustParseAmount(t, "11.22", "USD"), #A
to: mustParseCurrency(t, "EUR"), #B
validate: func(t *testing.T, got money.Amount, err error) {
if err != nil {
t.Errorf("expected no error, got %s", err.Error())
}
expected := money.Amount{}
if !reflect.DeepEqual(got, expected) {
t.Errorf("expected %q, got %q", expected, got)
}
},
},

Does it compile?

Does it run?

Does it pass tests?

Good job. This is a good time to commit your work.

6.3 Conversion logic


We are happy (or at least OK) with the API of this package, and the objects
that we have in hand are guaranteed valid and supported. Let’s now have it
really convert the money. For the first version, until we can actually run the
tool, we will hardcode an exchange rate. After validating the base logic, we’ll
be able to call a distant server holding the truth. For this distant server, we’ve
decided to use the European Central Bank, which is free to use, doesn’t
require authentication, and is likely to still be around in a couple of years.

6.3.1 Apply change rate

Of all the different entities that we built, which is responsible for applying a
change rate?

This logic could belong to the Amount structure. It would know how to create
a new Amount with a new value. Of course, amounts should be immutable and
we need to make sure that the input amount is not modified by the operation.
But does this option make sense conceptually? Would you expect a sum of
money in a given currency to tell you what it is worth in another? Would you
expect your 10 pound sterling note to tell you “Hey, I’m worth roughly 10
US dollars today”? Probably not. You would go to an exchange office, give it
your note and expect another back with a handful of coins. If it doesn’t make
sense conceptually, then it will be harder to understand for future
maintainers.

Instead, let’s write the exchange office as a function that will be called by
Convert .

Implement applyExchangeRate

As mentioned earlier, we don’t want to use float64 for this piece of the logic.
It’s the most sensitive, and we want to ensure we do exact maths, without
losing any precision on the values we handle. On one side, we have our
Amount ’s quantity field of type Decimal , and on the other side, we have an
exchange rate that will be retrieved in a remote call. We must also provide
the target currency, as that is where the precision of the output amount is
stored.

How should we express that rate? Exchange rates published by the European
Central Bank have up to 7 figures, which means we could safely store it in a
float64 variable. A float32 might not be enough for rates that use 7
digits - and who knows whether an eighth digit might be added one day. We’ve already
created a specific type for floating-point numbers with high precision,
Decimal . It would be better to use that. Since this variable won’t represent a
“normal” decimal number, we might as well use a specific type for it, in order
to best describe its purpose. We can even push the zeal to the point of
creating a new file for it, but at this point even the most adamant advocate for
small files among these authors will admit that it can also live in the
convert.go file, just after the exposed method.

A note on code organisation: you always want to have the exposed method
first in a file, as it is easier to read code from its entrypoint. If you have
multiple exposed functions in the same file, you may want to start with an
exposed function and keep the private functions it calls just after.

Listing 6.18 convert.go: ExchangeRate type

// ExchangeRate represents a rate to convert from a currency to another.


type ExchangeRate Decimal

This leads to an explicit signature for the function. applyExchangeRate is in


charge of multiplying the input quantity by the change rate and returning an
Amount compatible with the target Currency . We will first need a function to
multiply a Decimal with an ExchangeRate , and then we’ll have to adjust the
precision of that product to match the Currency’s.

To multiply a decimal with an exchange rate, we converted the exchange rate


to a Decimal, and performed some arithmetic: the result of the multiplication
is the product of the values, and the precision of the returned decimal is the
sum of the precisions. We’ve done the following:

20.00*4.0 = {subunits: 2000, precision: 2}*{subunits: 40, precision: 1}


= {subunits: 2000*40, precision: 2+1}
= {subunits: 80_000, precision: 3}

The first point to notice here is that we have obtained “80” by using a lot
more digits than necessary. Indeed, 80 is equal to 80.000, but we don’t really
need this precision. We can make use of the method simplify here again,
when performing the multiplication: it turns {subunits: 80_000, precision: 3}
back into {subunits: 80, precision: 0}.

The second point to notice is that we have a precision that doesn’t yet take
into account any information about currencies. All we’ve done so far is
multiplying decimal numbers. In applyExchangeRate , we’ll therefore need to
adjust the result of multiply to give it the precision of the target currency.
For this, we’ll have to multiply (or divide) by 10 raised to the difference in precision
between our target currency and the result of the exchange rate
multiplication. Of course, we could have a direct call to math.Pow(10.,
precisionDelta) here, but this would be costly, with lots of casting to and
from floats or integers. Instead, we’ll delegate that task to a function named
pow10. In the function, we’ll hardcode some common powers of 10 as quick-
win solutions, and default to the expensive call to math.Pow only for values
out of the expected range of exponents. Overall, this pow10 function could be
implemented with an exhaustive map or a switch statement. We decided to go
with the latter, but both options are valid.

Listing 6.19 decimal.go: pow10()

// pow10 is a quick implementation of how to raise 10 to a given power.


// It's optimised for small powers, and slow for unusually high powers.
func pow10(power int) int {
switch power {
case 0:
return 1
case 1:
return 10
case 2:
return 100
case 3:
return 1000
default:
return int(math.Pow(10, float64(power))) #A
}
}

Let’s write the code for this first part, and then we can implement the
mysterious multiply function. The switch is here to adjust the result with
the precision of the target currency.

Listing 6.20 convert.go: applyExchangeRate

// applyExchangeRate returns a new Amount representing the input multiplied by the rate.
// The precision of the returned value is that of the target Currency.
// This function does not guarantee that the output amount is supported.
func applyExchangeRate(a Amount, target Currency, rate ExchangeRate) (Amount, error) {
converted, err := multiply(a.quantity, rate) #A
if err != nil {
return Amount{}, err
}

switch { #B
case converted.precision > target.precision: #C
converted.subunits = converted.subunits / pow10(converted.precision-target.precision)
case converted.precision < target.precision: #D
converted.subunits = converted.subunits * pow10(target.precision-converted.precision)
}

converted.precision = target.precision

return Amount{
currency: target,
quantity: converted,
}, nil
}

The returned Amount is not constructed using the function that validates it.
Instead, we prefer to return an amount that the Convert function has to
validate before returning it to the external consumer. Note that we are being
explicit in the documentation: if a future maintainer (you included) wants to
start exposing this function for some reason, they will need to refactor it to
return an error if needed.

Finally, the core of this chapter resides in the multiplication function, so let’s
implement it! Remember, we don’t want to multiply floats together, as this
could lead to floating-point errors. This means we’ll have to convert our
ExchangeRate into a Decimal . The rest is quite straightforward.

Listing 6.21 convert.go: multiply

// multiply a Decimal with an ExchangeRate and returns the product


func multiply(d Decimal, r ExchangeRate) (Decimal, error) {
// first, convert the ExchangeRate to a Decimal
rate, err := ParseDecimal(fmt.Sprintf("%g", r)) #A
if err != nil {
return Decimal{}, fmt.Errorf("%w: exchange rate is %f", ErrInvalidDecimal, r)
}

dec := Decimal{
subunits: d.subunits * rate.subunits,
precision: d.precision + rate.precision,
}
// Let's clean the representation a bit. Remove trailing zeroes.
dec.simplify()

return dec, nil


}

Now that we have a function to convert an amount to a new currency, where


should we call it? Before we answer that question, let’s test what we’ve
written.

Most importantly, testing it!

This is the heart of the logic. It requires a lot of testing to make sure that
everything works fine and keeps working fine if we ever decide to change
any implementation.

Before writing the test, you can start thinking about all the test cases
imaginable. Here are a few examples:
Table 6.3 Possible test cases

Amount             Rate           Target currency precision   What are we checking?
1.52               1              2                           The output must be exactly identical to the input.
2.50               4              2                           The decimal part becomes 0, but the precision should be that of the target.
4                  2.5            0                           Same as above, but switched around, and the precision is 0.
3.14               2.52678        2                           A real-life exchange rate.
1.1                10             1                           Keeping the precision of 1.
1_000_000_000.01   2              2                           Keeping the precision in large numbers.
265_413.87         5.05935e-5     2                           A very small rate.
265_413            1              3                           Adding extra precision in the output when there was none.
2                  1.337          5                           Increasing the precision in the output.
2                  1.33 * 10^16   5                           Rate is too high.

The number of different test cases, and how fast we can think of new corner
cases, calls for a table or map-based test.

Of course, as the function is not exposed, the test will have to be internal and
you need a new file for that. The implementation of the test can look like this.

Listing 6.22 convert_internal_test.go: Testing applyExchangeRate

package money

import (
"reflect"
"testing"
)

func TestApplyExchangeRate(t *testing.T) {
	tt := map[string]struct {
		in             Amount
		rate           ExchangeRate
		targetCurrency Currency
		expected       Amount
	}{
		"Amount(1.52) * rate(1)": { #A
			in: Amount{
				quantity: Decimal{
					subunits:  152,
					precision: 2,
				},
				currency: Currency{code: "TST", precision: 2},
			},
		},
		// add test cases
	}

	for name, tc := range tt {
		t.Run(name, func(t *testing.T) {
			got, err := applyExchangeRate(tc.in, tc.targetCurrency, tc.rate)
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if !reflect.DeepEqual(got, tc.expected) {
				t.Errorf("expected %v, got %v", tc.expected, got)
			}
		})
	}
}

Finally! Convert can return something useful to the consumer. We don’t have
exchange rates right now, so let’s hardcode a rate of 2 for now and keep the
fetching of exchange rates for later in this chapter.

Listing 6.23 convert.go: First implementation of Convert

// Convert applies the change rate to convert an amount to a target currency.
func Convert(amount Amount, to Currency) (Amount, error) {
	// Convert to the target currency applying the fetched change rate.
	convertedValue, err := applyExchangeRate(amount, to, 2) #A
	if err != nil {
		return Amount{}, err
	}

	return convertedValue, nil #B
}

Congratulations, you broke the test for this function! We are now returning
something so you can fix it by calling mustParseAmount to define the
expected output.

We now trust the heart of the conversion, we can make sure that what we
return to the consumer can be used again by our own library.

6.3.2 Validate result

Because we have so many limitations in the supported values, it seems wise


to check that the output is something expected.

What are the limitations exactly?


The number cannot be higher than 10^12 or it will lose precision because
of the floats;
The number’s precision must be compatible with the currency’s.

Contrary to the conversion logic, this can be the responsibility of the Amount
structure. An amount can be valid or not, and it should know about it. After
all, pound notes and dollar bills have the necessary fancy decorations on
them to attest to their authenticity.

Listing 6.24 amount.go: validate method implementation

// validate returns an error if and only if an Amount is unsafe to use.


func (a Amount) validate() error {
	switch {
	case a.quantity.subunits > maxAmount: #A
		return ErrTooLarge #A
	case a.quantity.precision > a.currency.precision:
		return ErrTooPrecise #A
	}

	return nil
}

This is what the Convert function finally looks like. It is pretty small: not
much would need to be tested internally.

Listing 6.25 convert.go: Convert implementation

// Convert applies the change rate to convert an amount to a target currency.


func Convert(amount Amount, to Currency) (Amount, error) {
	// Convert to the target currency applying the fetched change rate.
	convertedValue, err := applyExchangeRate(amount, to, 2)
	if err != nil {
		return Amount{}, err
	}

	// Validate the converted amount is in the handled bounded range.
	if err := convertedValue.validate(); err != nil {
		return Amount{}, err
	}

	return convertedValue, nil
}

Check your tests: do you have convincing coverage of the finalised library?
You can measure the coverage of your tests.
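
For instance, with the standard Go tooling (nothing specific to this project), you can write the coverage profile to a file and browse it line by line in your browser:

go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out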
Now that we have the whole structure and logic of our library, now that it is
tested, let’s plug it in, because how is code fun if you don’t run it? After that,
we will fetch some real-life change rates and finish the tool. Having an
executable in which we keep adding features allows us to showcase an early
version of our product that we can improve.

6.4 Command-Line Interface


In this section, we will write the main function: let’s not forget that we are not
writing a library - we are writing a CLI. This means we will be parsing input
parameters, validating them, and passing them on to the Convert function.

But before we implement all these safety nets, we want to run our
application!

6.4.1 Flags and arguments

Take a step back. What should our program do? Let’s look at the Usage we
wrote in the requirements.

change -from EUR -to USD 413.98

Note that we have 2 flags - the in and out currencies - and an argument - the
amount we want to change.

Currency flags

In order to read flags from a command line, as we saw in Chapter 2, Go has


the quite explicit flag package. After importing it in our main.go file, we can
start with the first few lines that will ensure we properly read from the
command line.

As mentioned in Chapter 2, the flag package exposes useful methods to
retrieve values from flag parameters. Here, we will use the flag.String
function to retrieve the source and target currencies: from and to.

flag.String takes as arguments the name of the flag, the default value
(which can be empty) and a brief description. It returns the contents of the
flag -from as a variable of type *string. As we’ve already mentioned in
Chapter 2, calling the Parse function is necessary after all flags are defined
and before values are accessed. Here, we leave the default value empty for
the -from flag, but we set it for the -to flag to the string EUR. Should the user
not provide the -from flag on the command line, the value of the from
variable will be an empty string. Similarly, if the -to flag is absent, the value
will be EUR.

Listing 6.26 main.go: Parsing the flags

package main

import (
"flag"
"fmt"
)

func main() {
from := flag.String("from", "", "source currency, required")
to := flag.String("to", "EUR", "target currency") #A

flag.Parse()

fmt.Println(*from, *to) #B
}

Now, we can run it. Do you remember how to run a program from the
terminal after all this library development?

go run . -from EUR -to CHF 10.50

With this first implementation of the main, we’re not calling the Convert
function, but we’re printing the source and destination currencies. They
should appear on the screen.

Value argument

The next step in implementing our command-line interface is to retrieve the


value that we have to convert. If it’s absent from the command line, we’ll exit
with an error.
Retrieving arguments from the command-line

When running an executable, most of the time, we need to specify the input,
the behaviour, the output, etc. These parameters can be provided either
explicitly, via the command line, or implicitly, via pre-set environment
variables, or configuration files at known locations. When it comes to explicit
settings, there are two ways of passing user-defined values to the program:
arguments, and flags.

Arguments are a sequence of parameters, indexed from 0.
Arguments are anonymous and ordered. Exchanging their positions can
completely change the behaviour of the program. Arguments, most of the
time, are mandatory and have no default value. The only argument of the
go build command is the directory, file, or package containing the main
function.

Flag parameters, on the other hand, aren’t positional. They can appear in any
order on the command line without altering the behaviour of the program.
They can have default values (used when the flag is absent from the
command line), as our -to has. An example of a flag that controls behaviour,
and that you might have been using, is the -o {binaryPath} option of go
build.

In Go, the parameters of the command line can be retrieved with os.Args or
with the flag package. Let’s have an example to see the differences between
these two:

./convert -from EUR -to JPY 15.23

In this line, os.Args would return a list of 6 strings, each entry
representing a word of the command: {"./convert", "-from", "EUR",
"-to", "JPY", "15.23"}. flag.Args, on the other hand, would return
only the arguments of the command line that weren’t flags: {"15.23"}.

Depending on which information we want to access, using flag.Args or
os.Args is more meaningful. In our case, we only want to access the
non-flag command line parameters, and we don’t care which flags were set. Using the
first parameter that isn’t part of a flag is simple: we can use flag.Arg(0).
Listing 6.27 main.go: Retrieve the first argument of the program

func main() {
from := flag.String("from", "", "source currency, required")
to := flag.String("to", "EUR", "target currency")

flag.Parse()

value := flag.Arg(0) #A
if value == "" {
_, _ = fmt.Fprintln(os.Stderr, "missing amount to convert") #B
flag.Usage() #C
os.Exit(1)
}

fmt.Println(*from, *to, value) #D


}

The inputs are in. They are strings, and we are not sure that the values are
valid. Fortunately, we have the perfect functions for that already.

6.4.2 Parse into business types

The Convert function is taking as parameters values that are already typed
for its usage, and the package exposes ways to build them. This strategy
optimises flexibility in the consumer’s logic, as main is free to use the type
through any other logic that it could add, or use its own different types and
Parse at the last minute, or use strings and parse whenever it needs it.

We are not doing much more than converting in this chapter (feel free to add
to it later). We just need to parse them all.

Listing 6.28 main.go: Parse currencies and amount

package main

import (
"flag"
"fmt"
"os"

"learngo-pockets/moneyconverter/money"
)
func main() {
from := flag.String("from", "", "source currency, required")
to := flag.String("to", "EUR", "target currency")

flag.Parse()

fromCurrency, err := money.ParseCurrency(*from) #A


if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "unable to parse source currency %q: %s.\n", *from, err.Error())
os.Exit(1)
}

// TODO: repeat for target currency

// TODO: read the argument, as seen above

quantity, err := money.ParseDecimal(value) #B


if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "unable to parse value %q: %s.\n", value, err.Error())
os.Exit(1)
}

amount, err := money.NewAmount(quantity, fromCurrency) #C


if err != nil {
_, _ = fmt.Fprintln(os.Stderr, err.Error())
os.Exit(1)
}

fmt.Println("Amount:", amount, "; Currency:", toCurrency) #D


}

Run it and enjoy the show. You should have some gibberish, something like
this:

$ go run . -from EUR -to CHF 10.50


Amount: {{10 50 2} {EUR 2}}; Currency: {CHF 2}

For someone who doesn’t know the structures we use, this is hard to
understand. It is therefore polite for the library to expose some Stringers on
its types.

6.4.3 Stringer

If you look into the fmt package, you can find a very useful interface that all
of the package’s formatting and printing functions understand: the Stringer.
It follows a Go pattern where interfaces with only one method are named
after this method followed by -er, as in Reader, Writer, etc. Let’s look at how
it is defined:

type Stringer interface {


String() string
}

fmt.Stringer is implemented by any type that has a String method, which


defines the “native” formatting for that value. The String method is used to
print values passed as an operand to any format that accepts a string or to an
unformatted printing function such as Print.

In order to implement an interface in Go, a type only needs to have the right
method(s) attached to it.

Listing 6.29 currency.go: Implement Currency Stringer

// String implements Stringer.


func (c Currency) String() string {
return c.code
}

Magically, your Currency is now a Stringer . Anyone calling a printing


function with a currency as parameter will be calling this. Try it:

$ go run . -from EUR -to CHF 10.50


Amount: {{10 50 2} {EUR 2}}; Currency: CHF

The target currency is now properly readable. Let’s do the same with
Decimal. We already have a method on the type Decimal, and it receives a
pointer - the simplify method. Go doesn’t really like having both pointer and
non-pointer receivers for methods of a type, so let’s have String() accept a
pointer receiver - we need a pointer receiver for simplify.

Here we chose to use a double-formatting trick: we are using the precision to


create the formatting string, then using this to format the number. For
example, for a precision of 2 digits, the format variable will be %d.%02d,
which pads the decimal part with zeros: we don’t want to print 12.5 when the
currency has cents, we want to print 12.50. This trailing 0 is added by
padding it in the %02 formatting string.

However, not all currencies have a precision of 2 digits, and we must build
this %02 string using the precision of the currency. For this, we can use a
function provided by the package in charge of string conversions - adequately
named strconv. The function we use is strconv.Itoa, which you can think
of as the reverse of strconv.Atoi.

decimalFormat := "%d.%0" + strconv.Itoa(int(d.precision)) + "d"

Immediately, we notice that an edge case is when the precision is 0. Since
this is a simple test, and the output is quite simple, we will start our
String() function by checking this scenario.

We have all the bricks to write the implementation of the Stringer interface
for Decimal type. The output of pow10 gives us the number of subunits in a
unit of the currency, which means we can retrieve the fractional part and the
integer part by simply dividing by the number of subunits. Finally, we can
return the printed output using the formatting decimalFormat .

Listing 6.30 decimal.go: Implement Decimal Stringer

// String implements stringer and returns the Decimal formatted as


// digits and optionally a decimal point followed by digits.
func (d *Decimal) String() string {
// Quick-win, no need to do maths.
if d.precision == 0 {
return fmt.Sprintf("%d", d.subunits) #A
}

centsPerUnit := pow10(d.precision) #B
frac := d.subunits % centsPerUnit
integer := d.subunits / centsPerUnit

// We always want to print the correct number of digits - even if they finish with 0.
decimalFormat := "%d.%0" + strconv.Itoa(int(d.precision)) + "d" #C
	return fmt.Sprintf(decimalFormat, integer, frac)
}

Even if Currency’s String method can arguably skip the unit test
requirement, this one needs one. Take a minute to write it and check your
coverage. Remember that coverage does not prove that you are covered: you
could have perfect coverage and still miss a lot of cases. Instead, it tells you
where you are not covered, and you can decide whether it is worth the effort
to extend coverage.
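
If you want a starting point, here is a minimal internal test sketch with our own cases (it assumes the subunits and precision fields shown earlier in this chapter):

func TestDecimalString(t *testing.T) {
	tt := map[string]struct {
		d    Decimal
		want string
	}{
		"two decimal digits":   {d: Decimal{subunits: 1250, precision: 2}, want: "12.50"},
		"no decimal digits":    {d: Decimal{subunits: 42, precision: 0}, want: "42"},
		"three decimal digits": {d: Decimal{subunits: 12500, precision: 3}, want: "12.500"},
	}

	for name, tc := range tt {
		t.Run(name, func(t *testing.T) {
			if got := tc.d.String(); got != tc.want {
				t.Errorf("expected %q, got %q", tc.want, got)
			}
		})
	}
}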

Finally, Amount should also implement the Stringer interface. This could be
adapted to different language standards, but we chose "22.368 KWD" as the
output format.

Listing 6.31 amount.go: Implement Amount Stringer

// String implements stringer.


func (a Amount) String() string {
	return a.quantity.String() + " " + a.currency.code
}

Is your output more legible now?

$ go run . -from EUR -to CHF 10.50


10.50 EUR CHF

Now we have all the Stringer implemented, we can call the Convert function.

6.4.4 Convert

The only thing left to do in the main function is to call Convert .

Listing 6.32 main.go: End of the main function

func main() {
	// ...

	convertedAmount, err := money.Convert(amount, toCurrency)
	if err != nil {
		_, _ = fmt.Fprintf(os.Stderr, "unable to convert %s to %s: %s.\n", amount, toCurrency, err.Error())
		os.Exit(1)
	}

	fmt.Printf("%s = %s\n", amount, convertedAmount)
}
This code compiles. This code can be run. It’s beautiful.

Did you spot the bug?

There is a final issue we need to address here. Despite our heavy testing,
we’ve missed something quite obvious. If you’ve tried running the tool, you
might’ve noticed it. When we pass the input amount with a lower number of
decimal digits than the currency’s precision, we display that amount with its
input number of digits, and not its currency’s!

Here’s an example:

$ go run . -from BHD -to CHF 12.5


12.5 BHD = 25.00 CHF

If we check the switch in the ParseCurrency code, we see that there are 1000
fulūs in 1 Bahraini dinar - we should be writing 12.500 BHD = 25.00 CHF .

The root of this problem resides in the NewAmount function. Let’s fix it by
taking the currency’s precision into account, and add a test to cover this bug.

Listing 6.33 amount.go: Fixing the NewAmount

// NewAmount returns an Amount of money.


func NewAmount(quantity Decimal, currency Currency) (Amount, error) {
	switch {
	case quantity.precision > currency.precision:
		// In order to avoid converting 0.00001 cent, let's exit now.
		return Amount{}, ErrTooPrecise
	case quantity.precision < currency.precision:
		quantity.subunits *= pow10(currency.precision - quantity.precision)
		quantity.precision = currency.precision
	}
	return Amount{quantity: quantity, currency: currency}, nil
}

There’s a teeny tiny problem, though. This code doesn’t work properly: it
applies a constant conversion rate of 2, regardless of the currencies we set on the
command line. We need real exchange rates. We are ready to call the bank.
6.5 Call the bank
We have a working solution, except for one problem: we are not using real
exchange rates. We need to call an external authority to get them. Here, the
authors chose to implement a solution based on the API of the European
Central Bank, because it is free of charge, it does not need any identification
protocol, and it is very likely to still be running with the same API in a year
or even two. An unreliable API from a data provider is something that we
don’t want to face.

Fetching and using the data are two separable concerns, and any separable
logic should indeed be separated in order to make testing and evolving easier.
The bank is going to be a dependency of our program: an external resource
on which it relies in order to work. It is an accepted best practice in software
design, whatever the language you use, to use inversion of control (IoC).
Inversion of control serves multiple design purposes:

decoupling the execution of a task from implementation,


focusing a module or package on the task it is designed for,
freeing systems from assumptions about how other systems do what
they do and instead rely on contracts,
and finally preventing side effects when replacing a module.

More concretely, the money package should not know where the exchange
rate is coming from; this is beyond its scope. Another package will be
responsible for calling the bank when needed, dealing with the bank-specific
logic, and returning the required info. This other package is therefore a
dependency that the consumer (here, our main function) provides, via a
contract in the shape of an interface. This way, the consumer decides which
source of data is the best, and the money conversion is not touched.

Let’s take an example. Let’s say that while you are writing the tool,
somebody else in another team is writing the banking service. You cannot
access it yet. What you can do is create a dependency that Convert can
understand, where you simply return hard-coded values. And when the
service is finally here, you just need to replace the plug with a call to the new
API. Everything else is already tested and runs smoothly. Replacing
dependencies with stubs during development is just one use of this pattern.
Another could be adding a cache: replace a call to the API with a similar
function that checks in memory whether the value is already known and
avoids a network call.

In our case, the dependency’s role is to fetch the exchange rate between two
currencies. Think of an errand boy cycling to the bank a few streets away and
returning with the info, while the clerk responsible for computations is
waiting. Even though the API returns the whole list of currencies that it
knows and exchange rates for 1 euro, the tool doesn’t need the full list, only
the to and from currencies; knowledge about the details of the API should
stay inside the dependency’s package.

6.5.1 Dependency injection - the theory

There are two ways in Go to provide a dependency: one is more object-


oriented, the other looks like functional programming.

Object dependency

The first option requires the consumer to have in hand a variable of a type
that implements an interface. If you know any object-oriented language, such
as the Java family, you will be familiar with this approach.

In this version, we create a structure with a method FetchRates attached to
it, and we pass a variable of this type to Convert. Convert is expecting any
variable that implements the expected interface.

Listing 6.34 Dependency injection via an interface

type ratesFetcher interface {


FetchRates(from, to Currency) (ExchangeRate, error) #A
}

func Convert(..., rates ratesFetcher) { #B


...
rate, err := rates.FetchRates(from, to) #B
...
}
...

func main() {
ratesRepo := newRatesRepository() #C
money.Convert(..., ratesRepo)
}

In this implementation, the main function is in charge of creating the variable


that implements the interface.
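
As a sketch of the stub idea mentioned earlier (our own illustration, not a listing from this project), any type with the right method satisfies the contract, so the consumer can hand Convert a canned rate while the real banking service isn’t available. It assumes ExchangeRate converts from an integer literal, as the hard-coded rate of 2 used earlier in this chapter does:

// stubRates satisfies the ratesFetcher contract by always returning the same rate.
type stubRates struct {
	rate money.ExchangeRate
}

func (s stubRates) FetchRates(from, to money.Currency) (money.ExchangeRate, error) {
	return s.rate, nil
}

func main() {
	money.Convert(..., stubRates{rate: 2})
}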

Function dependency

The second option is more verbose, but it also works and can be preferred in
some cases. The Convert function’s last parameter is a function definition
rather than an object implementing an interface. The rest is relatively similar.

Listing 6.35 Dependency injection via a function

func Convert(..., rates func(from, to Currency) (ExchangeRate, error)) (Amount, error) { #A


...
rate, err := rates(from, to) #B
...
}

func main() {
ratesRepo := newRatesRepository() #C
money.Convert(..., ratesRepo.FetchRates) #D
}

The main function is passing directly the FetchRates method that Convert
will be calling.

You can even name the function’s signature by declaring a type.

type getExchangeRatesFunc func(from, to Currency) (ExchangeRate, error)

func Convert(..., rates getExchangeRatesFunc) (Amount, error) {


...
}

Alternatively, the consumer is free to create any function on the fly, relying
on variables of the outside scope if needed:

Listing 6.36 Dependency injection via a local function

func main() {
config := ...

fetcher := func(from, to Currency) (ExchangeRate, error){ #A


return config.MockRate, nil #B
}

money.Convert(..., fetcher)
}

As you can see the function dependency option is a bit less intuitive for
beginners - why would a function be a parameter? - but also leaves more
room to the consumer to implement the dependency. It doesn’t fix the name
of the function, nor does it require it to be a method on an object, and
mocking it for tests is slightly easier. As often, it leaves more freedom for the
implementation, which means that mistakes are easier to make.

For example, you can have two different methods on one object and pick the
one you want to use depending on the context. Imagine an API where you
can have daily exchange rates for free, or rates updated every minute when
you are logged in. Both functions have the same signature with different
names, and they are methods of the same object, which contains
configurations valid for both calls.

Listing 6.37 Dependency injection via a local function

func main() {
ratesRepo := newRatesRepository()
apiKey := getAPIKey() #A

fetcher := ratesRepo.FreeRates #B
if apiKey != "" {
fetcher = ratesRepo.WithAPIKey(apiKey).LoggedInRates #C
}

money.Convert(..., fetcher)
}
In the rest of the chapter, we choose to implement the rate retriever with the
first approach presented, the interface dependency, mostly because it is easier
to read, explain, and understand.

6.5.2 ECB package

Let’s create a new package that will be responsible for the call to the bank’s
API. There is no point in trying to make it sound generic: it will only know
how to call this one API from the European Central Bank, so let’s call it
ecbank .

API of the package

As we’ve seen, the new package should expose a struct with one method
attached, and probably a way to build it.

Let’s talk a little about the method’s signature. We assume that it will take
two currencies and return the rate or an error. It should not return a Decimal,
because Decimal represents money values and has the associated constraint
that nothing exists below the cent (or agora or qəpik). It should return an
exchange rate, for which we happen to already have a business type.

What are we going to call the structure? In real physical life, in order to get
the information, your errand boy would walk to the bank and ask. In our
code, this object does what the bank does, so we can safely call it a bank.

Listing 6.38 ecb.go: EuroCentralBank struct and its principal method

// EuroCentralBank can call the bank to retrieve exchange rates.


type EuroCentralBank struct {
}

// FetchExchangeRate fetches the ExchangeRate for the day and returns it.
func (ecb EuroCentralBank) FetchExchangeRate(source, target money.Currency) (money.ExchangeRate, error) { #A
	return 0, nil
}

Arguably, we could build completely independent packages and not rely on


money’s types. The architectural decision instead is to base everything on the
autonomous money package. Others are allowed to rely on it, but it needs to
rely on nothing else. In Go, if package A relies on package B and B on A, the
compiler will stop you right there: import cycles, also known as cyclic
dependencies, are not allowed. Cyclic dependencies are a compiler’s version
of our chicken or egg dilemma. Even though some languages might manage
how to build with cyclic dependencies, Go forbids it from the start. This
makes the architecture cleaner, in our opinion.
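
As a tiny, hypothetical illustration (the module path and packages are ours, not part of this project), the following two files can never build together; the compiler stops with an "import cycle not allowed" error:

// a/a.go
package a

import "example.com/demo/b"

func CallB() { b.Hello() }

// b/b.go
package b

import "example.com/demo/a"

func Hello() { a.CallB() }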

What will this FetchExchangeRate method do? Build the request for the API,
call it, check whether it worked and, if it did, read the response and return the
exchange rate between the given currencies. The whole logic is articulated
around the HTTP call. Let’s see how Go natively deals with these.

6.5.3 HTTP call, easy version

The European Central Bank exposes an endpoint that lists daily exchange
rates. You can first try calling the API in your favourite terminal to see what
it looks like.

curl "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml"

As you can see it returns a large XML response. We will have to parse it and
find the desired value. But first, let’s retrieve this message in our code!

Go’s net/http package provides server and client utilities for HTTP(S) calls.
It uses a struct called Client to manage the internals of communicating over
HTTP and HTTPS. Clients are thread-safe objects that contain configuration,
manage TCP state, handle cookies, etc. Some of the package’s functions
don’t require a client, for example, the simple Get function, and use a default
one:

http.Get("http://example.com/")

We will start with this easy call.

Making calls to external resources

Anything that calls something that isn’t your code should be handled with
extreme care and precaution. Hope for the best, but prepare for the worst.
Here is your chance to be creative: what’s the worst that could possibly
happen when calling someone else’s code? Making a call to a library could,
potentially, lead to a panic, in the worst case scenario.

When it comes to network calls, we could get errors from the network (for
instance, we couldn’t resolve the URL of the resource), or server-related
problems (timeouts, unavailability).

It is your responsibility, as a developer, to decide which issues should be
handled by your code, and which won’t be. Be explicit in your
documentation about what is covered, and what isn’t. For instance, if you
decide to not set a timeout for your request, you implicitly accept that the call
you make could hang on to your connection forever - resulting in your
application being frozen.

Design is about what you allow and what you prevent.

Where to declare the path

We declared the path to the resource as a constant. This is OK for now -


maintainers can change it in one single place if the API changes (imagine the
case of a version change in the path, even if it is not the case here). Ideally,
the client package should know about the relative path and the consumer
should tell it, as a configuration, what URL to call: this leaves space for test
environments. This configuration is out of the scope of our chapter but feel
free to think up a cleaner solution.

You can declare the constant just before the function, but you can also reduce
its visibility and prevent anything else inside the package from reaching it by
declaring it inside the FetchExchangeRate function. The compiler will still
replace it wherever it finds it.

Listing 6.39 ecb.go: FetchExchangeRate first lines

const euroxrefURL = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml"

resp, err := http.Get(euroxrefURL)


if err != nil {
return money.ExchangeRate(0), fmt.Errorf("%w: %s", ErrServerSide, err.Error()) #A
}

In case of an error, we return the zero value of the first return value’s type.
This type is ExchangeRate, which is based on a float64, so its zero value is
simply 0. We add the dot afterwards to specify to the compiler that we are
declaring a float and not an integer. The compiler will know that it should
actually return an ExchangeRate .

Note that in production code, it is considered a bad idea to stick to the default
client that http.Get uses. See more about clients at the end of this
chapter.
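
As a teaser of what a dedicated client looks like (a minimal sketch, not the final code of this chapter; it needs the time package), you can bound how long the call may take:

client := http.Client{Timeout: 5 * time.Second} // don't hang forever if the server is slow
resp, err := client.Get(euroxrefURL)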

Errors, again

Keeping in mind that the consumer - the main function - should not have to
deal with implementation details, we should not directly propagate the
net/http package’s error to our consumer: if they want to check what type of
error is returned, they would also have to rely on the net/http package -
then, if we change the implementation and use another protocol, we break the
consumer’s code. Not nice.

We will instead declare the same 4 lines of error boilerplate as in the money package, but
these will be specific to our ecbank package: consumers will be able to check
the value of the error with errors.Is() and know its meaning.

Here, we don’t really need to expose the error type we define - it brings no
value to the consumers. We only need to expose the sentinel errors, as we
want to allow for error checking.

Listing 6.40 errors.go: Possible errors

// ecbankError defines a sentinel error.


type ecbankError string

// ecbankError implements the error interface.


func (e ecbankError) Error() string {
return string(e)
}
We then declare our constant and exposed errors close to where they can be
returned. This list will be enriched as we add more code.

const (
ErrCallingServer = ecbankError("error calling server")
)

The http.Get function returns an http.Response. One of Response’s exposed
fields is its Body, which implements io.ReadCloser. Before even looking at
the documentation, you can see from its name that it exposes a Read and a
Close method. Whatever you do, don’t forget to close it, or you will create all
kinds of memory leaks. To clear your mind from that as soon as possible, Go
has the defer keyword: what you put after defer will be executed just before the
function returns. It means you can use the response, read it, have fun, return errors
when you have to, and whichever branch the code runs through will close the
response body before returning.

defer resp.Body.Close()

Now we can look at what we received from the bank.

6.5.4 Parse the response

Before parsing the body of the response, we first want to check the status
code. The status code describes how the remote server handled our query.
There is no point in reading the response if we know that the call was
unsuccessful.

Check status

The standard of the hypertext transfer protocol defines a long list of possible
status codes distributed in five classes:

1xx (100 to 199) informational response – the request was received,


continuing process
2xx (200 to 299) successful – the request was successfully received,
understood, and accepted
3xx (300 to 399) redirection – further action needs to be taken in order
to complete the request
4xx (400 to 499) client error – the request contains bad syntax or cannot
be fulfilled
5xx (500 to 599) server error – the server failed to fulfil an apparently
valid request

In order to carry on, we need something that starts with 2 - more specifically,
we know that we want a 200. But we also want to check for 4xx and 5xx: in
the first case we made a mistake with our query, in the second it’s not our
fault.

Because we currently only really care about the class of the status code, in
case of a problem, we can use a division to check just the first digit. It’s
perfectly fine to have a function dedicated to only this division.

Listing 6.41 ecb.go: Function handling HTTP status code

const (
clientErrorClass = 4
serverErrorClass = 5
)

// checkStatusCode returns a different error depending on the returned status code.


func checkStatusCode(statusCode int) error {
switch {
case statusCode == http.StatusOK: #A
return nil
case httpStatusClass(statusCode) == clientErrorClass: #B
return fmt.Errorf("%w: %d", ErrClientSide, statusCode)
case httpStatusClass(statusCode) == serverErrorClass: #C
return fmt.Errorf("%w: %d", ErrServerSide, statusCode)
default: #D
return fmt.Errorf("%w: %d", ErrUnknownStatusCode, statusCode)
}
}

// httpStatusClass returns the class of a http status code.


func httpStatusClass(statusCode int) int {
const httpErrorClassSize = 100
return statusCode / httpErrorClassSize
}

The FetchExchangeRate function can call this checker and forward the error
without wrapping it: we already made sure we knew what type of error we
were returning. When calling functions from the same package, it is your
responsibility to decide whether you want to wrap the error or not. Errors
coming out of exposed functions should all be documented and of known
types, but you have the choice of where you create them.
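
A minimal internal test sketch for this checker could look like the following (our own cases; it assumes the sentinel errors ErrClientSide, ErrServerSide and ErrUnknownStatusCode are declared in errors.go alongside ErrCallingServer, and it imports errors, net/http and testing):

func TestCheckStatusCode(t *testing.T) {
	tt := map[string]struct {
		statusCode int
		expected   error
	}{
		"OK":             {statusCode: http.StatusOK, expected: nil},
		"client mistake": {statusCode: http.StatusNotFound, expected: ErrClientSide},
		"server failure": {statusCode: http.StatusInternalServerError, expected: ErrServerSide},
		"exotic code":    {statusCode: 666, expected: ErrUnknownStatusCode},
	}

	for name, tc := range tt {
		t.Run(name, func(t *testing.T) {
			if err := checkStatusCode(tc.statusCode); !errors.Is(err, tc.expected) {
				t.Errorf("expected error %v, got %v", tc.expected, err)
			}
		})
	}
}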

We now know that the HTTP call caused no error, and we can ensure that the
server returned a valid response. Let’s now have a look at the XML contained
in this response.

XML parsing

In order to parse XML, we will use the encoding/xml package of Go. As we


saw in Chapter 3, there is a good list of different encodings supported next to
it, including JSON, CSV, base64…

Decoding and encoding XML (or JSON)

Both the encoding/json and encoding/xml packages offer two ways of decoding a
message. They both expose an Unmarshal function that can convert a slice of
bytes into an object. They also both allow for the creation of a Decoder,
through a function called NewDecoder. This constructor takes an io.Reader as
its parameter, from which the Decoder will read and convert data to the
desired object. The answer to the “which should I use?” question is simple: if
you have an io.Reader, use a Decoder. If you have a []byte, then using
Unmarshal or NewDecoder is fine.

Similarly, when encoding JSON or XML, if you have access to an io.Writer,
use an Encoder. Otherwise, use Marshal.

Here are two examples of how to parse a slice of bytes:

type person struct {
	Age  int    `json:"age"`  # Must be exposed to be decoded
	Name string `json:"name"`
}

data := []byte(`{"age": 23, "name": "Yoko"}`)

p := person{}
err := json.Unmarshal(data, &p) # This requires the whole data slice
if err != nil {
	panic(err)
}

p = person{}
dec := json.NewDecoder(bytes.NewReader(data)) # Or use an io.Reader
err = dec.Decode(&p)
if err != nil {
	panic(err)
}

The response.Body is an io.Reader - isn’t that convenient? It therefore
makes complete sense to use a Decoder here. We can then Decode into the
right structure. In order to do this, we first define a type (which will be
covered in the next paragraph), and pass a pointer to a variable of that type to
the decoder. Indeed, declaring the variable will allow it to exist in memory;
the decoder will access its various fields to fill them with what can be found
in the Reader. It is paramount to provide a pointer to the Decoder - see
Appendix E for a more detailed explanation.

decoder := xml.NewDecoder(resp.Body)

var xrefMessage theRightStructure #A


err := decoder.Decode(&xrefMessage)

What exactly is this “right” structure? Something that looks like the response
we got, and states what XML field should be unmarshalled into what Go
field, using tags.

To define this structure, let’s start by looking at the response. The European
Central Bank being responsible for euros, everything is based on euros.

Listing 6.42 XML response from the API

<?xml version="1.0" encoding="UTF-8"?>


<gesmes:Envelope xmlns:gesmes="http://www.gesmes.org/xml/2002-08-01"
xmlns="http://www.ecb.int/vocabulary/2002-08-01/eurofxref">
<gesmes:subject>Reference rates</gesmes:subject> #A
<gesmes:Sender>
<gesmes:name>European Central Bank</gesmes:name>
</gesmes:Sender>
<Cube>
<Cube time='2023-02-20'> #B
<Cube currency='USD' rate='1.0674'/> #C
<Cube currency='JPY' rate='143.09'/>
<Cube currency='BGN' rate='1.9558'/>
<Cube currency='CZK' rate='23.693'/>
<Cube currency='DKK' rate='7.4461'/>
[...]
</Cube>
</Cube>
</gesmes:Envelope>

We can keep the naming of the response and create a structure called
envelope. However, the XML node name Cube is not explicit enough, so
we’ll use currencyRate instead.

While the parsing objects themselves, envelope and currencyRate, do not
need to be exposed, their fields must be accessible to the encoding/xml
package, and therefore have to be exposed.

The way Go tells the encoding/* packages from and to which node a field
should be decoded or encoded is by defining a tag at the end of the line
declaring this field in the structure. Tags are always
declared between backquotes and composed of their name followed by a
colon and a value in double quotes. If you need multiple tags on the same
field, separate them with spaces inside the backquotes. For example:

type Movie struct {
	Title       string `xml:"Title" json:"title"` #A
	ReleaseYear int    `json:"year"`              #B
}

To retrieve the attributes of an XML node, you just need to tell the decoder to
look for an attribute. Go offers the possibility to “unnest” nodes by using the
> syntax. Here, we don’t want to retrieve the time attribute of the
intermediate Cube node, only its inner Cube nodes. We “skip” from the root to
the level that contains the data we want with Cube>Cube>Cube, where the first
one is a child of the Envelope, and the last one contains our exchange rate.

Listing 6.43 envelope.go: structures used for XML decoding

type envelope struct {


Rates []currencyRate `xml:"Cube>Cube>Cube"` #A
}

type currencyRate struct {


Currency string `xml:"currency,attr"` #B
Rate money.ExchangeRate `xml:"rate,attr"`
}

There are a couple of fields that we don’t need, such as the time or the
subject. Be minimal when you are declaring tags and only retrieve what you
need.

Compute exchange rate

Now, we need to compute the exchange rate between the source and target
currencies. Remember that the European Central Bank’s exchange rates are
all answers to “which quantity of currency X do I get for 1 euro?”. This
means that the euro can be used as a “transition” currency, or, even better,
that the rate to convert from one currency to another is simply computed with a
hop through euro-world. The rate from CAD to ZAR is, by transitivity, the rate
from CAD to EUR multiplied by the rate from EUR to ZAR. We only have
access to the EUR to CAD exchange rate, but we’ll assume, in this project,
that the CAD to EUR exchange rate is the inverse of the EUR to CAD
exchange rate. For instance, if 1 EUR buys 1.44 CAD and 19.20 ZAR, then
1 CAD buys 19.20 / 1.44, which is about 13.33 ZAR.

How do we retrieve our two change rates from the decoded list? One
approach would be to go through the list and retrieve them. We would need
to stop as soon as we found both of them. If, when we reach the end of the
list, we didn’t find them, then we can send an error. This implementation
would work, but it doesn’t make the easiest code to read.

Considering the very low-performance requirements in our situation, we


prefer to optimise for maintainability rather than memory footprint and go
with another solution: store all the decoded currencies and their change rates
in a map, and add the euro - as it’s absent in the payload from the European
Central Bank. Then, when need be, we can get the interesting values from the
map. Registering the values has O(N) time and memory complexity,
because we go through the list once; and getting a value from a map is, in our
case, O(1) in time complexity. It adds a little more memory footprint than only
storing two values, but there are not millions of currencies in the world, even
if you choose to consider all of human history. We should be fine.

The map key is the currency and the value is the rate. Note that we could improve readability by naming the currency code something other than string, but as the money package did not deem it necessary to expose the type, let’s follow suit and roll with a simple string.

We could write a function that takes an envelope as its input and returns the
map, or we could write a method. Both implementations would be clear here.
Using a method implies that we may be changing the object that holds it,
whereas using a function should not. It is more a convention than a real
constraint.

Listing 6.44 envelope.go: Map the rates for easy search

const baseCurrencyCode = "EUR"

// exchangeRates builds a map of all the supported exchange rates.
func (e envelope) exchangeRates() map[string]money.ExchangeRate {
    rates := make(map[string]money.ExchangeRate, len(e.Rates)+1) #A

    for _, c := range e.Rates {
        rates[c.Currency] = c.Rate
    }

    rates[baseCurrencyCode] = 1. #B

    return rates
}

From there it becomes easy to compute the desired change rate:

Listing 6.45 envelope.go: Compute change rate

// exchangeRate reads the change rate from the Envelope's contents.
func (e envelope) exchangeRate(source, target string) (money.ExchangeRate, error) {
    if source == target {
        return 1., nil #A
    }

    rates := e.exchangeRates()

    sourceFactor, sourceFound := rates[source] #B
    if !sourceFound {
        return 0, fmt.Errorf("failed to find the source currency %s", source) #C
    }

    targetFactor, targetFound := rates[target] #D
    if !targetFound {
        return 0, fmt.Errorf("failed to find target currency %s", target)
    }

    return targetFactor / sourceFactor, nil #E
}

We are using the shortened syntax for the currencies in input: when multiple parameters share the same type, the type is only declared once, at the end of the list. Compare:

source, target string

source string, target string

Don’t forget to test! You don’t need us for this one. While sometimes it is OK to skip a unit test on some intermediate layers, this kind of computation should raise a test flag and sirens: coming back to change an implementation detail may result in this division being switched around, and the next thing you know the FBI is after you for illegal money. But switching a division around, who would do that? It never happens! You would be surprised.
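Here is a minimal sketch of what such a test could look like, in an internal test file (since envelope is unexposed), assuming money.ExchangeRate is a simple floating-point type as used elsewhere in this chapter:

func TestExchangeRate_TransitiveRate(t *testing.T) {
    e := envelope{
        Rates: []currencyRate{
            {Currency: "USD", Rate: 2.},
            {Currency: "RON", Rate: 6.},
        },
    }

    got, err := e.exchangeRate("USD", "RON")
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if got != 3. {
        t.Errorf("exchangeRate(USD, RON) = %v, want 3", got)
    }
}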
Once you are done, let’s write the function that reads from the response body
and returns the exchange rate.

Listing 6.46 envelope.go: Read change rate

func readRateFromResponse(source, target string, respBody io.Reader) (money.ExchangeRate, error) {
    // read the response
    decoder := xml.NewDecoder(respBody)

    var ecbMessage envelope
    err := decoder.Decode(&ecbMessage)
    if err != nil {
        return 0., fmt.Errorf("%w: %s", ErrUnexpectedFormat, err)
    }

    rate, err := ecbMessage.exchangeRate(source, target)
    if err != nil {
        return 0., fmt.Errorf("%w: %s", ErrChangeRateNotFound, err)
    }

    return rate, nil
}

As you can see, we are limiting the scope of the arguments to strings and an io.Reader. It could be tempting to send the full money.Currency and *http.Response that the main function actually has in hand, but it makes testing harder and blocks future changes for no good reason.

The last thing we need to do for the exposed method of the package is to call this last function. Easy, right?

readRateFromResponse(source.ISOCode(), target.ISOCode(), resp.Body)

The ISO codes of source and target are not exposed, though. They are accessible via the String() method, so it would be tempting to use that, but what guarantees that the stringer will always return the ISO code? If somebody wants to make the CLI nicer and print the currency’s full name in English, they would just have to change the stringer. Boom, nothing works anymore.

We can instead add an ISOCode method in the money package, one that provides the ISO code and whose behaviour is not going to change for the sake of presentation.
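A minimal sketch of what this method could look like, assuming the Currency struct keeps its code in an unexposed field (the field name code is our assumption here, not necessarily the one used in the money package):

// ISOCode returns the ISO 4217 code of the Currency.
// Note: the field name "code" is an assumption about Currency's internals.
func (c Currency) ISOCode() string {
    return c.code
}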
The final exposed function should look something like this.

Listing 6.47 ecb.go: Fetch exchange rate

// FetchExchangeRate fetches the ExchangeRate for the day and returns it.
func (ecb EuroCentralBank) FetchExchangeRate(source, target money.Currency) (money.ExchangeRate, error) {
    const path = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml"

    resp, err := http.Get(path)
    if err != nil {
        return 0., fmt.Errorf("%w: %s", ErrServerSide, err.Error())
    }

    // don't forget to close the response's body
    defer resp.Body.Close()

    if err = checkStatusCode(resp.StatusCode); err != nil {
        return 0., err
    }

    rate, err := readRateFromResponse(source.ISOCode(), target.ISOCode(), resp.Body)
    if err != nil {
        return 0., err
    }

    return rate, nil
}

Testing around HTTP calls

Now, this part is pretty important for our tool, so how do we test it? The code
is explicitly calling a hard-coded URL. Do we really want to make an HTTP
call every time we run a unit test? What if the remote server is not
responding, what if we lost the connection? Unit tests should be local and
fast. We certainly don’t want to have a real call during unit testing.

The httptest package exposes the infrastructure to set up a small HTTP server for the tests. It can run a very tiny mock HTTP server for the few milliseconds when your test requires it. Then you just need to pass the server’s URL to the caller and define what you expect as a response. Any query to that server’s URL will always return a specific message, as we’ll see below.

But wait: in our code, the URL is a constant. As mentioned before, in a production environment, we would want to make this configurable and make it part of the Client object. Let’s do that.

Add a url field to the object. Ideally, a New function would be tasked with taking this URL as a parameter and creating the object, but we can take a small shortcut.

Listing 6.48 ecb.go: Fetch exchange rate with mockable path

// Client can call the bank to retrieve exchange rates.
type Client struct {
    url string #A
}

// FetchExchangeRate fetches today's ExchangeRate and returns it.
func (c Client) FetchExchangeRate(source, target money.Currency) (money.ExchangeRate, error) {
    const euroxrefURL = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml"

    if c.url == "" { #B
        c.url = euroxrefURL
    }

    resp, err := http.Get(c.url)

    // ...

This should work the exact same way if you test it manually. The only
difference is that now you can automate the test.

In your test, start by creating a server using the httptest facilities. NewServer takes a function of the type HandlerFunc, which is the standard HTTP handler in Go, defined not in the httptest package but in the real http package.

ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, `...`)
}))
defer ts.Close() #A

Here, the parameter is an anonymous function that we cast into the HandlerFunc type. What the server does is just write the expected response into the ResponseWriter every time a query is sent to its URL.

We can then pass this server’s URL to our Client, and the rest of the test is quite easy for you by now:

Listing 6.49 ecb_internal_test.go: Testing the Fetch function

func TestEuroCentralBank_FetchExchangeRate_Success(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, `<?xml...>`)
    }))
    defer ts.Close() #A

    ecb := Client{
        url: ts.URL, #B
    }

    got, err := ecb.FetchExchangeRate(mustParseCurrency(t, "USD"), mustParseCurrency(t, "RON")) #C

    //...

Note: why are we copying mustParseCurrency in two places? Many times, a small copy is better than a big dependency. This way, both can evolve independently. To keep only one, you would need to expose it in a non-test file… let’s stop there and copy a handful of lines.

Of course the example test that we are giving here is just one test case. Don’t
forget to add more cases, including error cases. Instead of writing a pretty
XML response into the ResponseWriter, try this line:

w.WriteHeader(http.StatusInternalServerError)

What would you expect if the XML is broken?
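As a sketch, an error case could look something like the following (how precise you can be about the returned error depends on the sentinel that checkStatusCode reports for server errors; here we only check that an error is returned):

func TestEuroCentralBank_FetchExchangeRate_ServerError(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusInternalServerError)
    }))
    defer ts.Close()

    ecb := Client{
        url: ts.URL,
    }

    _, err := ecb.FetchExchangeRate(mustParseCurrency(t, "USD"), mustParseCurrency(t, "RON"))
    if err == nil {
        t.Error("expected an error, got nil")
    }
}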

6.5.5 Use in the money package

Good. Now, we can retrieve the rate. Back in main, how do we use the previous section with Convert?

Interface definition

In some common languages, an interface is a contract that every implementation must satisfy. In Go, it's quite the opposite. We don't write interfaces as long as we don't need them. And we start needing them when they make our life simpler. We have a saying in Go that could be counterintuitive if you come from a strongly object-oriented language like Java or C++, where interfaces are explicit.

Interfaces should be discovered, not designed.

What does it mean? We have written a dependency for Convert without first defining the contract between the two packages: money.Convert has to use something from the ecbank package. Is it too late now? It never is. How are we going to tell Convert which rates provider to expect?

Well, we just do. We already have a function signature that is generic enough to be mocked in tests. Let’s put it in an interface for Convert to use. As usual, put it where you need it: you can declare it next to Convert itself.

Listing 6.50 exchangerates.go: Interface definition

type exchangeRates interface { #A
    FetchExchangeRate(source, target Currency) (ExchangeRate, error)
}

Why is it not exposed? How can other packages use the interface? Actually, we don’t want anyone else to rely on this interface; it’s ours, in this package. If someone else needs to call the same API, they will define their own one-line interface and mock it the way they want. It reduces coupling quite drastically.

Let’s see how we can use the interface.

Use in Convert

We can now add the dependency to Convert’s signature, and call it to retrieve
the current rate. As the caller is responsible for providing the implementation
of the rates provider, it will know about all the kinds of errors that it can
return. If anything wrong happens, we can simply wrap the error that we get
and bubble it up.

Listing 6.51 convert.go: Final implementation

// Convert applies the change rate to convert an amount to a target currency.
func Convert(amount Amount, to Currency, rates exchangeRates) (Amount, error) {
    // fetch the change rate for the day
    r, err := rates.FetchExchangeRate(amount.currency, to) #A
    if err != nil {
        return Amount{}, fmt.Errorf("cannot get change rate: %w", err) #B
    }

    convertedValue := applyExchangeRate(amount, to, r)

    if err := convertedValue.validate(); err != nil {
        return Amount{}, err
    }

    return convertedValue, nil
}

Fix the test

Time to fix your test. If you want to be fast, the smallest thing that
implements your local interface is nil . Let’s try:

got, err := money.Convert(tc.amount, tc.to, nil)

It compiles. But if you try to run it, you will get a full-fledged panic attack:

panic: runtime error: invalid memory address or nil pointer dereference

This message means you are trying to access a field or a method on a pointer that is clearly wrong, which is the case of our nil value. It is what the C family of languages calls a segmentation fault and what the Java family calls a NullPointerException. At runtime, your machine is trying to access a function on an object whose address is nothing (recognisable by the value zero). Result: boom.

Wasn’t that fun to watch? At least this happened in a test environment, and not on a production platform. Meeting these runtime errors is always valuable. Once you’ve experienced a few, you know precisely what you’re looking for when investigating an issue. Did we access a slice past its bounds? Did we dereference an invalid pointer? Did we just divide by 0?

Let’s fix this by implementing a stub in the test file: a very minimal struct that implements our interface and returns the values that we require for testing. If, in a bigger project, you require a mock, there are a handful of tools out there that can generate one from the interface definition, e.g. minimock or mockgen (https://github.com/golang/mock). The difference between a mock and a stub is that the mock will check whether it has been called and make the test fail if the expected calls don’t match the actual ones. A stub will only imitate the expected behaviour when called, but cannot validate anything.

Listing 6.52 convert_test.go: Stub the dependency

// stubRate is a very simple stub for the exchangeRates.
type stubRate struct {
    rate money.ExchangeRate
    err  error
}

// FetchExchangeRate implements the interface exchangeRates with the same
// signature, but the arguments are unused for test purposes.
func (m stubRate) FetchExchangeRate(_, _ money.Currency) (money.ExchangeRate, error) { #A
    return m.rate, m.err
}

Exercise: Update the rest of the unit test. You need to add the stub to the test
case scenario and give the rate that you expect to get from the dependency.
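For instance, one call in a test case could look like the following sketch. The names tc.amount, tc.to, and tc.expected stand for whatever fields your table-driven test cases use, and reflect.DeepEqual (from the reflect package) is just one safe way to compare Amount values:

stub := stubRate{rate: 1.5}

got, err := money.Convert(tc.amount, tc.to, stub)
if err != nil {
    t.Fatalf("unexpected error: %v", err)
}
if !reflect.DeepEqual(got, tc.expected) {
    t.Errorf("Convert() = %v, want %v", got, tc.expected)
}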

6.5.6 Dependency injection in main

The last missing piece consists in building the ecbank.Client and passing it to Convert.

Listing 6.53 main.go: Use the conversion rate

[...]
rates := ecbank.Client{}

convertedAmount, err := money.Convert(amount, toCurrency, rates)
[...]

And tada! Try it, go on. Just because we’re all tired by now, here is an example command again.

go run . -from EUR -to JPY 15.23

You can try invalid currencies and numbers, have fun. Play, it’s not your
money anyway.

6.5.7 Sharing the executable

While “go run .” is nice, sometimes you don’t want to share the source code with other people, only the compiled binary. Generating the executable file in Go is achieved with the following command:

go build -o convert main.go

This command generates an executable binary file, convert, in the current directory. Of course, the location can be changed: -o takes the path of the file to generate, and when the flag is omitted, go build names the binary after the package and writes it to the current directory. Then, we can execute it:

./convert -from EUR -to JPY 15.23

6.6 Improvements
There are a lot of tiny problems with the implementation we presented here.
The goal of the chapter was to reach a working solution, not a perfect one.
Let’s go over a few ideas and implement one.

6.6.1 Caching

First, we are calling the bank for every run of the tool, which is a waste of resources. For example, you could be tempted to write a script that reads a long list of amounts from a file and, for each line, changes the amount to Philippine pesos. Currently, there would be an identical HTTP call for each line. This is quite time-consuming.

A solution would be to dump the rates in a temporary file with the date in the name. The ecbank.Client struct would have a pointer to that file. If the file doesn’t exist, fetch the rates and dump them. If the file is too old, same. Otherwise, load from the file.
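A minimal sketch of the freshness check; the helper name ratesAreFresh is hypothetical and not part of the chapter’s code:

// ratesAreFresh reports whether the cache file exists and was written recently.
func ratesAreFresh(path string) bool {
    info, err := os.Stat(path)
    if err != nil {
        return false // absent or unreadable: fetch the rates again
    }
    return time.Since(info.ModTime()) < 24*time.Hour
}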

You would then need to provide a way to flush the cache with a different
flag.

6.6.2 Timeout

You might someday be in the situation of one of the authors of this book: trying to get the change rate between euros and British pounds, but there’s some sea above your head and you’ve lost the 4G signal. Fun fact: the world’s longest undersea section for trains is 37.9 km, and the fastest train can only go at 160 km/h inside. What happens when you make an HTTP call and the server never answers? Nothing, unless you plan for it by means of a timeout.

In this code, we’re calling http.Get. Under the hood, this makes use of the default client available in the net/http package. While this is perfectly fine for a small example such as this chapter’s, it is certainly not good enough for production code. Running go doc http.Client shows that one of the fields of the Client structure is Timeout. As you would expect, setting this will take care of interrupting calls exceeding a given amount of time. The default value of this field, which is the default value of the http.DefaultClient, is zero, which, as the doc reads, means “no timeout”. Using http.Client{Timeout: 5*time.Second} would, for instance, create a client with a specific timeout, which can be safely used instead of the default client.

If you look at how the client is defined in the code, you will see a lot of
default zero values:

Listing 6.54 net/http/client.go: Implementation of the DefaultClient

// DefaultClient is the default Client and is used by Get, Head, and Post.
var DefaultClient = &Client{}

It is a pointer, so anyone changing it will change it for the whole program. This is actually a design choice: you can start the program by setting a timeout on this variable, and the rest of the program can rely on it and use that timeout value. But it’s a global variable that the rest of the program can change - you have to hope that none of your libraries sets a different timeout than yours inside the DefaultClient. If you want to fathom how annoying this would be, simply imagine that your smartphone has unfortunately swapped the phone numbers of two of your contacts, and you have to figure out a way to find out.

Instead, we can declare our own http.Client, only to be used in our Client structure, and use the Get method of that client. This lets us set a timeout that will be used during, and only during, our calls to the European Central Bank.

Listing 6.55 ecb.go: Timeout example

// Client can call the bank to retrieve exchange rates.
type Client struct {
    client http.Client #A
}

// NewClient builds a Client that can fetch exchange rates within a given timeout.
func NewClient(timeout time.Duration) Client { #B
    return Client{
        client: http.Client{Timeout: timeout},
    }
}

// FetchExchangeRate fetches the ExchangeRate for the day and returns it.
func (c Client) FetchExchangeRate(source, target money.Currency) (money.ExchangeRate, error) {
    const path = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml"

    resp, err := c.client.Get(path) #C

    [...]

The main function will have to adapt:

rates := ecbank.NewClient(30 * time.Second)

We are now ready to handle undersea tunnels. The http.Get function will immediately return an error (and an unusable response) if the timeout is reached, and it’ll be up to the caller to decide what to do. The net/http package warns us, in the documentation of http.Get, that the errors returned are of type *url.Error (the pointer information is very important), and that we can use that to determine whether the call timed out. This is a nice opportunity to discover a useful function of the errors package.

We’ve already seen that we can test if an error is of a specific flavour with errors.Is. Sometimes, we want to inspect the error a bit further, especially when we know there is something more than an error message that can be extracted from the error. In this case, we are informed that the error returned is, in fact, of a specific type. This means we could cast it to that type:

urlErr, ok := err.(*url.Error)

This would then allow us, provided the ok variable is true, to access fields and methods of the *url.Error structure. Let’s have a look at what’s over there: go doc url.Error.

As we can see, there are several exposed fields in that structure - the operation that was attempted, the URL that was requested, and the error itself. But what is interesting for us, here, is that we can call a Timeout method that returns a boolean value. This is how we can ensure we did indeed reach a timeout.

if urlErr.Timeout() {

This is nice, but there is a nicer and more idiomatic way of performing this operation: we can make use of the errors.As function. Its signature is simple: it takes an error and a target, and it returns a boolean - whether it succeeded. When it did, the target now contains the value of the original error.

Because errors.As is writing to its target parameter, we need to provide it as a pointer to a variable that will receive the value, just as we had to do earlier with the Decode method of the encoding/xml package. In our case, we want to pass a pointer to a variable of type *url.Error. Yes, that’s a pointer to a pointer. But it’s important to understand that we merely pass the address of a variable, and the fact that this variable is itself a pointer has nothing to do with it - we only retrieved this information from the http.Get documentation. Then, if everything went as expected, we can use the (now populated) *url.Error to check if it is indeed a timeout!

Listing 6.56 ecb.go: Checking for timeout

func (c Client) FetchExchangeRate(source, target money.Currency) (money.ExchangeRate, error) {
    [...]
    resp, err := c.client.Get(path)
    if err != nil {
        var urlErr *url.Error #A
        if ok := errors.As(err, &urlErr); ok && urlErr.Timeout() {
            // This is a timeout!

It’s now up to you to decide what should be done when a timeout is reached.
It could be interesting to retry after a few moments - maybe the 4G coverage
is now better and we’re out of that undersea tunnel. Or we could decide that
any error we face is fatal for the process of converting money, and it’s not
our converter’s responsibility to choose how to deal with network errors.

Beyond timeouts

As we’ve seen, our http.Client structure can be tuned with a timeout. But a timeout isn’t the only value we can set for our client - for instance, here, we have not overridden the Transport field of our http.Client, which means we’ll be using the http.DefaultTransport in our client. The arguments for using a specific http.Client apply here again - we might also want to tune the Transport within our Client.
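For instance, a client with both a timeout and a lightly tuned transport could be built like this sketch (the values are illustrative, not recommendations):

client := http.Client{
    Timeout: 5 * time.Second,
    Transport: &http.Transport{
        MaxIdleConns:    10,               // cap the number of idle connections kept open
        IdleConnTimeout: 30 * time.Second, // close idle connections after a while
    },
}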

Testing our implementation

We can’t re-use the same test as previously, since we were only passing the
URL of the bank to the Client . This time, we need to use a client that will
proxy to the server’s URL. We can do this with the Transport.Proxy field of
the http.Client structure. Here’s the implementation for this:

Listing 6.57 ecb_internal_test.go: Testing exchange rates

func TestEuroCentralBank_FetchExchangeRate_Success(t *testing.T) {
    ts := httptest.NewServer(...)
    defer ts.Close()

    proxyURL, err := url.Parse(ts.URL)
    if err != nil {
        t.Fatalf("failed to parse proxy URL: %v", err)
    }

    ecb := Client{
        client: http.Client{
            Transport: &http.Transport{
                Proxy: http.ProxyURL(proxyURL), #A
            },
            Timeout: time.Second, #B
        },
    }

    got, err := ecb.FetchExchangeRate(mustParseCurrency(t, "USD"), mustParseCurrency(t, "RON"))

    ...

The rest of the test is the same as before. But this is only testing the happy
path, let’s also test the case where a timeout occurs! For this, we’ll change the
behaviour of the NewServer we build in the test, and, instead of writing an
XML to the response, we’ll instead simulate a long wait with time.Sleep .
We’ll re-use a similar client as in the successful test, and this time, we’ll
check the error that is hopefully returned!

Listing 6.58 ecb_internal_test.go: Testing timeout

func TestEuroCentralBank_FetchExchangeRate_Timeout(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(time.Second * 2) #A
    }))
    defer ts.Close()

    proxyURL, err := url.Parse(ts.URL)
    if err != nil {
        t.Fatalf("failed to parse proxy URL: %v", err)
    }

    ecb := Client{
        client: http.Client{
            Transport: &http.Transport{
                Proxy: http.ProxyURL(proxyURL),
            },
            Timeout: time.Second, #B
        },
    }

    _, err = ecb.FetchExchangeRate(mustParseCurrency(t, "USD"), mustParseCurrency(t, "RON"))
    if !errors.Is(err, ErrTimeout) { #C
        t.Errorf("unexpected error: %v, expected %v", err, ErrTimeout)
    }
}

Timeout’s value

When it comes to choosing a “good” timeout value, you want to think about the call you’re executing. Your timeout is your patience; you don’t want to think about what the remote service has to do to execute your query, because you shouldn’t know. Their implementation and performance might change without you having to change your code. A rule of thumb, though, when making calls to external resources is that a call that leaves your environment - be it a local network or a cloud platform - should be allowed up to a few seconds. You need to account for all these time-consuming network handshakes. When running locally, a few seconds are only tolerable for big processes, and the usual value is less than a second. These numbers aren’t set in stone, as they need to be tuned for your own use cases. Allowing only 20 milliseconds for a call over the internet is too short, and if your timeout for a local query is 30 minutes, there is something fishy in the architecture.

6.6.3 Alternative tree

In your everyday developer life, you will be led to import external libraries from the open-source world. This is very frequent for most projects. In general, libraries are organised with the exposed types at the root of the module, as it minimises the path to reach the required package for users. Compare github.com/learngo-pockets/money with github.com/learngo-pockets/moneyconverter/money.

We have created a folder named money containing one file, exposing all the methods and types to the users, with everything at the root.

If you need a main, it is common to create it in a folder `cmd`, for command. Here is an example of a project importing our library.

$ tree
.
├── cmd/main.go
└── github.com/learngo-pockets/money
└── money.go
└── go.mod

Congrats! You are done! It was a tough chapter with different concepts that
we will practice again over the following chapters.

6.7 Summary
The flag package exposes functions that allow us to retrieve both
arguments and optional parameters from the command line. We can
even set default values for our flags. Remember to call flag.Parse()
before checking the values of the flags!
When implementing a functionality, it is good to declare types that
mirror the core entities that we will have to handle. In our case, we
created a Currency, a Decimal, and an Amount, and we defined what Convert
should do before writing a single mathematical operation or calling a
single function. We also knew from the start how to organise our code,
and which types and functions should be exposed.
The fmt.Stringer interface is a simple interface that makes printing
complex structures nice and easy.
Floating point numbers are inaccurate, at best. There should always be
room for a margin when comparing two floating point numbers.
Operations on floats will sometimes seem nonsensical, because of the
precision limitation. Knowing their precision and the precision of the
computation they’re used for will help make a choice of float32, float64,
big.Float - or to go for something else.
A package is most often built starting with its API, and finishing with its
unexposed functions.
Go’s net/http package offers functions to perform HTTP calls, such as
GET or POST requests, over the network. It offers a default client,
which can be used for prototyping, but shouldn’t be kept in production-
level code, for security reasons.
An HTTP call will return a response that contains a status code. Checking
this status code is mandatory - the code informs how meaningful the
body of the response is. Status codes are divided into 5 classes, but the
most important are 200 (and 2xx), which means everything went fine,
400 (and 4xx) which means the request might be incorrect - some fields
could be missing, some authentication could be wrong, … -, and 500
(and 5xx), which mean the server faced an issue and couldn’t process
the request.
Testing HTTP calls should be agnostic from external behaviour. Go
provides the httptest package to mock HTTP calls by providing an
infrastructure to set up an HTTP server for tests, including a useful
NewServer function.
A clean code separates the responsibilities of each package. Retrieving
data isn’t the same task as computing data, and should be handled in a
separate piece of code. Go offers extremely simple interfaces that allow
for dependency injection. Dependency injection makes code simple, and
tests even simpler.
Stubs are a very nice way of implementing interfaces. Simply declare a
struct where you need to implement an interface - usually in a _test.go
file - and have it implement the interface. A stub is very useful when
trying to improve test coverage, but it can only be used for unit tests, as
they don’t check the whole logic of the call.
Exposed functions, in a package, should return sentinel errors. This
makes using a package clean and simple. Within the implementation of
the package, the decision of creating the sentinel errors in exposed
functions or in unexposed functions is left to the developer.
Using errors.Is is how we test whether an error is a sentinel. Using
errors.As is how we access an error’s fields and methods.
Go’s encoding/xml package provides a function and a method to decode
an XML message. In order to be able to decode some bytes into a
structured variable, Go’s syntax requires that the fields we want to
decode be described with the XML path where their value can be read. It
is even possible to skip layers by using the > character in that path.
The toolchain offers the go build -o path/to/exec . command,
which generates an executable file at the specified location.
7 Caching with generics
This chapter covers

Using generics in Go
Not using generics when they are not needed
Creating type constraints
Goroutines, parallelism and concurrency
Race conditions
Mutexes
Some Go proverbs

Imagine being in school, when there was an important piece of homework that you realised was due tomorrow, and you hadn’t even started. How nice was it to phone a friend and ask if, maybe, they had the answer to question 2.b? Of course, you would have their answer, not yours, but getting this solution early also meant you could spend time on some other part of the homework. Of course, teachers will discourage you from being lazy and cheating - the whole point of these assignments is to have you use your brain power, to either learn or understand a lesson.

Computers, on the other hand, won’t judge you for taking shortcuts. A cache is such a shortcut: it’s a key-value storage that gives access to data that has already been computed at least once in the past - as long as it still makes sense to access it.
Caches are often used when we know a slow function returning a value will
be called several times with the same input - and that the output value should
be the same every time. There are cases when we want to use a cache - for
instance, you know that “the city with the longest name is: Bangkok”. This
isn't something that you need to check every morning. There are cases when
we specifically don’t want to use a cache - “The current exchange rate of the
Algerian dinar to Euros is: ??” - in these cases, an outdated response could
lead to confusion. And sometimes, there are situations where using a cache
could be acceptable - “The total population of Ethiopia is: 123,967,194”
doesn’t really require an instantaneous answer, the value from last week
having the same magnitude as that of this week.

In this chapter, we will present generics and implement a naive cache.
Through tests, we will show our first approach isn’t good enough and needs
to be strengthened. Then, we will add a “time to live” to values in our cache,
to make sure we don’t store outdated information. Finally, we’ll cover some
good practices of using caches.

Requirements:

Write a cache of type key, value
We should be able to store data, read it, and delete it
It should be concurrency-safe

7.1 A naive cache

Let’s start with a short definition. A cache is a storage for retrieving values that were previously saved. In order to access these values, the user of the cache will use the same key as they did when they registered the value the first time.

There are a few important notions regarding a cache that need to be evoked here.

A cache should be able to store any new pair of key and value;
When retrieving a value using a key, the cache should return what was previously stored in it;
Usually, the whole reason for a cache is speed: it needs to be fast.

Here are some examples of pairs of keys and values a cache could hold:

Phone number (string) - Name (string)
Year (uint) - All the medallists, per country, at these Olympics (map[Country][]Athlete)

Through these examples we would like to show that a key can take many
forms, and that a value can take even more. We’ll cover this in detail later,
but for now, we need to understand generics and how to write them in Go, so
we can write our first implementation of a cache.
We need to start with a bit of theory, but we will try to keep it short. We are
here primarily for hands-on projects, after all.

7.1.1 Introduction to generics

Go is strongly typed by design. It means every variable has to have an explicit type. You can tell by looking at the code which variable is a string and which is an int, and therefore you can understand how they interact and what the code does. If your function takes a float64, there is nothing else you can give it but a float64.

func prettyPrint(f float64) {
    fmt.Printf("> %f", f)
}

prettyPrint(.25)

This is nice and clear, but how can we write a function that prints an int? Do we have to copy the three lines and change a few characters? Well, believe it or not, ancient generations of Gophers remember the time when yes, you indeed had to copy all of these little functions around. But generic types, also known as generics, arrived and saved us from so much boilerplate.

Generics let you write code without explicitly providing the type of the data a
function takes or returns. The types are instead defined later when you use
the function. Functions are just an example. You cannot fathom the amount
of boilerplate code (and bugs) that got swept away by this great feature. But
like all good things, we should not overuse them, and we should know what
we are doing with them.

How does it work? Instead of specifying a real type, like float64 above, we define a generic type and give it a constraint. The least-constraining type constraint is the keyword any, which really means what it says: any type at all. Here is an example of a single function that first accepts a float64 as its parameter t, and then accepts a string as the same parameter t. You can notice that the signature of the function is somewhat different - we’ve used square brackets between the name of the function and its parameters. We’ve declared in these square brackets that T is a placeholder for the type constraint any for the scope of this function’s declaration. The type T is set when we call the function.

func prettyPrint[T any](t T) {
    fmt.Printf("> %v", t) #A
}

prettyPrint[float64](.25) #B
prettyPrint[string]("pockets")

In this example, we can no longer use the %f verb in Printf, as this is only usable with floating point numbers. We specify, when we call the function, what the type T should be.

But what is type T, specifically? It depends on what we decide to pass to the function when it is called. The function is parameterised: at compilation time, the required flavours, here a float64 and a string, will both be generated and compiled like any concretely typed function. The only difference is the reusability of your code, which is now way better.

Type inference

If the type T appears in the parameters of the function, the Go compiler is able to detect which type it represents each time the function is called and doesn’t need the hint we’re giving it by using square brackets when we call the function. For instance, in the previous example, since the parameter of the function is precisely of type T, the Go compiler will notice that we call this function with a float64 and with a string parameter. The lines above can be simplified to the following:

func prettyPrint[T any](t T) {
    fmt.Printf("> %v", t)
}

prettyPrint(.25) #A
prettyPrint("pockets") #B

Type inference might make generics seem a bit magical. It’s worth noting
type inference will only happen on input parameters, not on returned types.

Type constraints

We saw that any, introduced by Go 1.18 along with generics, is a keyword that can be used as a type constraint that is, actually, no constraint at all. It is equivalent to interface{}. There is only one other built-in constraint: comparable is implemented by all comparable types, that is, all types that support comparison of two elements using == or !=. Into this category fall booleans, numbers, strings, pointers, channels, arrays of comparable types, and structs whose fields are all comparable types. Slices, maps, and functions are not comparable. If you create a map and want its key to be generic, it will have to be comparable. More on that soon.
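Here is a small sketch of a function using the comparable constraint:

// contains reports whether needle appears in haystack.
func contains[T comparable](haystack []T, needle T) bool {
    for _, v := range haystack {
        if v == needle {
            return true
        }
    }
    return false
}

contains([]int{1, 2, 3}, 2)          // true
contains([]string{"go", "fun"}, "a") // false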

We have seen in the previous chapters how interfaces work, and how any
structure can implement an interface as long as it has all methods attached
with the right signature.

Type constraints are interfaces, and, of course, you can declare your own. As
%v is not exactly formatting prettily, here is a different version:

func prettyPrint[T fmt.Stringer](t T) {
    fmt.Printf("> %s", t) #A
}

The difference here is that now we cannot give floats and integers to our function anymore, because they don’t implement the Stringer interface, but we can give any structure that does implement it. If you remember the Amount from our previous chapter’s money converter - it did implement this interface, and we can therefore call:

amount, err := money.NewAmount(...)
prettyPrint(amount)

Generic types

Let’s say we want to define a group of Amounts so that we can perform some
specific operations on them.

type Group []Amount

func (g Group) PrettyPrint() {
    for _, v := range g {
        prettyPrint(v)
    }
}

func main() {
    var g Group
    g.PrettyPrint()
}

But of course, suppose we wanted to do the same to pencils and clouds and
dresses. So, we would decide to make this group generic:

type Group[T any] []T

Group is parameterised: it is a group of T, defined as a slice of T. Go won’t allow us to write a method with a plain slice receiver type - func (s []string) Do() - the compiler would complain with a message saying “invalid receiver type”. But using our Group, we are able to write a method on that parameterised type:

func (g Group[T]) PrettyPrint() {
    for _, v := range g {
        prettyPrint(v)
    }
}

The receiver of the method needs to be parameterised too. Indeed, the variable v in the loop will be of type T, and the compiler needs to know where to look for what T means. Until we instantiate a Group, it means nothing.

var g Group[Cloud]
g.PrettyPrint()

Here we are: the compiler can now create a version of the Group that supports Cloud values, and only clouds.

Declaring your own constraints

One last thing you need: what if you want to support multiple integer types? Or support int and your own PocketInt but nothing else? You cannot make built-in types implement an interface, but you can define union interfaces.

type summable interface {
    int | int32 | int64
}

This means that any function that can take a summable as parameter will
accept any of int, int32 or int64, and will be able to use the + operator on it.

However, now, if you define a new type that is an int (in other words, a new
type whose underlying type is int), it won’t be included. For example:

type age int

To support all things that are actually ints, we use the ~int (with a tilde) syntax to include all types whose underlying type is int. The following interface includes the type age above.

type summable interface {
    ~int | ~int32 | ~int64
}

If you are talking about stars and specifically need an int64 for ages, you can
change your type and everything will fall into place.
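As a sketch, a function constrained by summable can now use the + operator on its generic values:

// sum adds all the values of a slice of any summable type.
func sum[T summable](values []T) T {
    var total T
    for _, v := range values {
        total += v
    }
    return total
}

sum([]age{12, 30}) // works because age's underlying type is int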

Enough with theory, let’s write this cache.

7.1.2 Project initialisation

Since this project is about creating a library, we’ll use a common organisation
of our files. In Chapter 4, we exposed a module that contained a package
pocketlog . Although this was nice when we needed to introduce packages,
most open source libraries will expose their types at the root of the module,
as this prevents having cumbersome import paths. Here, we want our users to
import our cache package with the minimum effort and this means placing
our cache as early as possible.

Our file organisation will be as follows:

learngo-pockets/genericcache
├── cache.go
└── go.mod

This will allow anyone to use our library by importing our module, and then using genericcache.Cache. We will, of course, require other files - but these are the bare necessities that our users will need.

Start by creating a genericcache directory, and in there run the following line to initialise the module:

go mod init learngo-pockets/genericcache

7.1.3 Implementation

As we’ve seen earlier, a cache is a place to store data in a way we can retrieve
it easily. In our case we decided to have a key value storage that could be
used for almost any type of key, and any type of value. We need our keys to
be comparable - that is, we need to know if two keys are considered equal.
This is achieved with the constraint comparable .

In our tests, we use simple cases where the key is an integer and the value a
string - but feel free to use other types instead.

Create a type Cache in your package, in a cache.go file. It is a struct holding one field: a map. The type of the keys and values is parameterised.

Listing 7.1 cache.go

// Cache is key-value storage.
type Cache[K comparable, V any] struct {
    data map[K]V
}

As you can see, we have defined two type parameters, giving them one-capital-letter names, as per convention. K for key and V for value seem as self-explanatory as we can get.

Already we face a decision: should we store a map[K]V or a map[K]*V? In human terms, should our cache store copies of the user’s values, or should we only store pointers to them? There are pros and cons to each choice, and a tradeoff has to be found between using more memory (with copies of the values) and using more CPU (because of pointer indirection having to be resolved). We decided to go with the implementation above - after all, if a user wants to use a type V that is a pointer to their values, they can still do it! This also means that our cache is fully responsible for its memory, and that once a value has been added to the cache, it can’t be updated without a call to the cache.

New

Since our cache contains a map, we need to initialise it. A side effect of using
a map in a structure is that the zero value of variables of that type should be
treated with caution. Not being able to use a zero value is an argument to
keep a type unexposed.

Listing 7.2 cache.go

// New creates a usable Cache.
func New[K comparable, V any]() Cache[K, V] {
    return Cache[K, V]{
        data: make(map[K]V),
    }
}

Read

The most common operation executed on a cache is usually to read from it, as that’s the whole point of our cache. Let’s write a method to achieve this. This method accepts a key of the adequate type and returns the value - also of the adequate type. It is up to us to decide what to return when the key isn’t found in the cache, which is the case when the value hasn’t been stored there yet. As a reminder, Go’s default map implementation returns a value and a boolean, and that’s what we’ll be using here. Should you want to use errors, you’ll have to decide how to return the error: via a constant and a local type, or via an exposed variable. You can also, as usual, call errors.New directly in your return line, but it will be harder for users to compare with a known value and decide what to do next. We simply think having the same interface as a map makes things clearer for the end user.

Listing 7.3 cache.go: Implement Read method on the cache

// Read returns the associated value for a key,
// and a boolean set to true if the key is present.
func (c *Cache[K, V]) Read(key K) (V, bool) {
    v, found := c.data[key]
    return v, found
}

The most-used method is written. We could unit-test it, but in this situation, an integration test involving multiple operations seems a better idea. If we write a unit test now, it will be extremely tied to the implementation choices and will not help us in any future refactoring. We would end up testing whether Go can read from a map, which is already covered by the Go developers.

In order to write the integration tests - and to read anything from the cache at all - we first need to be able to write something into it.

Upsert

If we want to read a value from our cache, we need to expose a way of adding it in there.

A question to be raised early here is “should we let the user insert the same key several times?”. In most caches, a “recent” value is usually more interesting than an “old” value. For this reason, we decided to silently overwrite any previously existing value in our cache - but other implementations might decide to return an error if the key is already present when we try to add it to the map.

Since we’re overwriting any potentially existing data, we can name our
method Upsert - a combination of “insert” and “update”. It guarantees the
key will be present in the cache, associated with the specified value.

Upsert could return an error. For instance, we might want to limit the number
of elements in our cache - hitting a limit would be a valid reason to divert
from the happy path. Let’s keep this door open from the start. After all,
returning an error is perfectly normal in Go.

Listing 7.4 cache.go: Implement Upsert function on the cache

// Upsert overrides the value for a given key.
func (c *Cache[K, V]) Upsert(key K, value V) error {
    c.data[key] = value

    // Do not return an error for the moment,
    // but it can happen in the near future.
    return nil
}

Nearly there. You can start writing a unit test that writes, reads, checks the
returned type, checks the returned value for an absent key, writes another
value for the same key, etc. There are a lot of different situations that can
already be covered.
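A minimal sketch of such a test, using an int key and a string value, could look like this:

func TestCache_UpsertRead(t *testing.T) {
    c := New[int, string]()

    if err := c.Upsert(5, "five"); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }

    got, found := c.Read(5)
    if !found || got != "five" {
        t.Errorf(`Read(5) = %q, %v; want "five", true`, got, found)
    }

    if _, found := c.Read(6); found {
        t.Error("did not expect to find key 6")
    }
}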

Delete

The last operation we need is deleting a key from the cache, for when we
know that this value is stale. For example, say we are pre-computing the list
of group conversations that each user is part of. Somebody creates a new
conversation and invites 5 people. Each of them will need a new computation
of the listing, but only when they open the messaging app. We can invalidate
their keys and let the system re-compute next time the list is required.

Most caches grow with no real limit. At the end of this chapter, we’ll expose
a few ways to keep the cache manageable.

Let’s expose a method to ensure an item is no longer present in a cache. This method will take a key and remove the entry from the cache. But what if the key isn’t present in our cache to begin with?

Go’s answer to this question - at least for maps - is to be idempotent: rather than considering that we’re trying to remove an entry, we think of this action as ensuring that this entry is not in the map after the execution of delete. We decided to follow the same philosophical approach with our Delete method. If the key is not in the cache, our method performs no operation - commonly shortened as “no-op”.

Listing 7.5 cache.go: Implement Delete method on the cache

// Delete removes the entry for the given key.
func (c *Cache[K, V]) Delete(key K) {
    delete(c.data, key)
}

To test - or not to test

Unit-testing a one-liner like this is a question that a dev team needs to settle as a group: what level of testing do we want to have here? Are we testing our own code or the Go map itself? Since we didn’t add any logic on top of the map, we decided that our code - so far - didn’t need unit tests. This doesn’t prevent us from writing some small functional tests - a list of calls that ensures that we indeed insert values in our cache, and that we’re able to retrieve them.

Our first implementation of the cache seems to cover our needs - we can store
data, we can retrieve values using the keys that were used to insert them, we
can remove some data, if need be. The world seems perfect. That’s precisely
the moment when someone in your team makes a comment in the code
review - “Is this thread-safe?”

This is an excellent question, and, in order to answer it, we need to understand how we’ll be able to prove it is (or isn’t). “Thread-safety” is invoked when several threads - parallelised parts of a program - access the same resource simultaneously and try to alter it. To imagine a real-world parallel, suppose you’re having dinner with a friend, and suddenly, both of you are thirsty. You both want to grab the bottle of water to fill your glass. If you were to grab the bottle at the same time, and pour into different glasses at the same time, there would probably be water all around the place, and you wouldn’t be sure your glass is full by the end of it. For our cache, this would mean, for instance, having two “threads” try and write a different value for the same key. How do we know this won’t break anything?

7.2 Introducing goroutines

Let’s return for a moment to the basics of what a computer is - a set of devices connected together. We’ve got a processor (CPU), the central unit in charge of performing the actual computing, some memory bars, used to store values used by the computing module, a power source, and many extra parts such as a hard drive to store persistent data, a motherboard to connect everything together, etc. Here, we’ll focus on the processor.

A processor is in charge of running the binary code that was generated by the
compiler. Each program, when launched, is loaded in the memory, and then
executed on the processor. But, wait, does that mean that a processor can
only run a single program at a time? The answer is no, for two reasons. The
first one is that programs run on cores, which are parts executing the binary
code in the processor. In the 2000s, processor manufacturers started shipping
their processors with 2 or more cores. Each core can dedicate its activity to
only one task at a time. If you have more cores, they can run multiple tasks at
the same time independently on a single computer. The other reason is that
our operating system, which is also a program, coordinates different tasks and
programs to run on these cores. The user interface has to run somewhere.
There must be some running piece of code that reads input from the
keyboard. There must be something that communicates with your hard drive.

In order to prevent a computer from freezing because a core would be running a program that wouldn’t end, computer scientists have implemented schedulers for CPUs - a way of “pausing” a program to let another one run. This is how we could have multiple programs run at once before the democratisation of cores.

For many programmers, the fact that several cores were present on a machine
meant that there were more resources that could be used to run a program.
After all, if the load could be balanced on two cores instead of one, maybe
the program could run twice as fast! Let’s douse your hopes right now: in
most cases, this doesn’t work.

How can we use this feature? Pieces of a program that run independently at
the same time are called threads, coroutines, fibers, or in Go, goroutines.

In this section, we’ll see what goroutines are, how to create them, and how to
manage them.

7.2.1 What’s a goroutine?

Many other programming languages use the term “thread” when they describe a task that is launched for parallel execution. In Go, we see things differently. First, we don’t use system “threads” directly. Instead, we use Go’s goroutines. They are managed by the Go runtime layer that runs along your Go program. There are many differences between goroutines and threads, but this isn’t a topic for this book. Instead, let’s remember that a goroutine is a way of launching a piece of code in the background.

a = taskA()
b = taskB() #A

Of course, most of the time, we want our program to execute sequentially - we want the second task to be run after the first. But sometimes, we don’t need the first task to have successfully returned before we run the second one.

Let’s make a real-world comparison: suppose you’re preparing a curry. You have your pot with curry sauce, and your pot with rice. The recipe tells you that each one should cook for 10 minutes. You could first cook the sauce for 10 minutes and, when it’s ready, cook the rice for 10 minutes while the sauce gets cold. You’d spend 20 minutes preparing your dish, when you could have cooked both pots at the same time - provided you had enough burners - reducing that total time to around 10 minutes.

This is what goroutines address. They allow you to run several tasks
simultaneously - provided you can launch them. This last bit is usually not an
issue - goroutines are really light to handle, and unless you start creating
millions, you should be fine.

Now, there is a word that has been used in this section that needs a closer look. We’ve used “in parallel”, “simultaneously”, “in the background” or “concurrently” to represent the idea that a goroutine doesn’t block its caller. Over the years, these words have sometimes been used interchangeably. Fortunately for us, Rob Pike wrote some proverbs for Go (https://go-proverbs.github.io/), and one of them deals with this specific topic. It also helps us get clear definitions of what each of these words means, as we explain right after:
Go proverb

Concurrency is not parallelism.

This proverb highlights that having two (or more) goroutines does not
guarantee any simultaneous execution on parallel cores, but that they will be
executed independently, for better or for worse. Concurrency should focus on
how to write code to support goroutines, while parallelism is what happens
when the code is executed.

7.2.2 How to launch a goroutine

Let’s remember that Go was created with the idea in mind that running goroutines should be simple. The creators of Go made it extremely straightforward: if you want a function to run in the background, you simply prefix its call with go. That’s it. It doesn’t require any specific import or compilation options. Here is a simple example:

Listing 7.6 parallel.go: An example of a program running goroutines

package main

import (
"fmt"
"time"
)

func printEverySecond(msg string) {
    for i := 0; i < 10; i++ {
        fmt.Println(msg)
        time.Sleep(time.Second)
    }
}

func main() {
    // Run two goroutines
    go printEverySecond("Hello") #A
    go printEverySecond("World") #B

    var input string
    fmt.Scanln(&input) #C
}

When the execution reaches the fmt.Scanln line, we have our three routines running at the same time - the main one, and those printing messages every second. But there is a small drawback to using goroutines - they’re launched “in the background” - which means that they finish without letting the caller know! This is what the problem looks like for our previous example, in lines of code:

go cookCurrySauce()
go cookRice()
// how do I know the food is ready?

There are two major ways of dealing with this - the first one is to use
channels, and the second one is to use a library that solves the problem.

7.2.3 Getting notified a goroutine has ended using channels

Go has a specific type called “channels” that it can use for communication
between goroutines.

Go proverb

Don't communicate by sharing memory, share memory by communicating.

A channel is how we communicate in Go. We can send information such as triggers, new data, results, errors, etc. between goroutines through channels.

A channel can be seen as a conduit to which data can be sent - and from
which data can be retrieved. A channel, in Go, is declared for a specific type
of message it will contain. For instance, if a channel were to be used to
convey integers, we’d write the following line:

var c chan int

Channels, like maps and slices, need to be instantiated with the make function. When instantiating them, we can decide whether we want a channel to be buffered - it can hold up to X elements before a send blocks - or unbuffered - it has no buffer at all, so every send blocks until another goroutine receives the value.

c := make(chan int, 10) #A
c := make(chan int) #B

A buffered channel that has reached its capacity becomes “blocking” on writing attempts - as long as no message is read from the channel when it has reached its capacity, any sending to the channel will wait for a spot in the queue, blocking the goroutine.

The syntax to write to and read from a channel uses arrows:

c := make(chan int) #A
c <- 4 #B
i := <- c #C

The power of channels, in Go, comes from the fact that items are read from
the channel in the same chronological order they were sent. In other words,
first in, first out.
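
For instance, here is a tiny standalone fragment (not part of the chapter’s project) showing this first-in, first-out behaviour with a small buffered channel:

c := make(chan string, 3)
c <- "first"
c <- "second"
c <- "third"

fmt.Println(<-c) // prints "first"
fmt.Println(<-c) // prints "second"
fmt.Println(<-c) // prints "third"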

Finally, when no new messages are expected, a channel should be marked as
closed for writing. For this, we use the built-in function close . It is still
possible to read from a channel after it is closed.

c := make(chan int)
c <- 4
close(c) #A
i := <- c
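
As a small aside (our own standalone fragment, not code from the chapter’s project), the second value returned by a receive tells us whether the value is genuine or whether the channel is closed and drained, and ranging over a channel stops by itself once the channel is closed:

c := make(chan int, 2)
c <- 1
c <- 2
close(c)

v, ok := <-c       // v == 1, ok == true: buffered values are still delivered
fmt.Println(v, ok)

for v := range c { // the loop ends once the closed channel has been drained
	fmt.Println(v) // prints 2
}

v, ok = <-c        // v == 0 (the zero value), ok == false: closed and empty
fmt.Println(v, ok)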

Reading from a channel is a blocking call. If there are no messages in the
channel, the execution waits until one arrives. As a result, we can use a
channel to notify that a goroutine is done:

c := make(chan struct{}, 1) #A

go func(doneChan chan<- struct{}) {
	defer func() { #B
		log.Println("done")
		doneChan <- struct{}{}
		close(doneChan) #C
	}()
	// run task
}(c) #D

_ = <-c #E
We introduced two commonly used notions in this example. First, a channel
can be used to notify its listeners. Here, we only want to notify that we’re
done - and for this, we use the Go trick of empty structures: struct{} ,
because empty structures are very light (they have a memory footprint of 0
bytes). We don’t need a convoluted structure that would transport data
around, and so we don’t use one. There’s no point in overdoing it here.

The second interesting part is the signature of the function we run as our
goroutine. A small arrow <- squeezed its way between the words chan and
struct{} . When we declare a function, we can be a bit more specific than
“here’s a channel for you to use”: we can specify in the signature of the
function whether a channel should be used for reading messages from it, for
writing messages to it, or for both. If a function should only read from a
channel of strings, its signature can be written as func read(c <- chan
string) . Visually, the arrow points out of the channel, an indication that
messages will be read from the channel. If we want to specify that we want to
write to a channel in a function, we can use the func write(c chan <-
string) syntax. Visually, the arrow helps us understand that strings will be
sent into the channel.

If we wanted to both read and write from a channel, the syntax is simply func
rw(c chan string) . No arrows this time. However, we discourage passing a
channel for both reading and writing to a function - this suggests the
function’s scope is too big, and we should be able to extract the reading and
the writing into two different functions.
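
Here is a minimal sketch of both signatures side by side; read and write are illustrative names, not functions from the chapter’s code. The compiler rejects any send inside read and any receive inside write:

// read can only receive from the channel.
func read(c <-chan string) {
	for msg := range c {
		fmt.Println(msg)
	}
}

// write can only send to the channel; being the sole writer,
// it is also a natural place to close it.
func write(c chan<- string) {
	c <- "hello"
	close(c)
}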

Finally, a channel should be closed when the job is done, to tell listening
goroutines that no more data will arrive. When a single function is in charge
of writing to a channel, that function should be in charge of closing the
channel. Leaving it open is not a problem if you don’t want to signal listeners
that you’re done.

Let’s have a final look at how we’d write our synchronisation point if we
have to handle several goroutines:

numRoutines := 2
c := make(chan struct{}, numRoutines) #A

go cookRice(c) #B
go cookCurry(c) #B

for i := 0; i < numRoutines; i++ { #C


_ = <- c
}

Bon appétit.

7.2.4 Running goroutines and having a synchronisation point


While using channels works perfectly fine, it always feels like reinventing the
wheel. While this is fine when your road needs specific wheels, it so happens
that Go provides two libraries that replace these channels nicely. One is
present in the standard library, while the other is (still) in the experimental
packages of the Go sources.

Using sync.WaitGroup

Let’s have a look at the sync package, in particular its WaitGroup type. go
doc sync.WaitGroup tells us that WaitGroup s can be used to wait for
goroutines to finish - which is exactly what we’re trying to do here. The
WaitGroup type exposes three methods:

Add(delta int) : Registers a number of new goroutines to wait for. This
can be called several times.
Done() : Used by a goroutine to notify the WaitGroup that it has
completed its task. Should be called in a defer statement.
Wait() : The synchronisation point, called after Add() and after the
goroutines have been launched.

Let’s give these a try with our cooking example:

Listing 7.7 Cooking example using sync.WaitGroup

package main

import (
"fmt"
"sync"
)

func main() {
wg := &sync.WaitGroup{} #A
wg.Add(2) #B

go cookRice(wg) #C
go cookCurry(wg) #C

wg.Wait() #D
}

func cookRice(wg *sync.WaitGroup) {


defer wg.Done() #E
fmt.Println("Cooking rice...")
// prepare rice
}

func cookCurry(wg *sync.WaitGroup) {


defer wg.Done() #E
fmt.Println("Preparing curry sauce...")
// prepare curry
}

In this example, we created a default WaitGroup . Because WaitGroups don’t
expose any fields, they will always be created with the exact same line: wg :=
&sync.WaitGroup{} . Well, not absolutely always - you could name yours
differently - but wg is a common name for a WaitGroup .

The second step is to set the number of goroutines that this WaitGroup will
be in charge of. Here, we made a single call to Add, but it’s perfectly fine to
call Add(1) several times. This is quite common when you have to deal with
loops. We could have written our code this way, which makes it easier to
refactor, if you want bland rice or just the sauce:

wg.Add(1)
go cookRice(wg)

wg.Add(1)
go cookCurry(wg)

Then, the important part is to defer a call to wg.Done() in each function we
call. This is why we need to pass a pointer to the WaitGroup in the signature
of each of these functions. Indeed, if we had passed a copy, each function
would call Done() on a copy of the WaitGroup wg, and the original wg (in
main , in our code) would never be notified. In this case, a call to Wait() will
block forever, and the program will eventually crash with a deadlock error. For
more details on passing values by copy or by reference, see Appendix E.

Finally, we call wg.Wait() , which will return after the same number of
Done() have been called as the sum of all the Add(n) we’ve performed on
this WaitGroup .

WaitGroup is a very commonly used way of synchronising goroutines that
we’ve launched into the wild. Under the hood, in order to keep track of how
many goroutines aren’t completed yet, it uses a field of type atomic.Uint64 .
It’s interesting to know that Go exposes types that can serve for atomic
operations - but we won’t dive into this world here. They work great for
functions that do their thing on their own. However, if anything goes wrong
and an error needs to be captured, the only way is through an error channel
that we pass to each goroutine, and from which we read after the call to Wait :

wg := &sync.WaitGroup{}
wg.Add(2)
errChan := make(chan error, 2)

go cookCurry(wg, errChan)
go cookRice(wg, errChan)

wg.Wait()

// handle the error, if any
select {
case err := <-errChan:
	// deal with the error
	_ = err
default:
	// no error was reported
}

As you can see, we can retrieve some errors from the goroutines with an error
channel. Unfortunately for us, we had to pass a channel around to read errors,
and the whole point of using a WaitGroup was to not have to use channels in
the first place… Well, guess what? There is a library that allows us to handle
errors when we’re using goroutines.
Using golang.org/x/sync/errgroup

errgroup is, as you can see, a package that is not in the standard Go library.
This means that, if we want to use it, we need to start by importing it as a
dependency of our module: go get golang.org/x/sync/errgroup .

Now, let’s have a look at what this package exposes:

go doc golang.org/x/sync/errgroup.

We can find a type Group in there, and four methods - we’ll only cover three
of them here, as they’re the most commonly used. But, first, how do we
create a Group ? Well, we can either use a zero value - eg :=
errgroup.Group{} - or we can use the errgroup.WithContext(ctx) function.
In our simple example, we don’t have contexts and we will go with the first
option, but, in the vast majority of cases, using the second option is
recommended, as you’ll have a variable of type context.Context close by.
We will cover contexts in a later chapter. Internally, an errgroup.Group is a
sync.WaitGroup with extra fields to handle - mostly - context and errors.
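
As a sketch of that second option - assuming context-aware variants of our cooking functions, which the chapter’s code doesn’t define - errgroup.WithContext returns both the group and a context that is cancelled as soon as one of the goroutines returns an error:

g, ctx := errgroup.WithContext(context.Background())

g.Go(func() error {
	return cookRice(ctx) // hypothetical variant that stops early when ctx is cancelled
})
g.Go(func() error {
	return cookCurry(ctx) // hypothetical variant as well
})

if err := g.Wait(); err != nil {
	// handle the first error returned by any of the goroutines
}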

Now, what can our Group do? It has a SetLimit(n) method, which reminds
us of the Add(n) method of the WaitGroup . They are different, though, in that
when we called Add(n) , we needed to have n equal to the number of
goroutines we were launching (and for which we’d later call Done() ).
SetLimit doesn’t work the same way: instead of immediately defining how
many goroutines will be launched (the errgroup.Group tracks this
internally), we specify a maximum number of goroutines allowed to be
running at the same time. Most of the time, you can simply leave this limit
unset - by default, there is no limit - but
sometimes your goroutines make use of a resource that doesn’t scale well
with load - maybe each of your goroutines calls the database, and the
database can only handle 10 calls at a time. In such cases, it’s perfectly valid
to have a hardcoded limit in your Group .

It has a Wait() method, also quite similar to that of the WaitGroup type,
except that it returns an error. This is quite important, as we’ll soon see. And
finally, it has a Go method that takes, as its parameter, a function returning an
error. This Go method is in charge of launching the goroutine - it is also in
charge of letting the Group know when this function finishes.

As we know, many functions written in Go can return an error. In our


example, the cookCurry could, for instance, return an
ErrIngredientNotFound error. All of our functions could be returning an
error, and we don’t want to deal with the problems of retrieving all of them.
The Wait() method of the errgroup.Group type returns an error that
happened in one of the goroutines. It doesn’t return every error that happened
there - it returns only the first non-nil one.

Now we know how to use an errgroup.Group , let’s use it in our cooking


example:

Listing 7.8 Cooking example using errgroup.Group

package main

import "golang.org/x/sync/errgroup"

func main() {
var g errgroup.Group #A
g.SetLimit(2) #B

g.Go(func() error { #C
cookRice()
return nil
})
g.Go(cookCurry) #C

err := g.Wait() #D
if err != nil {
// handle error
}
}

func cookRice() {
// cook rice here
}

func cookCurry() error {


// cook curry here - this may return an error
return nil
}
That’s it! We’ve now seen three ways of controlling the synchronisation of
goroutines. While we can use channels to notify that a function is returning,
it’s quite common to use sync.WaitGroup when we want to launch any
number of simultaneous calls, or to use errgroup.Group when we also want
to retrieve any error from these calls.

7.3 A more thread-safe cache


But let’s get back to the initial question - is our cache thread-safe? Now we
know how to run goroutines, let’s test it! But before we run any test, let’s
keep a very important quote regarding testing by Edsger Dijkstra in mind.

Edsger Dijkstra

Program testing can be used to show the presence of bugs, but never to show
their absence !

First, let’s have a look at the test we currently have and notice one thing: it’s
extremely linear. It validates that, if we do a specific operation before another
one, then the output is predictable. Does it run anything in goroutines? No -
which means it proves absolutely nothing about thread-safety.

Our cache could possibly be used by several goroutines during the execution
of a program - for instance, several incoming requests could be processed at
the same time, causing the cache to be updated in a very short window. Let’s
start by writing a test that simulates these “simultaneous” calls.

Using goroutines

For this, we’ll use the sync.WaitGroup - we need to run goroutines and we
want to make sure they’ve all finished before we can return from the test. In
order to make things “problematic”, let’s have each of the goroutines write a
different value in the same cache, every time for the same key. Here is what
we write:

Listing 7.9 Testing the cache with goroutines


func TestCache_Parallel_goroutines(t *testing.T) {
c := cache.New[int, string]() #A

const parallelTasks = 10 #B
wg := sync.WaitGroup{}
wg.Add(parallelTasks) #B

for i := 0; i < parallelTasks; i++ {


go func(j int) { #C
defer wg.Done()
c.Upsert(4, fmt.Sprint(j)) #D
}(i)
}

wg.Wait() #E
}

In this test, we launch 10 goroutines, and each one is in charge of writing a


different value for the same key in our cache.

Using t.Parallel()

Alternatively, we can make use of the testing package to execute parallel
tests. This feature is particularly useful in two scenarios: when you want to
reduce the time your tests will take, because you know some steps are
independent and can be run simultaneously, and when you want to make sure
you don’t have data races.

The gist is as follows: if a test function contains the line t.Parallel() , the
Go test framework will run it along with other functions in the same scope
that also have the t.Parallel() line. In other words, the execution of this
function won’t be blocking for the execution of other test functions.

Let’s write a test using the t.Parallel() feature. In our test, we want the
same index of our cache to be written to by two different calls, with different
values in each case.

Listing 7.10 Testing the cache using t.Parallel()

func TestCache_Parallel(t *testing.T) {


c := cache.New[int, string]() #A
t.Run("write six", func(t *testing.T) {
t.Parallel() #B
c.Upsert(6, "six")
})

t.Run("write kuus", func(t *testing.T) {


t.Parallel() #C
c.Upsert(6, "kuus")
})
}

Now let’s run it and see what happens: go test . - everything seems fine.
However, we’re cheating here - we’ve written this test because we know
something should go wrong. We know that upserting two different values “at
the same time” is precisely a data race, and we want it to be caught. But how
can we achieve this?

7.3.1 Using go test -race .

The go test command comes with several flags; here’s how to find them. go
help test returns a short list - namely, -args , -c , -exec , -json and -o - but
it also informs us that the flags from the build command are inherited by the
test command. Let’s have a look at the output of go help build , then. One
of the first flags provided is -race , which “enables race detection” - precisely
what we’re looking for.

Let’s run our test again, but this time with the -race flag: go test -race .
We get the following output:

$ go test --trimpath -race .


==================
WARNING: DATA RACE
Write at 0x00c0000a53e0 by goroutine 13:
runtime.mapassign_fast64()
runtime/map_fast64.go:93 +0x0
learngo-pockets/genericcache.(*Cache[...]).Upsert()
learngo-pockets/genericcache/cache.go:28 +0x124
learngo-pockets/genericcache_test.TestCache_Parallel.func1()
learngo-pockets/genericcache/cache_test.go:73 +0x97
learngo-pockets/genericcache_test.TestCache_Parallel.func2()
learngo-pockets/genericcache/cache_test.go:74 +0x47
Previous write at 0x00c0000a53e0 by goroutine 20:
runtime.mapassign_fast64()
runtime/map_fast64.go:93 +0x0
learngo-pockets/genericcache.(*Cache[...]).Upsert()
learngo-pockets/genericcache/cache.go:28 +0x124
learngo-pockets/genericcache_test.TestCache_Parallel.func1()
learngo-pockets/genericcache/cache_test.go:73 +0x97
learngo-pockets/genericcache_test.TestCache_Parallel.func2()
learngo-pockets/genericcache/cache_test.go:74 +0x47

As you can see, Go was able to detect that we were writing at the same index
twice, at the same time. This constitutes a data race, and this is what would
make our cache not thread-safe.

You might notice that we’ve avoided describing the --trimpath flag here. The
default behaviour of Go’s test framework is to output the absolute path of
failing tests (and the stack that leads there). Using --trimpath , we tell Go to
only output the path from the root of our module. This makes the output
clearer when sharing it.

We can now answer our colleague’s remark: our implementation of the cache
is not thread-safe. This is a severe flaw in design and security. We need to
work on it.

7.3.2 Add a mutex


Go proverb

Channels orchestrate; mutexes serialize.

When it comes to restricting synchronised access to a resource, computer
scientists - namely, Edsger Dijkstra - introduced the notion of semaphores. A
semaphore is a counter that keeps track of the number of threads accessing a
given resource. Semaphores are used to allow a specific number of threads to
simultaneously access a variable, a connection, a socket… We can push the
semaphore to the extreme and allow up to exactly one thread to access a
resource. A semaphore that ensures MUTual EXclusion to a resource is
suitably named “mutex”. Go allows us to use mutexes through the type
Mutex , defined in the standard library’s sync package.
The sync.Mutex type

Here’s how to declare a simple mutex in Go:

var mu sync.Mutex

Before we dig into how to use our mutex, it is important to remember that a
mutex is always used to protect the access to a resource. Place it in your code
as close as possible to the resource the mutex protects.

Let’s have a look at go doc sync.Mutex . We see there that a Mutex exposes
Lock() , Unlock() , and TryLock() . While the first two methods are quite
explicit, one might be tempted to use TryLock . A quick glance at its
documentation, through go doc sync.Mutex.TryLock , tells us that if we
resort to using this method, we have a deeper problem.

We can lock our mutex when we want a piece of code to have exclusive
access to the resource, and unlock it afterwards. We’re almost ready to use
our mutex - there is a final line of the documentation that is worth engraving:
“a Mutex must not be copied after first use”. Copying a mutex by passing it as
a parameter to a function is a mistake that usually leads to unexpected
behaviours when locking or unlocking the mutex. They and the structures
containing them need to be passed as pointers.

The zero value of a Mutex is an unlocked mutex. Using mutexes requires paying
special attention to the structure of the code. Indeed, it is a “common” source
of error to forget to unlock a mutex because the function exits early. As a best
practice, we recommend always defer -ring the Unlock() call right after
calling Lock() . There will be a few cases when this isn’t exactly what you
need, but these will be the exceptions to the general rule of deferring
unlocking.

Let’s return to the code and add a mutex to our cache. First, we’ll add a
mutex next to the resource we want to protect - the data map, within the
Cache structure.

Listing 7.11 cache.go : cache with a mutex


type Cache[K comparable, V any] struct {
mu sync.Mutex
data map[K]V
}

Each method on the Cache type will ensure only a single goroutine can enter
it at a time by having the same two lines:

c.mu.Lock()
defer c.mu.Unlock()

We can now re-run go test -race . : we should no longer see any data race
detected. The mutex seems to have done the job. However, using mutexes
isn’t free - there is a cost in execution time every time we lock (and unlock).
For this reason, it is worth checking we weren’t overzealous in our usage of
mutexes. In our example, while we are ensuring that no two goroutines
update the contents of the cache “simultaneously”, we’re also preventing two
goroutines from reading from our cache, which is not a conflicting operation.

The sync.RWMutex type

In order to address this specific need, the standard library exposes another
mutex: the RWMutex, a read-write mutex, also in the sync package. This
mutex is very similar to the basic Mutex - it also exposes Lock() and
Unlock() - but on top of that, it also has RLock() and RUnlock() methods
that are used when we only want to use the mutex to read data. Any number
of goroutines can call RLock() without blocking each other, but as soon as
Lock() is called, no goroutine can access the resource - neither for reading,
nor for writing.
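
As a minimal sketch - assuming the Cache from listing 7.11, with the mu field declared as a sync.RWMutex instead of a sync.Mutex - the Read method would then only take the read lock:

func (c *Cache[K, V]) Read(key K) (V, bool) {
	c.mu.RLock() // many readers can hold this lock at the same time
	defer c.mu.RUnlock()

	value, found := c.data[key]
	return value, found
}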

We can update our code - the mutex in the cache should be a RWMutex . The
Read method should only call RLock and RUnlock , as it doesn’t modify the
contents of the cache.Upsert and Delete will still need a regular Lock and
Unlock call. As a general rule, sync.Mutex is the way to go, and
sync.RWMutex should only be considered if you are facing performance
issues - and even then, caution should be the rule. Because of its richer
interface, accidentally calling RLock instead of Lock will have a disastrous
impact on the code - and the compiler won’t tell you. Don’t blindly believe
that RWMutex is faster than Mutex - instead, benchmark it for your specific
use case, and use the appropriate one.
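
If you want to measure it, a parallel benchmark (see appendix D) is a reasonable starting point. This is only a sketch with illustrative names - run it once with each mutex type and compare the results:

func BenchmarkCacheRead(b *testing.B) {
	c := cache.New[int, string]()
	c.Upsert(1, "one")

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			c.Read(1) // concurrent reads are where RWMutex could shine
		}
	})
}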

Running go test -race . once more should give us confidence we didn’t


add a data race. We now have a fully operational, thread-safe, generic cache!

7.4 Possible improvements


Even though our cache looks perfect, there are a couple of optimisations we
could add. The first one we present here represents the idea that no value is
frozen in time forever. After all, “the last person to have walked on the
Moon” could very much not be Gene Cernan in the near future. Sometimes,
it’s best to ignore outdated values, and the cache should tell us whether a
value has reached its expiration date. The second optimisation we’ll present is
about handling the cache’s memory footprint. Indeed, if the user doesn’t call
Delete() , the cache will only grow, storing more and more items. Not having
a limit on the size of the cache is dangerous - it could end up using too much
memory, slowing down the application.

7.4.1 Add TTL

As we mentioned previously, values retrieved in the past can become


outdated. One way of ensuring our values are never too ancient is to give
each one of them a “best before” date. Once this timestamp has passed, we
shouldn’t trust the value any longer. In computer science, this timestamp is
called a “time to live” - or TTL. In order to implement it in our cache, we
need to attach a “best before” to each of our values.

Add the timestamp

Thanks to generics, we can add an expiration date to any value by defining a


new type - an entry with timeout:

type entryWithTimeout[V any] struct {


value V
expires time.Time // After that time, the value is useless.
}
Our cache is in charge of setting the expires value when we upsert an item in
our cache. We’ll provide a TTL to our cache as a field. This TTL could be a
hardcoded parameter of the cache - but this isn’t very user-friendly. When
writing a library, you don’t know what use will be made of it. Our cache can
be used for varying values such as “most trending posts on social media”, or
for stable values such as the list of capitals of countries of the world. It’s best
to expose this TTL as a mandatory parameter of our New function.

Listing 7.12 cache.go : creating a cache with a TTL

type Cache[K comparable, V any] struct {


ttl time.Duration

mu sync.Mutex
data map[K]entryWithTimeout[V]
}

func New[K comparable, V any](ttl time.Duration) Cache[K, V] {


return Cache[K, V]{
ttl: ttl,
data: make(map[K]entryWithTimeout[V]),
}
}

Update the methods

Let’s have a thought about what will happen in our Read() , Upsert() , and
Delete() methods. The easiest one is Delete : there’s nothing to change
there. A key can be removed, regardless of whether the associated value has
reached its expiration date. Then, let’s have a look at Upsert . We used to
either insert the data, or override the value. Well, things aren’t very different
now - upon insertion, we’ll add the data with the correct expires value, and
upon updating, we’ll not only override the value, but also its expires field.

Listing 7.13 cache.go : Upsert with a TTL

func (c *Cache[K, V]) Upsert(key K, value V) {


c.mu.Lock()
defer c.mu.Unlock()

c.data[key] = entryWithTimeout[V]{
value: value,
expires: time.Now().Add(c.ttl), #A
}
}

Finally, we’re left with the trickier Read() method. This is where we’ll check
whether an entry is no longer valid. We need to add a second check on top of
the present one that verifies our cache has a value for the requested key. If the
value is still valid, we can return it. But what if it’s not? In this case, in our
implementation, we decided that the user doesn’t need to know why the value
isn’t in the cache - after all, what matters is that it couldn’t be found.

Listing 7.14 cache.go: Read with a TTL

func (c *Cache[K, V]) Read(key K) (V, bool) {


c.mu.Lock()
defer c.mu.Unlock()

var zeroV V #A

e, ok := c.data[key]

switch {
case !ok:
return zeroV, false
case e.expires.Before(time.Now()):
// The value has expired.
delete(c.data, key)
return zeroV, false
default:
return e.value, true
}
}

By implementation, our Read() method now has to alter the contents of the
map. As a result, we can’t rely on a RWMutex as we did in section 7.3. Instead,
we use a regular sync.Mutex . This will have a small impact on performance -
two Read() calls can no longer be executed simultaneously.

In the implementation of our Read() method, we start by defining a non-
initialised value of type V. This is very common in generic functions that
return a value of a generic type - indeed, we can’t write return V{} , because
that composite literal syntax isn’t valid for every possible type argument
(think of V being an int ).
Now that we’ve written the code, we should test it. Our scenario, here, is to
create a cache with a rather small TTL, to insert an item, and then to wait
more than our cache’s TTL. Checking immediately should find the item, but
checking once the TTL has passed should report it as missing.

Listing 7.15 cache_test.go: Testing Read with TTL

func TestCache_TTL(t *testing.T) {


t.Parallel()

c := cache.New[string, string](5, time.Millisecond*100)


c.Upsert("Norwegian", "Blue")

// Check the item is there.


got, found := c.Read("Norwegian")
assert.True(t, found)
assert.Equal(t, "Blue", got)

time.Sleep(time.Millisecond * 200)

got, found = c.Read("Norwegian")

assert.False(t, found)
assert.Equal(t, "", got)
}

We start our test with a call to t.Parallel() . Indeed, we’re fine running this
test along with others. We recommend using this in every “light” test. If a test
requires a lot of resources - CPU, RAM, disk, or network - then you might not
want to have it run with others. In our case, we’re absolutely fine.

Schrödinger’s conundrum

You might have noticed that we discard expired items only when we try to
access them via Read() . This means that items could expire way before we
look at them, unbeknownst to us. The side effect is that our cache might be
using chunks of memory for useless data. How do we deal with that?

Well, bluntly put, we decided not to. If we were to implement “something


that regularly checks each item and gets rid of it if we know it’s no longer
usable”, we’d basically be writing a garbage collector for our cache. We’d
need to start a goroutine in New() , and that goroutine’s only task would be to
endlessly scan the map and delete items that have reached their TTL. Instead
of implementing this, we have decided to address a slightly related issue -
controlling the size of our cache.

7.4.2 Add a maximum number of items in the cache

In order to prevent too many items from being added to the cache, we will set
a limit to our cache’s size. This will be a property of the cache, an unexposed
unsigned integer keeping track of how many items were added and removed.

Architectural decisions

We’ll need to make a decision when we try to add a new value into the cache
and the maximum number of items is reached.

In our implementation, we decided to allow this operation - and discard


another entry. There are lots of interesting choices to determine which entry
to remove from the map in this case - eligible candidates could be “the oldest
entry in the cache”, “the most recent entry in the cache”, “the least read entry
in the cache”, “the entry that hasn’t been read for the longest time”, etc. Each
of these implementations requires storing extra information in our cache. The
choice of which one to use is highly dependent on the information stored in
the cache. Here, we’ll decide to remove the entry that is the oldest in the
cache, and we consider that overriding a value should reset its timestamp - as
it does for the TTL.

For this, we need to keep track of the order in which items were inserted.
Let’s look at which options Go offers to implement this:

Maybe use a channel? This is the most intuitive implementation of a


“first in, first out” list in Go. When we Upsert a new entry, we register
the key in the channel. The first item in the channel would be the oldest.
However, this wouldn’t work, because we don’t cover the cases where
the user calls Delete , or calls Upsert on an existing key. Indeed, in these two cases,
we’d have to remove an item from “somewhere” in our channel, or move it to its tail.
Since channels in Go don’t support suppression of an element, this
implementation is not good enough.
Maybe we could use a slice? After all, slices can handle suppression of
some element in the middle of the slice without too much effort. When
we Upsert an element that wasn’t already present, we add its key to the slice;
when we override it with another Upsert , we can move it to the end;
and finally Delete removes it from the slice.
Other options are available - using a binary search tree, for instance - but
we’ll go with the slice, as it covers most of our needs.

We already know, since we want our cache to hold up to a maximum number


of items, that we can give an upper bound to our slice’s capacity. We’ll
initialise it with the following syntax:

chronologicalKeys := make([]K, 0, maxSize)

In order to check whether we’ve reached the maximum number of items, we
can either store the maxSize value as a field of our cache, or we can use the
cap built-in function on the chronologicalKeys slice. In this book, we
decided to go with the former for the sake of clarity, but this adds the cost of
storing this value in our structure.

if len(c.data) == maxSize
if len(c.data) == cap(c.chronologicalKeys)

This last parameter of make sets the capacity of our slice. When
an element is appended to a slice whose length is equal to its capacity,
the whole slice needs to be reallocated elsewhere in memory. Setting the
correct capacity on our slice prevents these reallocations.
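
A quick standalone illustration of that behaviour, with concrete types rather than the cache’s generic ones:

keys := make([]int, 0, 3)
fmt.Println(len(keys), cap(keys)) // 0 3

keys = append(keys, 1, 2, 3)      // still within capacity: same backing array
fmt.Println(len(keys), cap(keys)) // 3 3

keys = append(keys, 4)            // over capacity: a bigger array is allocated and the data copied
fmt.Println(len(keys), cap(keys)) // 4 and a larger capacity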

Now, just as we did for the TTL, let’s have a look at the impact of having this
slice in our cache for each of our exposed functions.

Implementation

New() should take another parameter: the maximum size of the cache. Having
a default value doesn't really make sense here - a cache of 10 integers
wouldn’t be the same size as a cache of 10 extremely complex structures with
lots of fields. The package reflect could help us set a maximum memory
size to our cache based on the memory footprint of a single item, but this
would be overkill. Instead, let’s have the user specify a size they think is
good enough. Then, any memory consideration is left to them.

Listing 7.16 cache.go : New with a maximum size

type Cache[K comparable, V any] struct {


ttl time.Duration

mu sync.Mutex
data map[K]entryWithTimeout[V]

maxSize int
chronologicalKeys []K
}

// New creates a new Cache with an initialised data.


func New[K comparable, V any](maxSize int, ttl time.Duration) Cache[K, V] { #A
return Cache[K, V]{
ttl: ttl,
data: make(map[K]entryWithTimeout[V]),
maxSize: maxSize,
chronologicalKeys: make([]K, 0, maxSize),
}
}

Next, we notice that adding an entry to our cache will no longer be as simple
as adding a key-value pair to a map: indeed, we now need to update the
chronologicalKeys slice - either by adding, removing, or moving one of its
elements, every time we update the map - respectively by inserting, deleting,
or updating one item.

As a result, we refactor our code to avoid duplicating logic. We need a small
function that adds a key-value pair to our cache, and one that removes a key
from it. Both functions should be in charge of updating both our map and our
slice. Let’s start with these - and it’s also a good opportunity to use a feature
added in Go 1.21 - the slices package. This package is a helper for most
common operations on slices. Here, we’ll use it to delete all items from the
slice that have a specific value, with its DeleteFunc function. This function
returns a slice that has dropped all items for which the provided callback
returned true (we have to use the returned slice, as the original variable isn’t
updated for us).
Listing 7.17 cache.go: Utility functions to replace the map calls

// addKeyValue inserts a key and its value into the cache.


func (c *Cache[K, V]) addKeyValue(key K, value V) {
c.data[key] = entryWithTimeout[V]{
value: value,
expires: time.Now().Add(c.ttl),
}
c.chronologicalKeys = append(c.chronologicalKeys, key)
}

// deleteKeyValue removes a key and its associated value from the cache.
func (c *Cache[K, V]) deleteKeyValue(key K) {
c.chronologicalKeys = slices.DeleteFunc(c.chronologicalKeys, func(k K) bool { return k == key })
delete(c.data, key)
}

Now that we’ve got these helping functions, we can update the code in
Read() first - all we have to do is update how we remove an entry when it
has reached its TTL:

Listing 7.18 cache.go: Read with the new helper functions

func (c *Cache[K, V]) Read(key K) (V, bool) {


...
case e.expires.Before(time.Now()):
// The value has expired.
c.deleteKeyValue(key) #A
return zeroV, false
...

Listing 7.19 cache.go: Delete with the new helper functions

func (c *Cache[K, V]) Delete(key K) {


// Lock the deletion on the map
c.mu.Lock()
defer c.mu.Unlock()

c.deleteKeyValue(key)
}

And finally, we have to update the Upsert() function. This one is slightly
trickier, as, this time, this is where the core of the feature we want to
implement resides - we want to limit the number of items that are stored in
our cache at a given time. Since this number only grows when we upsert
items, it makes sense that this function will be the most affected one. Let’s
have a look at all possibilities when the user callsUpsert() :

The cache already has a value for that key: in this case, we want to reset
the whole entry with the new value - and the new TTL. We need to also
update the position of the key in our chronological slice. We can achieve
this by deleting the old pair and adding the new one.
The cache doesn’t have a value for this key: in this case, if we’ve not
reached the maximum capacity of our cache, then we can simply insert
the new pair. However, if we have reached the maximum capacity, we
need to clear some space for the new entry - this means discarding the
item that has been there for the longest. This item is at the beginning of
our slice of keys.

Now that we know how our method should behave, let’s implement it:

Listing 7.20 cache.go: Upsert with the new helper functions

func (c *Cache[K, V]) Upsert(key K, value V) {


c.mu.Lock()
defer c.mu.Unlock()

_, alreadyPresent := c.data[key]
switch {
case alreadyPresent:
c.deleteKeyValue(key) #A
case len(c.data) == c.maxSize: #B
c.deleteKeyValue(c.chronologicalKeys[0])
}

c.addKeyValue(key, value) #C
}

There is one last chance for optimisation here. When we need to replace an
existing entry, but the cache is at maximum capacity, we know we don’t need
to discard the oldest entry - we can discard the value we’re about to replace to
create enough room for the new entry. Go’s switch/case statement has a
very specific behaviour that we used in our implementation: when several
case statements are valid, only the first eligible one will be executed. That’s
an implicit rule that most people know without knowing it - it also applies to
default : if we enter a case statement, we won’t execute thedefault block.
We used that behaviour here to delete only the pair we need to update.
Should you ever need to enter more than one case statement, you could
consider using the keyword fallthrough . But, in our opinion, it would
probably be clearer to write a list of if statements, in that case.

Test it

Our cache is no longer a plain map. We’ve added logic with our list of items
by age, and this new logic is invisible to the end-user. As a result, it’s worth
adding a few internal tests to just make sure we’re doing everything right.
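
For instance, a white-box test living in the cache package itself (in a file such as cache_internal_test.go, declared as package cache) can inspect the chronologicalKeys slice directly. This is only a sketch following the listings above, not code from the book’s repository:

func TestUpsert_keepsKeysInChronologicalOrder(t *testing.T) {
	t.Parallel()

	c := New[string, int](3, time.Minute)

	c.Upsert("a", 1)
	c.Upsert("b", 2)
	c.Upsert("a", 3) // updating "a" should move it to the end of the queue

	want := []string{"b", "a"}
	if !slices.Equal(c.chronologicalKeys, want) {
		t.Errorf("chronologicalKeys = %v, want %v", c.chronologicalKeys, want)
	}
}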

Finally, let’s think of an end-user test scenario for our new feature. We can
validate it by adding items to our cache beyond its limit. It would also make
sense to check that updated items have their insertion timestamp updated. For
this, we’ll create a cache with a small maximum capacity, insert items to the
brim, upsert the oldest, and then insert a new key. We should then be able to
retrieve the upserted value, and we should no longer be able to retrieve the
second value we added in our cache.

Listing 7.21 cache_test.go: Test the maximum capacity of the Cache

// TestCache_MaxSize tests the maximum capacity feature of a cache.


// It checks that update items are properly requeued as "new" items,
// and that we make room by removing the most ancient item for the new ones.
func TestCache_MaxSize(t *testing.T) {
t.Parallel() #A

// Give it a TTL long enough to survive this test


c := cache.New[int, int](3, time.Minute) #B

c.Upsert(1, 1)
c.Upsert(2, 2)
c.Upsert(3, 3)

got, found := c.Read(1)


assert.True(t, found)
assert.Equal(t, 1, got)

// Update 1, which will no longer make it the oldest


c.Upsert(1, 10)
// Adding a fourth element will discard the oldest - 2 in this case.
c.Upsert(4, 4)

// Trying to retrieve an element that should've been discarded by now.


got, found = c.Read(2)
assert.False(t, found)
assert.Equal(t, 0, got)
}

Congratulations! We’ve now written a generic library that we can share with
other developers. We started with a naive implementation that covered our
needs, and then we strengthened it by adding thread-safety on it. Even though
there was quite a lot of theory presented in this chapter, we managed to cover
practical requirements for a cache.

7.5 Common mistakes


In 100 Go mistakes and how to avoid them, Teiva Harsanyi dedicates 20 of
the hundred teachings to concurrency and parallelism. We highly recommend
the book to dive deeper into Go. In the meantime, here is our own, shorter,
list of common mistakes to avoid.

When to use channels in a concurrency situation

Channels are something very specific to Go, which means developers new to
the language do not get them as easily as the rest of the language. They are,
arguably, the one feature that requires a learning curve and some practice in
the entire language.

Because of this, because they can be tricky at first, do not use them if you
don’t need them. They might seem shiny, your situation might look like a
good place to use them, but think twice. Channels should be used when you
need to communicate in or out of a goroutine.

Concurrency impact of a workload type

Given a concurrency situation, you first need to determine the type of


workload: it will not lead to the same solution. Loads can be CPU, memory
or IO-bound. Running a merge sort algorithm is typically high on CPU,
whereas making REST API calls is a lot of input/output.

Take for example a program that counts the number of lines in a bunch of
files. Opening and closing files would be the bottleneck here: it is IO-bound.
The number of goroutines that can work in parallel will be determined by the
operating system’s limit, or the rules of your server if the files are remote -
you don’t want to crash it or get banned by hitting it too much.

On the other hand, if you are encoding a video on a single machine, a task
that is typically high on CPU, then you need to look at the architecture of
your machine. The GOMAXPROCS environment variable is an interesting hint.
Its default value is the number of cores of your CPU. It represents the
maximum number of goroutines that could actually run simultaneously. Any
extra goroutine will have to share a CPU with existing goroutines. It can (too
often) happen that parallelising the work actually makes things worse,
because you have already hit the max load of your CPU.
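
A quick way to inspect those two numbers on your own machine (a standalone snippet, not part of the chapter’s projects):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("CPU cores: ", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // an argument of 0 only reads the current value
}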

The size of a buffered channel can be a good thing to benchmark in your


performance tests, which should be executed on a machine with a similar
architecture as that where your code will be executed.

Finish your routines

Goroutines are easy to start, but don’t let them leak. As we have seen, a
program should only exit when all of its child goroutines are finished. Once
you’re done writing to your channels, close them so that their readers know
when to stop listening.

Explore the sync package for tools to make your life easier. Be careful,
though: most of the types there should not be copied.

7.6 Summary
A cache is a key-value storage facility. They are commonly used when
getting the value associated with a key is costly (timewise or in the
amount of resources) and when getting a previously retrieved or
computed value is OK.
When writing a library, one question that should always be raised is “is
this library thread-safe ?”. The answer is either “yes, and I know why”,
or “No, it’s not”. There is no middle ground - the worst case scenario is
usually also the most dangerous.
Go implements genericity with, well, generics. Structures, variables, and
functions can be declared using generics.
A constraint is a requirement for a generic type. Common constraints are
any (quite explicit), comparable , which allows the use of == between
two values, or golang.org/x/exp/constraints.Ordered , which
allows the use of > , < , >= , <= between two values.
Constraints are passed in square brackets after the name of the generic
entity:
type fieldWithName[T any] struct { value T; name string }
var hashIndex[T myConstraint] map[uint]T
func sortSlice[T constraints.Ordered](t []T) ([]T, error)
Type arguments can be omitted when calling a generic function if the
compiler can infer which type it should be using:
sortSlice([]int{1,4,3,2}) .
Go uses goroutines for concurrency. Goroutines are similar to what
other languages usually call threads, except that they’re not. Threads
live at an OS-level, while goroutines live farther from your silicon - they
exist in the runtime environment of Go.
In Go, goroutines are launched with the keywordgo : go do() runs the
do function in a new goroutine.
Channels are used to communicate data between different goroutines. When
passing a channel to a function, make it explicit in the signature that the
function will either read from the channel, with the syntax func f(c <-
chan string) , or that it will write to the channel, with the syntax func
f(c chan <- string) . If you need to both read from and write to a
channel in a single function, there is probably a design flaw.
Two types are commonly used when we need to synchronise goroutines:
sync.WaitGroup and errgroup.Group . When using sync.WaitGroup ,
start by calling Add(n) with the number of goroutines that will be
executed. Each goroutine is in charge of callingDone . The
synchronisation is achieved by calling Wait() . When using
errgroup.Group , start by setting a maximum number of parallel
goroutines with SetLimit(n) . Launch each goroutine with a call to
Go(...) . The synchronisation is achieved with a call toWait() , which
returns an error if one of the goroutines returns an error.
The choice of sync.WaitGroup or errgroup.Group is often driven by
the necessity to check for errors in at least one goroutine: use
sync.WaitGroup when errors don’t need a specific treatment. Use
errgroup.Group if you want to handle errors.
Mutexes are used whenever we want to protect a variable from
concurrent writing - or reading. In Go, we can create a mutex variable
by using the sync.Mutex type: var myMutex sync.Mutex . A mutex
should never be exposed - instead, use it in exposed functions. A mutex
should always be written close to the variable or field it protects.
You can call mu.Lock() on a mutex, but we highly recommend
immediately following this with defer mu.Unlock() . Debugging locked
mutexes is a pain.
Use t.Parallel() in your tests to let the framework know that a test is
not blocking for the execution of other tests.
Use the -race flag when testing to try and detect data races - but
remember, the failure to detect a data race doesn’t mean there are no
data races.
Use the -trimpath flag when testing to only output paths relative to
the root of the module.
A switch/case statement will only execute the first valid case - any
subsequent case will be ignored.
8 Gordle as a service
This chapter covers

Creating and running an HTTP server that listens to messages on a given


port
Listening to endpoints with different verbs (GET, POST)
Building a response with a status code
Decoding different sources of data: path and query parameters, bodies
and headers
Testing using regular expressions

In 1962, J. C. R. Licklider mentioned the possibility of having computers
communicate with one another over a network. Since then, computer science
has travelled a long way, first through this “intergalactic computer network”,
then ARPANet, and, today, the Internet. Networks are now used on a daily
basis - when you pick up your phone to check the weather, the news, or even
the time. The possibility of using a server from a remote location was
paramount when, in 2020, the whole planet went into lockdown.

A server, in the end, is really just a machine that listens to communications


on a given set of ports and is able to answer messages that it receives. In this
chapter, we’ll implement such a server. In order to make things interesting,
our server will have an application programming interface - usually referred to
as an API - that will allow a user to play a game of Gordle. This will allow us to
focus the implementation of this chapter on the server side rather than on the
algorithmic aspect, which has already been covered in Chapter 5.

In the first part of this chapter, we’ll create the REST API, and test it with
simple tools such as an internet browser or a command. In the second part,
we’ll integrate the game of Gordle - which will require a few updates to
comply with how we want to use it in the server. Finally, we’ll mention a few
security tips.

Disclaimer: further steps, such as containerization and deployment, are not
the topic of this chapter.

Requirements:

Play Gordle on a web service. Run a service that exposes at least the
following endpoints:

NewGame creates a session and returns a game ID. This will be used for
counting the number of attempts.
GetStatus returns, well, a game’s status: how many guesses are still
allowed, what were the previous hints, etc.
Guess takes a word as a parameter and returns the feedback and the
status of the game.

We will want to include, in a second step, some tracking of the players’


sessions. Many online resources use some kind of user identifier - most of the
time, authentication. We’ll see how we can convey this information here.

Non-requirements: monitoring, logging strategy, scalability (yet)

8.1 Empty shell for the new service


A web service can be thought of as a daemon - it is always running, waiting
for queries that are sent to a port of the computer hosting it. In this chapter,
we will start our development from the outside in: we will first create a
service that listens to a port but does nothing, then add empty endpoints, one
per feature, and finally, we will add their logic. This strategy is best when
some other team members, e.g. front-end developers or other teams
altogether, are waiting for your work. It is possible to return a static mocked
response that other people can use while you develop and where they can
give you feedback. Having a service that returns something, even when it’s
constant or irrelevant, is often enough to help other people design, develop,
test, or deploy their solutions.

Before we begin writing a few lines of code, it’s important to introduce some
vocabulary and understand a few theoretical notions.
8.1.1 Server, service, web service, endpoints, and HTTP
handlers

A server can be thought of as a computer, running somewhere. We usually


keep servers running, as they host services: applications that expose an API
(Application Programming Interface). Usually, we want services to be
permanently running - a stopped service is of no use. A web service is a
particular kind of service that makes use of the web protocol to receive and
send communications with the outer world.

It is important to keep in mind, for later steps of this chapter, that a service
isn’t supposed to end its execution, in other words, it is running until the user
decides to stop it.

Endpoints are the access points for the exterior into a service. For web
services, endpoints are mapped to specific URLs as we’ll soon see. A web
service, behind the scenes, will use an HTTP handler to deal with requests.
HTTP handlers receive requests and generate responses.

You should be aware that the terms web service and endpoint are used in
different contexts with slightly different definitions, but we will use the ones
above in this chapter.

8.1.2 Let’s code

Let’s start by creating the module for our service. Keeping in mind that we
will be running Gordle in an HTTP server, we can come up with a relevant
name. Remember that if you are pushing your code to a code repository, it is
always better to declare the full path of your repository as a module.

$ go mod init learngo/httpgordle

Once we’ve created the go.mod file, we can start writing the main function. It
will be responsible for creating a server and running it. For this, we’ll need
some help from the net/http package that we’ve already seen in Chapter 6.
Its documentation is quite long, but if we read only the first lines of go doc
net/http , we see that Package http provides HTTP client and server
implementations. For now, we are only interested in the server side.

A server listens to a specific port, so find any free port on your host machine.
The default port for HTTP is 80, but for development purposes, we prefer to
use another, such as 8000 or 8080, as 80 will very probably be used on your
machine by something else.

A function in the net/http package seems to achieve exactly our need -


ListenAndServe .

Listing 8.1 main.go: Create the server

package main

import "net/http"

func main() {
// Start the server.
err := http.ListenAndServe(":8080", nil) #A
if err != nil {
panic(err) #B
}
}

Right, we hope this wasn’t too frightening. We’ve given ListenAndServe


two parameters. The first one is the address we want our web service to be
listening to - localhost:8080 (the localhost can be omitted) is a popular
choice - and the second parameter is, well, the topic of the next pages: it’s the
handler that can deal with requests. If you are on Windows, consider using
another port than 8080 to avoid the Windows Firewall popup. For now, we
can keep it nil, but that’s where the logic will be implemented. We said
panicking was OK in the body of the main function — actually, yes and no.
panic dumps the whole stack trace, which could be confusing for users. It
would be more polite to dump the error and exit with a proper code like the
snippet below, the downside being not having the guarantee that all the
defer
calls have been executed.

if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
Let’s run it!

$ go run .

You might notice that the execution hangs. That’s because the
ListenAndServe function never returns - after all, that’s exactly how we want
it to behave: the service is running!

How do we test it manually? There are many tools that can be used to test an
HTTP server, we’ll mention four of them:

An Internet browser: they are designed to send HTTP requests. It will be


a bit tricky to set some parameters such as the body of the request, its
headers, or the verb we want to use, but it should do the trick for this
first implementation.
Postman: this tool has a graphical user interface that allows for finer
usage than a Web browser when it comes to sending formatted messages
over the network.
curl : this is a command-line tool that exposes everything we want to
use. This is our preferred option: not only is it simpler to share the
execution command line in a book, but also using command-line tools
rather than clickable interfaces will make testing a lot more automatable.
curl is shipped with every version of Linux or Mac OS, and with
Windows 10 and above.
Write a program in Go that creates a client to speak with our server.
We’ll cover this section in a later chapter.

The nil handler we provided can’t really do much, but still, we can see it in
action. If you open your favourite web browser and enter the URL
localhost:8080 , you’ll see a response from the default HTTP handler - a
404 message.

Should you want to use curl , here is the same request as a command line: it
will also return a 404 message. We’ll make more extensive use ofcurl in our
next tests.

$ curl http://localhost:8080

We have successfully implemented our first HTTP server. Before we move


on to the next section, we need to kill our server. In the shell where we
executed go run . , let’s press control and C simultaneously. This sends an
interrupt signal to the current process, which terminates it. There are other
ways of terminating a running program, either via your computer’s task
monitor or by using the kill command, if you know the process ID of the
program you want to terminate. Once this is done, we can commit our work
before moving on to the next part - getting rid of this nil handler.

8.2 Adding endpoints


We’re implementing a web service that allows people to play a game of
Gordle. Let’s have a look again at the requirements: we need to be able to
create a game, to play a guess, and to retrieve the status of a game. We can
immediately see our service will be dealing with “game” entities - creating
them, using them, and displaying them. We’ll also need to be able to access
these games - either to play a guess or display their status. For this, we’ll
need some way of identifying a game and a way to store it. Storing will come
later, as we work from the outside in.

An endpoint, on an HTTP server, is a pair of a path and a HTTP method. The


path should reflect which resource is being used. In our case, we will be
dealing with games, which makes the path /games a good start. When we
need to identify a single game, we can use /games/{game_id} .

An HTTP method (or verb) describes the action we want to execute as we


call an address. There are several methods defined, but we’ll focus on the
following:

GET: is used when accessing a resource;


POST: is used when creating a resource or asking for data to be
processed;
PUT: is used to update a resource;
DELETE: is used to delete a resource.

Some of these endpoints - GET, PUT and DELETE, in this list - should be
idempotent - that is, several consecutive calls should all return the same
response, and should all leave the resources in the same state as calling them
just once. If your API is not idempotent, you need to explicitly tell your users
in the documentation. GET, PUT and DELETE should simply not be used for
non-idempotent endpoints - use POST instead.

We’re now ready to implement our first endpoint.

8.2.1 Create a new game

Creating a game is the first thing a player will do. Let’s start by having a look
at what goals we need to achieve. We don’t really need any input to create a
game of Gordle. If you remember Chapter 5, we launched a game with go
run main.go . As we’re adding a new endpoint, we need a pair of a path and
an HTTP method. The resources we’ll want to use are the games - the /games
path seems perfect.

Which HTTP method should we use? In this case, since we are creating a
game, we should use a POST. It could happen that you already know the
identifier of the resource to create. For instance, if we were dealing with
books, we could use the ISBN to create a book resource with the method PUT
on the following address: /books/9781633438804 .

For Gordle, we only need to create an empty game. However, in order to


keep track of it, it is paramount that we return the game’s identifier - which
will be used in the GetStatus and Guess endpoints. An identifier can take the
form of a series of digits - a phone number is an example of a digit-only
identifier - or characters - for example, a registration plate is a car’s identifier.
There are some good libraries that provide unique identifiers - for example
github.com/google/uuid . We’ll stick to random integers, as we’ve already
covered this topic in Chapter 5 when we needed to get a random word from a
list.

We’ve now defined the API for this endpoint. We know which path we want
to use, which verb should be associated with it, and what the response should
be - an identifier. The documentation would start like this:

POST /games - creates a new game and returns its ID.
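
Once the endpoint is implemented, a user should be able to exercise it with a single curl call - for instance (the exact shape of the response is defined later in this chapter):

$ curl -i -X POST http://localhost:8080/games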

Back to the code!


A few words about project organisation

There is no official rule on how to organise files within a module, nor


modules within a project. However, there are common practices that are
worth mentioning. The first important point is that a few folder names have a
specific behaviour in Go.

testdata:
We've already mentioned in Chapter 3 that directories named testdata
won't be examined by the go tool - code inside it won't be compiled by
go build or go run, tests written inside won't be executed by go test,
and documentation won't be visible through go doc.
internal:
It’s now time to introduce another special name for a directory:
internal . An internal directory can contain code for the current
module to use, but this code won’t be visible to other modules. For
instance, the module golang.org/x/text has an internal package,
where the type InheritanceMatcher is defined. However, even though
this type is exposed (because of its capital letter), we can’t create a
variable of this type in our module: the scope of types and functions
defined in a directory named internal - or a subdirectory of an
internal directory - is limited to the current module (in our example,
the golang.org/x/text module). An internal directory is a good place
to put code you don’t want other people to use. In the case of a service,
most of the code will reside there.
vendor:
We’ll mention this one for historical reasons - there is a third directory
name to know about: vendor . We won’t go through the whole history of
the language, but earlier versions of Go used to have "vendored"
dependencies - copies local to each module. These copies would be
placed inside a vendor directory - and it was a good idea to always
ignore the contents of that directory in your favourite versioning tool.
It’s best to simply never name a directory vendor , for compatibility
matters. If you really must, use vendors instead.
pkg:
You might encounter packages located in a pkg directory, at the root of
the module: module/pkg/my_package. In pkg, you can expect to find
libraries that could be used outside of your project. We do not encourage
the use of pkg, since it is not a Go standard. It is rather a historical
artefact of golang-standards/project-layout, which is not an official
standard from the Go team.

These were strict rules, and we can add some suggestions that you are free to
follow. We like to expose the API of a service in an api package - a directory
at the root of the module.

File organisation of the service

We are now prepared to organise our code and can create an api directory at
the root of our module, and an internal directory into which we'll write all
sorts of things, including our HTTP handlers in a subdirectory. Indeed, how
we implement an endpoint won’t be of any use to external developers, and
that’s why we might as well hide this within an internal/handlers
directory.

We choose to create a package for each handler. In our case, a single
package for all of them would be perfectly fine as well; however, by having
multiple packages, we can show how we would structure a larger project --
without actually writing a large project. Depending on the situation, you can
sometimes do everything in the same package (albeit in different files for
clarity) or spread the logic across different packages. Here we chose to keep
the logic (validations, calling the storage, etc.) inside the handlers (which are
responsible for everything related to the HTTP API), which means we prefer
to have a package per endpoint. As usual, think about how each package will
scale and grow when you add functionalities as you make this type of
decision. Finally, it’s important to take into account the notion of coupling.
Some structures and functions are coupled by nature and will need to be
updated together. The more coupled pieces of code are spread into different
packages, the more difficult it is to maintain code quality. As usual, it is a
balance to find, and later updates will provide opportunities to refactor the
organisation if it no longer fits your needs.

The HTTP API of our service can be exposed in an http.go file, while the
initial handler for a new game will be in a newgame/handler.go file. We’ll
bind the API to the handler in the router.go file.

Here’s our file organisation at this point:

.
├── go.mod
├── internal
│ ├── api
│ │ ├── doc.go
│ │ └── http.go
│ └── handlers
│ ├── doc.go
│ ├── newgame
│ │ └── handler.go
│ └── router.go
└── main.go

Defining the REST API

REST (REpresentational State Transfer) is a set of conventions that help


define an API on an HTTP server. It defines collections of resources and
ways to interact with them.

Let’s start with the http.go file. It should contain everything that we need to
expose to allow someone else to use the NewGame endpoint that we'll next
implement. By everything, we mean which URL should be used and which
method. If there is anything more, such as parameters to the query, we
include them in this file. Let’s create the http.go file in the package
internal/api .

Listing 8.2 internal/api/http.go: Define what is needed for the NewGame endpoint

package api

import "net/http"

const (
NewGameRoute = "/games"
NewGameMethod = http.MethodPost #A
)

We have the constants needed for the NewGame endpoint that we're about to
implement.

What should they expect in return? Sometimes creation endpoints only return
an ID. Here we would like to be more verbose and return the full game that
we created: the client of our Gordle game needs to know the number of
characters in the secret word and the maximum number of attempts allowed.
As we’re defining what a Game is in the API, we should think of every field
that we want to include. We can also tell the status of the game to let them
know whether they can keep playing and whether they won or lost already.
Finally, having a list of the previous attempts will help in the display.

Let’s define the shape of this JSON game.

{
"id": "1225482481867118141",
"attempts_left": 4,
"word_length": 5,
"status": "Playing",
"guesses": [
{"word":"slice","feedback":""}
]
}

This translates easily into a Go struct, with JSON tags as seen in previous
chapters.

Listing 8.3 internal/api/http.go: Define the API structure for a game

package api

// ...

// GameResponse contains the information about a game.


type GameResponse struct {
ID string `json:"id"`
AttemptsLeft byte `json:"attempts_left"`
Guesses []Guess `json:"guesses"` #A
WordLength byte `json:"word_length"`
Solution string `json:"solution,omitempty"` #B
Status string `json:"status"`
}

// Guess is a pair of a word (submitted by the player) and its feedback (provided by Gordle).
type Guess struct {
Word string `json:"word"`
Feedback string `json:"feedback"`
}

Note that all fields use primitive types; we are not imposing any strong typing
on our consumers.

We chose to express the feedback as a string. More on this when we start
filling it up.

We now have the structure, and it is officially published to consumers. Time
for the server to actually expose the endpoint.

HTTP multiplexer, handle, and handler

If you remember the first section, we provided a nil handler to the
ListenAndServe function. Let's have a closer look at this function's second
parameter - it's an http.Handler. This type is declared as follows:

type Handler interface {
ServeHTTP(ResponseWriter, *Request)
}

There are several ways of implementing a Handler that can be passed to
ListenAndServe. For instance, we could define a structure and have it
implement the interface:

type newGameHandler struct{}

func (h *newGameHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
...
}

And in the main function:

err := http.ListenAndServe(":8080", &newGameHandler{})

This would do the trick. However, there is a minor issue here: creating a
handler this way only allows for one endpoint to be defined - and we know
we want to implement several endpoints, at least three.

Have a look at another type provided by the net/http package: the
ServeMux. A quick go doc http.ServeMux command shows that it is a
“request multiplexer”. This means that a ServeMux is in charge of routing
requests based on the URL they were sent to. Multiplexers - also commonly
referred to as “muxes” - are the cornerstone of an HTTP service.

There are two other important points to highlight with the ServeMux : first, it
allows the registration of endpoints with the HandleFunc method, which is
what we want to achieve. Second, ServeMux has a method ServeHTTP, with
the correct signature: it implements the Handler interface, and we can pass a
ServeMux to the ListenAndServe function!

Multiplexer

Let’s start by writing the multiplexer. We will then look at the signature of
what it accepts to register an endpoint.

We need to build an http.Handler. We know that building it will at first only
take a few lines, but it will grow quickly as the service does. This is
why we write a function that builds it and returns it. All the logic of building
the mux will be isolated here.

What the function does is simply to create a new instance, make it listen to
our future endpoint and return it. We have not defined yet what the endpoint
will look like, so let’s put a placeholder first. If you want to compile to check
that everything else makes sense, nil is perfectly acceptable; although if you
use nil, don't expect a request to your service to do anything but panic.

Listing 8.4 internal/handlers/router.go: Associate a handler to a URL

package handlers

import (
"net/http"

"learngo-pockets/httpgordle/internal/api"
"learngo-pockets/httpgordle/internal/newgame"
)

// Mux creates a multiplexer with all the endpoints for our service.
func Mux() *http.ServeMux {
mux := http.NewServeMux()
mux.HandleFunc(api.NewGameRoute, newgame.Handle) #A
return mux
}

We can finally use this Mux in the main function, replacing the previous nil
handler:

Listing 8.5 main.go: Use the new Mux() function

package main

import (
"net/http"

"learngo-pockets/httpgordle/internal/handlers"
)

func main() {
err := http.ListenAndServe(":8080", handlers.Mux())
if err != nil {
panic(err)
}
}

Now, what's left is to implement the newgame.Handle handler that we passed
to the mux.

Handler for New Game

mux.HandleFunc has the following signature:

func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request))

This method registers a handler - the function we pass as the
second parameter - for the provided path. The benefit of this method over
http.Handle is that we don't have to write a new http.Handler - we simply
have to provide the handler itself, the function in charge of dealing with the
request and writing the response.

For the simplest implementation, we can simply call the Write function on
the ResponseWriter and say something to the client. Let's write the handler
in the file internal/handlers/newgame/handler.go.

Listing 8.6 handler.go: Empty newGame handler

package newgame

import "net/http"

func Handle(w http.ResponseWriter, req *http.Request) {
_, _ = w.Write([]byte("Creating a new game"))
}

This is the very first version. We’re not even checking errors - but we will, as
we get closer to our final version.

Right, we're getting there! We can now run this code with go run . and check how it
behaves. There is in fact a final step we need to achieve before we can move
on to the next section. Let’s discover it together.

If you open a browser to localhost:8080/games or run curl
http://localhost:8080/games while the service is running, you should get
a message letting you know that a game is being created. There is an invisible
parameter that is passed by these tools - we didn't specify which HTTP
method to use, and both tools chose and sent a GET by default. We want our
NewGame endpoint to only accept POST requests. Let's implement this.

What happens when a request is received?

Even though it might sound evident, a server should be able to serve several
clients at a time. The fact that someone is using an endpoint should not
prevent others from also calling it. Behind the scenes, this means that the
server must be able to not wait till a task is complete before serving a new
call. In Go, this is achieved with goroutines. Goroutines are Go’s version of
concurrent programming - the closest equivalent notion, in other languages,
is usually called threads, coroutines, fibers or green threads.
Goroutines, however, are different from threads, and we will cover
goroutines more extensively in a later chapter.

When a server receives a request, it starts a goroutine that will execute the
handler. Even though this might seem wonderful and extremely handy, we
will see that it comes with limitations. The last section of this chapter,
covering correctness, will present two important topics - race conditions, and
ensuring the server doesn’t explode when attempting to serve two requests at
the same time. As seen in the previous chapter, instead of simply running go
test . we will add the -race flag.
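
For example, to run all the tests of the module with the race detector enabled:

$ go test -race ./...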

HTTP codes and header

A game of Gordle should only be created when a POST request is received.
But what should we do when we receive the wrong method, and how do we
implement this?

The answer to the first question is clear: we should reject the message. There
is actually a specific HTTP code for wrong verbs, so we might as well use it:
http.StatusMethodNotAllowed (see Table 8.1). And as to where we should
make this check, the only logical place is within the Handle function.

The HTTP response status code is set by a call to WriteHeader. If we receive
a request for anything other than what we declared in the API, we can
terminate the call then and there.

Here is the implementation of the check in the Handle function, in
the file internal/handlers/newgame/handler.go.

Listing 8.7 handler.go: Reject requests using the wrong method

func Handle(w http.ResponseWriter, req *http.Request) {


if req.Method != api.NewGameMethod {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
_, _ = w.Write([]byte("Creating a new game"))
}
As you can see, this is not really the same error handling as in “regular” Go.
We have to adapt to the HTTP protocol, and this means communicating
errors through status codes. Since there is no point in trying to process a
request that was invalid, we can safely return from our handler. The error -
invalid method - has been dealt with and there’s nothing else we want to do.

Let’s run our previous test of starting the service and checking the
http://localhost:8080/games page through various tools. Depending on
your browser, you could see a 405 error. However, this could also not appear
- Firefox didn’t display anything in our tests, while Google Chrome did. Let’s
have a look with curl :

$ curl localhost:8080/games

Nothing. A completely empty response. This is quite problematic, as it makes
checking our implementation more difficult. Fortunately for us, curl also
comes with options, and an interesting one is --verbose, or -v, which prints
a lot more information. Let’s give it a try - you should get a somewhat similar
output:

$ curl -v localhost:8080/games

Output:

* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /games HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.81.0
> Accept: */*
>
< HTTP/1.1 405 Method Not Allowed
< Content-Length: 0
<
* Connection #0 to host localhost left intact

Now that’s more like it. Lines starting with > are header data sent by the
client. Lines starting with < are header data returned by the service. The first
line to notice here is the first header data sent - we did send a GET method.
Indeed, curl uses a default verb when sending a request, if none was
explicitly provided - in this case, a GET method. The other interesting line
from this output is the first found in the response section: we can see the
server returned a 405 status code and its explicit meaning “Method Not
Allowed”- which is what we are expecting.

curl allows for specific methods to be used via the --request, or -X, option.
In Postman, you'd simply change the method by using the dropdown
box. From a web browser, things get a bit tricky. Sometimes, it's
possible to send a POST - or a PUT, a DELETE, or anything you'd like - using
the developer's settings, but, in most cases, we've reached the limits of what
browsers offer for our purpose. From now on, we'll limit the scope of testing
with external tools to curl - Postman is quite intuitive for all basic uses and
doesn't need guidelines. Let's shoot a POST at our endpoint:

$ curl -v -X POST localhost:8080/games

Output:

* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /games HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.81.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 19
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host localhost left intact
Creating a new game

This is almost exactly what we wanted. Almost, because there’s a rule in the
HTTP protocol: an action that creates a resource should return a status
describing that the resource was created. Here’s what raised our attention in
the pair of exchanged headers: the status code of the response is 200, which
stands for “OK”. This status code shouldn’t be used when creating a game;
instead, according to HTTP standards, we should be using 201 , which stands
for “Created”.

Here is a table of the most common HTTP status codes, as described in
the RFC 9110 documentation at
https://datatracker.ietf.org/doc/html/rfc9110#name-status-codes. This
documentation contains the full details regarding the HyperText Transfer
Protocol (HTTP), its standards and status codes. Make sure to look up
the meaning of 418 and where it comes from.

Table 8.1 Most common HTTP status codes

Code Meaning

200 OK - the server is returning what the client asked for

201 Created - the server processed the request and sent the newly created resource as response

202 Accepted - the server will treat the client's demand later - typically used for asynchronous processes

204 No Content - the server correctly fulfilled the request and there is no content to return. It is typically used by POST, PUT and DELETE commands

400 Bad Request - the server cannot process the request due to something that is perceived to be a client error (e.g. malformed request, missing mandatory fields)

401 Unauthorised - the request lacks valid authentication credentials for the target resource; the client needs to authenticate

403 Forbidden - the server understood the request but refuses to fulfil it; the client is not allowed to perform that action

404 Not Found - the server did not find a current representation for the target resource or is not willing to disclose that one exists

500 Internal Server Error - the server encountered an error and doesn't know how to deal with it

Let’s bring this final change to the code before we can wrap it up and move
on to the next endpoint.
The function in charge of writing this status code is the handler. So, let's
open internal/handlers/newgame/handler.go and include the call to write the
status code. We'll use the WriteHeader method, which takes a single
parameter - the status code to be carried along with the response.

Listing 8.8 handler.go: Set the status code of the successful response

func Handle(w http.ResponseWriter, req *http.Request) {


[...]
_, _ = w.Write([]byte("Creating a new game"))
w.WriteHeader(http.StatusCreated)
}

Now, if we restart our server, and we run:

$ curl -v -X POST http://localhost:8080/games

We should see that the response status code is now 201, shouldn’t we? Well,
unfortunately, it’s not. It’s still 200. What’s happening? Well, if we have a
look at the terminal in which the server is running, we should see a line
similar to this one:

http: superfluous response.WriteHeader call from learngo-pockets/httpgordle/internal.newGameHandler (newgamehandler.go:16)
What does this mean? We only have one call to WriteHeader, how can it be
superfluous? It turns out that the w.Write call already sets the status to a
default http.StatusOK code. WriteHeader ignores any subsequent calls once
the status is set, which means it's basically an ordering question between
w.Write and w.WriteHeader as to which code will be set. It's a rigged race,
because we're in charge of it, and we control which one is called first. Let's
adapt the code above to complete this section in the same file,
internal/handlers/newgame/handler.go:

Listing 8.9 handler.go: Set the response message

func Handle(w http.ResponseWriter, req *http.Request) {


if req.Method != api.NewGameMethod {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte("Creating a new game"))
}

Great! After restarting it, we can observe that the server now behaves as
expected: it returns the correct status code and tells us it’s creating a game -
shouldn’t we believe it? We’ve checked that it only accepts POST requests.
The pesky line about superfluous calls to WriteHeader is now gone.

We can happily commit and move forward, but there is a way to make our
code shorter.

Open-source multiplexers

In the previous section, we made an assumption. We assumed that the list of


endpoints was set in stone and that we would never have to implement
anything else. This meant that we were able to associate a path to each
endpoint. But suppose we are now informed that we need a new endpoint,
something to be able to track the games currently being played. The best
URL for this would be /games , and the method we’d want to use would be
GET. There’s the rub. We already have a handler for this path, and our handler
ensures the method is POST. If we were to implement this new ListGames
endpoint, we'd have to change how our newGameHandler is implemented. As
a matter of fact, it simply wouldn’t be a newGameHandler at all! It would
listen to a given path.

This is one of the complaints that can be made against the http.ServeMux
type: creating several endpoints (for different verbs) for the same path is
cumbersome. For this reason (and a few others that we'll cover as we meet
them), writing a custom implementation of the http.Handler interface is
quite common, and many of the most-starred Go projects on GitHub are about
this. At the time we are writing, there is an open proposal by the Go team
to add this feature to the standard library
(https://github.com/golang/go/discussions/60227). Here are some popular
picks:

github.com/go-chi/chi: it allows for simple implementation of endpoints
github.com/gorilla/mux: it is very complete and robust
github.com/gin-gonic/gin: this is the most popular mux on GitHub

We picked chi for its conciseness and the rate of maintenance by the
community. Let’s quickly rewrite the router with this library. First, we need
to get the dependency. For this, we need to tell the go tool to add it to our
go.mod file with the following command:

$ go get -u "github.com/go-chi/chi/v5"

You might notice that there is a trailing v5 here. If you try to access this URL
in a browser, it will return an error. However, this GitHub repository has been
tagged with v5.0.0 at some point (and with more tags as time passes), and
using /v5 in the go get command is how we ensure we use a version
compatible with v5 - it could be v5.0.0 or v5.0.8, both of which offer the same API.
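
After running this command, the go.mod file will contain a require directive similar to the following (the exact version depends on when you run it; v5.0.8 is only illustrative):

require github.com/go-chi/chi/v5 v5.0.8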

We can then change the router:

Listing 8.10 internal/handlers/router.go: Use the chi library

package handlers

import (
"github.com/go-chi/chi/v5" #A

"learngo-pockets/httpgordle/internal/api"
"learngo-pockets/httpgordle/internal/handlers/newgame"
)

// NewRouter returns a router that listens for requests to the following endpoints:
// - Create a new game;
//
// The provided router is ready to serve.
func NewRouter() chi.Router { #B
r := chi.NewRouter()

r.Post(api.NewGameRoute, newgame.Handle) #C

return r
}

Alternatively, if you want to make use of the NewGameMethod defined in the
API for the sake of your users, instead of calling the Post method, you can
use another function:

r.MethodFunc(api.NewGameMethod, api.NewGameRoute, newgame.Handle)

This is more verbose but asserts that the server uses what its API exposes.
Otherwise, just remove the constant and let users read the documentation.

Exercise 1: Use a walk function to print the method and route of each
registered handler.
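
One way to approach it, assuming we use the chi.Walk helper to visit every route registered on a router r:

err := chi.Walk(r, func(method, route string, _ http.Handler, _ ...func(http.Handler) http.Handler) error {
log.Printf("%s %s", method, route)
return nil
})
if err != nil {
log.Printf("listing routes: %s", err)
}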

We can remove the check for the method in our handler, because chi makes
sure we never get called with any other method. Handle is now done in two
lines. We can even use the occasion to actually return a game, as defined by
the API.

Listing 8.11 newgame/handler.go: Return a Game response

func Handle(w http.ResponseWriter, req *http.Request) {


w.Header().Set("Content-Type", "application/json") #A
w.WriteHeader(http.StatusCreated)

apiGame := api.GameResponse{} #B
err := json.NewEncoder(w).Encode(apiGame) #C
if err != nil { #D
// The header has already been set. Nothing much we can do here.
log.Printf("failed to write response: %s", err)
}
}

Try it. Now we need to test it.

Testing the game creation

This new, shorter version of the handler should be easier to test: fewer lines of
code means fewer bugs. What could be a blocker is that it takes two rather
complicated parameters that we need to mock or stub.

Lots of people write services, which is why libraries exist to make the job
faster. Lots of people test these libraries too, so we don't need to stub these
types ourselves. It's always good to limit the tests to the code you've written.

We can easily create a request with http.NewRequest. For the writer, Go has
the built-in package httptest, with a ResponseRecorder type and a
NewRecorder constructor function.

We are making use here of two widely used testing packages, require and
assert. Their use can be controversial, since some purists will recommend
sticking to the standard library; nevertheless, we will use them so you can
familiarise yourself with them. They are found in the module
github.com/stretchr/testify, so let's start by adding this module to our
project with go get github.com/stretchr/testify.

testify’s assert and require:

testify is a very popular library for testing Go code. It offers a lot of


validation tools, the most commonly used ones being its two packages
assert and require. Each of these packages offers similar functions - checking
whether an error is nil or whether two values are equal - but they have very
different behaviours:

If a function in the assert package notices something wrong, a message
will be displayed, and the execution of the test carries on.
If a function in the require package notices something wrong, a message
will be displayed, and the execution of the test is immediately
terminated.

This helps drive which of assert or require we want to use: the former when
we need to check several values, and the latter when we know there is no
point in continuing the test if something is wrong.

You should have everything in hand now to write the test for the nominal
behaviour.

Listing 8.12 newgame/handler_test.go: Testing Handle

func TestHandle(t *testing.T) {


req, err := http.NewRequest(http.MethodPost, "/games", nil) #A
require.NoError(t, err)

recorder := httptest.NewRecorder() #B

Handle(recorder, req) #C

assert.Equal(t, http.StatusCreated, recorder.Code) #D


assert.Equal(t, "application/json", recorder.Header().Get("Content-Type")) #D
assert.JSONEq(t, `{"id":"",...}`, recorder.Body.String()) #D
}

This was our first endpoint - our service now supports the creation of (empty)
games. That was a big step, but we’ve covered a good many important
aspects of web services. Before we start playing, our next task is to ensure we
can get the status of a game given its identifier.

8.2.2 Get the game status

Here’s the picture so far: we have a service that allows for the creation of
Gordle games. Of course, the end goal is to have players make guesses, but
the second endpoint we'll describe here is the GetStatus one. It contains a
tiny bit more than the first one, as it introduces only a single new notion:
reading a variable input from the user. This time, we can’t just always return
“a game” - we need to be able to identify which game the user wants to view.

Providing parameters to an HTTP API

There are four main ways for a user to communicate parameters (or variables)
to an HTTP web service. We will see that they are each appropriate for
specific use cases.

Path parameters;
Query parameters;
Request bodies;
Headers.

Path parameters are used when we want to target a single resource. In our
Gordle server, an identifier can be used to target an instance of a game. The
path to target a game should be /games/{gameID}.

Indeed, it's a common practice to use /items/{itemID} in REST APIs (more
than the singular version, /item/{itemID}). Sometimes, we can accept more
than one path parameter. Twitter, for instance, uses
https://twitter.com/{user}/status/{messageID} to display a message
by a user. Similarly, Wikipedia uses path parameters to access its articles:
http://jp.wikipedia.org/wiki/金継ぎ, where the identifier of the article is
金継ぎ. Our GetStatus endpoint will implement this for the Gordle service.

Query parameters are used to filter the resources we want to target with our
request. A filter is a list of pairs of keys and their associated value. There
could be 0, 1, or many results - we don't know, and we can't make any
assumptions. These query parameters are passed in the URI of the request,
but at the end of it, separated from the URL by a ? character. These
parameters aren't specific to an endpoint and could be used in several places
of the API. Their syntax is {{path}}?key1=value1&key2=value2. The most
common example is Google's search engine: as there can't (reasonably) be a
dedicated resource per possible query that people ask Google, each query is
sent to their servers as a query parameter, where the key is q and the value
is what was keyed in by the user: https://www.google.com/search?q=金継ぎ. We
will show how to use a query parameter at the end of this chapter to improve
the NewGame endpoint by allowing the caller to specify which language they
want to use.
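
As a preview, and assuming the handler receives its *http.Request as req, reading such a parameter takes one call to the standard library (the "lang" key and the "en" fallback are only illustrative):

lang := req.URL.Query().Get("lang")
if lang == "" {
lang = "en" // fall back to a default language
}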

While path and query parameters should be used to specify which resources
we want to operate on (retrieve, delete, update), or what characteristics these
resources should have, we sometimes need to provide parameters inherent to
the request itself. When we need to send data to the service, we use body
parameters. This name derives from the fact that they will be transmitted to
the service as part of the request’s body. So far, we haven’t seen request
bodies, but this will be the point of the third endpoint - Guess.

Finally, some parameters are “meta”-parameters - they don't affect the
execution but, for instance, specify the format of the output or describe some
information about the caller. These parameters are passed in the headers of
the query - just as the status code is passed in the headers of the response.
Headers are usually the place where we store authentication information.
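
Reading a header works in a similar way; for instance, an authentication token could be retrieved like this (illustrative only):

token := req.Header.Get("Authorization")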
That’s enough theory, let’s start implementing our new endpoint!

Define the HTTP API for GetStatus

As we did earlier, we need to declare the path to this new endpoint, and the
method that we expect when it is called. We place these two values in the api
package to make them visible to other users. We want to retrieve the status of
the Game resource without changing it - a GET will do. But the path is a bit
more complex! In a REST API, every request must contain all the
information needed to identify which resource it is targeting. In our case, this means
the request needs to contain the ID of the game. As we’ve seen previously, a
path parameter is a common way of providing this identifier:

/games/{game_id}

Indeed, how do we represent a path that is not constant? That’s where the
default net/http package is a bit too strict - and this is one of the reasons
that pushed developers into writing their own routing libraries, such as chi.
We want to be able to access a game’s status via the path /games/8476516 ,
where 8476516 would be the game’s identifier. Obviously, we can’t create
billions of routes - one per identifier - so, instead, we’ll let the chi library
determine how a path parameter should be handled.

Let’s have a look at chi ’s documentation. Since chi isn’t a package of Go’s
SDK, we need to be in a module to be able to read it. If we run go doc
github.com/go-chi/chi/v5, we can read that curly braces around a word are
used to represent placeholders in a path. This means we can use
/games/{game_id} and chi will be able to extract the identifier in the
handler. Achieving this with Go’s SDK would require lots of security checks,
as we’d be splitting the full path into bits separated by slashes. It’s doable,
but it would take a lot more lines than what these libraries offer.
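
To give an idea of the difference, here is a rough sketch (not code we will use) of what extracting the identifier by hand, with only the standard library, could look like inside a handler (assuming the usual w and req parameters):

// Manually split the path and validate each segment ourselves.
parts := strings.Split(strings.Trim(req.URL.Path, "/"), "/")
if len(parts) != 2 || parts[0] != "games" || parts[1] == "" {
http.Error(w, "invalid path", http.StatusNotFound)
return
}
id := parts[1]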

Listing 8.13 api/http.go: Add constants for GetStatus endpoint

const (
NewGameRoute = "/games"

GameID = "id"
GetStatusRoute = "/games/{" + GameID + "}" #A
)

Defining a constant for the GameID placeholder will be useful when we want
to extract it from the request, in the handler. This is our next step.

Note that we are concatenating strings with + here. As GetStatusRoute is a
constant, the concatenation happens only once, during compilation. For this
reason, we don't need to use a string buffer.

Before we start listening to this route, there is one more definition we want to
specify in our documentation: what is the expected status code? This request
is asking for a resource, so the possible responses should be "200 here it is",
or "404 not found" if the game doesn’t exist. Of course, 500 internal server
error" is always a possibility but we want to avoid it as much as possible.

Implement getStatus

Create a package for the new handler, ideally
internal/handlers/getstatus, with a file for the principal Handle function.
If you decide that copy/pasting from the newgame package is a good option,
don’t forget to rename the package in both new files - in all three files, if you
dutifully added a doc.go .

For now, the implementation of the GetStatus endpoint will be limited to
printing the game identifier that the caller passes as a path parameter: we
don’t have any storage yet, we just want to make sure that we know how to
parse the ID.

Add the path to the router. There is no specific priority when adding handles
to a mux. Grouping things based on resources and related behaviour is usually
what makes the most sense.

...
r.Post(api.NewGameRoute, newgame.Handle) #A
r.Get(api.GetStatusRoute, getstatus.Handle) #B
...

Now we need to write this getstatus.Handle function. It must have the
same signature as in the first endpoint (because it's an http.HandlerFunc). The
chi library exposes a useful URLParam function: take a look at the
documentation before using it. If anything goes wrong, we'll call http.Error,
which writes a message and a status code to the response writer. Remember
to always return after a call to http.Error - this prevents any further writing
to the response. Let's write in the file internal/handlers/getstatus/handle.go:

Listing 8.14 handle.go: Status endpoint handler

func Handle(writer http.ResponseWriter, request *http.Request) { #A


id := chi.URLParam(request, api.GameID) #B
if id == "" { #C
http.Error(writer, "missing the id of the game", http.StatusBadRequest)
return
}
log.Printf("retrieve status of game with id: %v", id)

apiGame := api.GameResponse{
ID: id,
}
// ... encode into JSON
}

For the sake of simplicity, we decided to use the standard log package. Keep
in mind that, with concurrent requests, log lines from different goroutines can
end up interleaved in an arbitrary order, which can complicate later testing.

The validation of the ID could be more thorough: we could check that it is
only digits, or that it follows whatever constraints we have put in for security.
We will forget this for the sake of keeping our project pocket-sized, but keep
it in mind if you push something to production.

Run the server and call the endpoint. Does it return the ID you passed in the
URL? Congratulations. You can commit. But wait, what about the test? Good
thing you asked.

Testing getStatus

The test here is not much different from the newGame version. We only need
to add the ID to the list of URL parameters. This requires dipping our first
toes into the notion of context, which will be covered in more detail in a later
chapter. For now, what needs to be understood is that the context is where chi
reads URL parameters - and this means it's how we need to add them.

Listing 8.15 getstatus/handler_test.go: Testing Handle

func TestHandle(t *testing.T) {


req, err := http.NewRequest(http.MethodGet, "/games/", nil) #A
require.NoError(t, err)

// add path parameters


rctx := chi.NewRouteContext()
rctx.URLParams.Add(api.GameID, "123456") #B
req = req.WithContext(context.WithValue(req.Context(), chi.RouteCtxKey, rctx))

recorder := httptest.NewRecorder() #C

Handle(recorder, req)

assert.Equal(t, http.StatusOK, recorder.Code)


assert.JSONEq(t, `{"id":"123456","attempts_left":0,"guesses":[],"word_length":0,"status":""}`,
recorder.Body.String())
}
The rest, you can guess. Now take the time to use the chi library by updating
the Handle() function of the NewGame endpoint in the file
internal/handlers/newgame/handler.go.

We can now create games and retrieve them. This allows us to make sure
everything is now ready for our third and last endpoint - guessing!

8.2.3 Guess

Finally, in order to play, the player must be able to send a query with their
word and get a feedback message. What will the API of the third endpoint
be?

Request definition

Adding a new endpoint means choosing a new pair of path and method. What
method are we going to use? We are changing an already-existing resource,
so PUT is in order. The path is fairly straightforward - it’ll be
/games/{game_id} , similarly to the getStatus endpoint, as that’s the
resource that we’ll be interacting with.

But then, the endpoint needs to receive parameters. It will read the guess
from the request body, encoded in JSON because we are following HTTP and
REST standards. In some cases, updating a resource requires sending its full
description - in that case, we would have the same JSON structure for the
response of the POST and GET and the request of the PUT. Here, changing
the status of a game requires only sending a word, as long as we’re providing
the ID, so we will go for the simplest JSON object.

{"guess":"hello"}

This translates into a Request that we can define in the api package for
others to use.

Listing 8.16 api/guess.go: Request definition

// GuessRequest is the structure of the message used when submitting a guess.


type GuessRequest struct {
Guess string `json:"guess"` #A
}

Add the path to the endpoint to this file. The path is the same as for
GetStatus, but nothing proves that it will always be, so we need another
constant. Using the same path constant means forcing them to always be
identical, by design. We don’t want that - each endpoint deserves to have its
path as a constant.

GuessRoute = "/games/{" + GameID + "}"

What we do want to force by design is the consistency of the GameResponse
structure. Both GetStatus and Guess endpoints return a full game, and we
don’t want the definitions to diverge. For this, we can simply use the same
structure as the response in both endpoints.

Decoding a request body

We have an API. Time to write a new handler in a new package and plug it
into the router. Nothing needs new explanations, so we can wait while you
prepare the handle function. You can even run it with a simplelog.Printf to
make sure it works as expected.

What will this new handler do? First, it should parse the ID of the game,
exactly like we did in the previous endpoint. No surprise here.

Second, it should parse the body of the request. You already know from
previous chapters how to decode JSON messages into a Go structure. In a
flash of genius, somebody thought that the Body field of an http.Request
should implement the io.Reader interface - let's make use of that!

Listing 8.17 guess/handler.go: How to read a request body

// Read the request, containing the guess, from the body of the input.
r := api.GuessRequest{} #A
err := json.NewDecoder(request.Body).Decode(&r) #B
if err != nil {
http.Error(writer, err.Error(), http.StatusBadRequest) #C
return
}

Print out your findings into the logs and check what happens when you shoot
a curl at the service:

$ curl -v -X PUT "http://localhost:8080/games/123456" -d '{"guess":"hello"}'

Note the -d flag for passing a body. You can also use -d @filename if your
request body is saved in a file.

Does everything show properly on the logs? Did you write a test?

Testing with a request body

If we run the same test on this handle function as we have in GetStatus, it will
panic: the body of our request is nil, so we are trying to decode from a nil
reader, and this doesn't end well. The only change that needs to be made is
quite short: the http.NewRequest function that we've used to create requests
so far takes, as its third parameter, an io.Reader, which is quite simple to
create from a string in Go (remember to use backticks ` to wrap a string that
contains double quotes " without having to escape them):

body := strings.NewReader(`{"guess":"pocket"}`)
req, err := http.NewRequest(http.MethodPut, "/games/123456", body)

We have the full structure of our service. We have three endpoints that return
something. This is a good time to deploy the service to a testing
environment and have other people play with it. It's the situation where you
send a link to your shiny new service to the rest of the team with a long
message asking them to try it, warning them that it does nothing yet, and
somebody will reply saying "Hey, nice work but I found a bug: how come it
always returns an empty game?".

Well, let’s fix that.

8.3 Domain objects


Each player’s game must be stored somewhere, a concept which is
commonly referred to as “repository” - or “repo”. Out of the very many
options, the cheapest and fastest is to store it in memory. This has a lot of
downsides: if you decide to scale up a little and deploy more than one
instance, the game will only be stored in one, meaning that if your Guess
query hits an instance on which your game isn’t registered, you get a 404;
also if the instance goes down for any reason, your game is lost.

We want this project to fit in a pocket, so we will go for this option for now.
In a bigger project, we’d use a proper database.

Separation of concerns

One of the main ways to tell whether a project’s code is clean is the
separation of concerns. In theory, each package, each structure, should have
a defined role that can be explained in one sentence, and this sentence is the
first line of its documentation. Most of the time, one sentence is enough to
cover everything. In practice, this rule can introduce complicated
communication between highly coupled ideas; as mentioned before, there is
always a tradeoff to find between separating and keeping things together.

If the responsibilities of a package don't fit into a handful of words, it is
generally a red flag: future maintainers will not know where to find which
piece of logic and will end up throwing everything away and rewriting it, not
necessarily better. Sometimes we rewrite code only in order to understand
what is going on.

Whether you choose hexagonal architecture, lasagne model-view-controller,
or any other chimaera that fits your needs, most of the time you want to
isolate the data storage management from the API details. You will see some
cases where data storage and API design need to co-evolve to be performant,
but they are not the majority.

Software theorists often give an example of the perfect data-storing package:


“Look, you are using MariaDB, and you just need to change this package
import and boom, you are using DynamoDB”. This is nice, but it never
happens: one does not simply change their storage system. What one does,
though, a lot, is maintain it, and knowing that all the DB-related things are
here and all the non-DB-related things are not-here will help everyone a lot in
the long run.

8.3.1 Domain types

Create a new package for the data repository. Do you want other modules -
other developers - to use it? No. That means internal/repository will do.
This package is responsible for storing and retrieving games. Here, that was a
one-sentence documentation.

Here is the structure of our code so far:

Figure 8.1 Structure of the service


We have an api package that other modules can import, and an internal
handlers package directly called by our main .

If we go towards a hexagonal architecture, also known as ports and adapters,


and add to it the code domain, the library that we will copy from Chapter 4,
and the new data storage, we should aim for something like this:

Figure 8.2. Target structure of the service

This is a bit too complex for the size of the service, so we will keep the
domain logic inside the handlers package, but keep in mind that the core
logic and the API details should be two distinct things, with an easy-to-draw
boundary. Each different layer of our design should serve a specific purpose.

What we call the domain here is the core of the service, the business logic of
our program. All adapters should be able to rely on it, and it should rely on
none of them. This way, we prevent circular dependencies in the code and
circular knots in our brains.

The package can be simply called “domain”, “core”, or with a more specific
but limiting name. In our case, we find that it deals with a player’s session.

In an internal/session package, create a Game struct. This should contain
everything that the service needs in order to interact with Gordle games.

We know we want to respond to the API's needs with a Game structure:

Listing 8.18 internal/session/game.go: Define the Game structure

// Game contains the information about a game.


type Game struct {
ID GameID
AttemptsLeft byte
Guesses []Guess
Status Status
}

As seen above, we need new types. Let's define what a GameID is and what
a Status is, in the same file as above.

Listing 8.19 game.go: Define the identifier and status types

// A GameID represents the ID of a game.


type GameID string #A

// Status is the current status of the game and tells what operations can be made on it.
type Status string #B

const (
StatusPlaying = "Playing" #B
StatusWon = "Won"
StatusLost = "Lost"
)

We also want to expose the guesses, as we need to carry them around, store
them, and return them.
Listing 8.20 game.go: Define the type Guess

// A Guess is a pair of a word (submitted by the player) and its feedback (provided by Gordle).
type Guess struct {
Word string
Feedback string
}

Good start. It is not yet enough to play, but that will happen in the next
section.

One thing we can already add to the domain is a business error: if the player
sends a new guess after the game is over, we should be explicit about the
problem. You can either define a custom error type or do it the short way:

// ErrGameOver is returned when a play is made but the game is over.


var ErrGameOver = errors.New("game over")

One of your authors really doesn’t like exposing global variables. You are old
enough to decide for yourself.
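
If you prefer the longer route mentioned above, a dedicated error type could look like the following sketch (the GameOverError name is ours):

// GameOverError is returned when a play is made but the game is over.
type GameOverError struct{}

func (e GameOverError) Error() string {
return "game over"
}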

8.3.2 API adapters

On the API side, we can start manipulating our new Game object.

NewGame

Take for example the first endpoint: NewGame. This is how the handler
currently creates the API version of the game:

func Handle(w http.ResponseWriter, req *http.Request) {


w.Header().Set("Content-Type", "application/json") #B
w.WriteHeader(http.StatusCreated) #B

apiGame := api.GameResponse{} #A
err := json.NewEncoder(w).Encode(apiGame) #B
if err != nil //...
}

What is the responsibility of the Handle function? Dealing with the API
details: if the client needs a different flag or a different format, this is where it
should happen.

We decided to keep the business logic in the same package, but it doesn’t
mean we should keep it in the same function. Instead, we want a function that
creates and saves the game somewhere, and another to reshape it into what
the clients want.

Listing 8.21 newgame/handler.go: Using session.Game

func Handle(w http.ResponseWriter, req *http.Request) {


game, err := createGame() #A
if err != nil {
log.Printf("unable to create a new game: %s", err) #B
http.Error(w, "failed to create a new game", http.StatusInternalServerError)
return #C
}

w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated) #E

apiGame := response(game) #D
// ...
}

func createGame() (session.Game, error) {


return session.Game{}, nil
}

func response(game session.Game) api.GameResponse {


return api.GameResponse{}
}

This way, each function has a well-defined job, and we can test units rather
than big blobs. It also becomes easier to parallelise the work inside a team,
merge changes made by multiple people, and most importantly, understand
what we are reading.

GetStatus

Let's move on to the GetStatus endpoint. The response format is identical,
which means the response function can be reused. Should it live in the
newgame package and be exposed to the getstatus package? That would
make very little sense to the next person looking for it in order to change it.

The more straightforward solution here is to have it in an internal/api
package with this function. We avoid import cycles, and whoever comes next
will understand what is inside.

Now the name response doesn't make sense anymore. It converts a
session.Game into a GameResponse. We can already write most of its logic
too, and even start testing it.

Listing 8.22 convert.go: Adapter between API and domain

// ToGameResponse converts a session.Game into a GameResponse.
func ToGameResponse(g session.Game) GameResponse {
apiGame := GameResponse{
ID: string(g.ID),
AttemptsLeft: g.AttemptsLeft,
Guesses: make([]Guess, len(g.Guesses)), #A
Status: string(g.Status),
// TODO WordLength #D
}

for index := 0; index < len(g.Guesses); index++ {


apiGame.Guesses[index].Word = g.Guesses[index].Word #B
apiGame.Guesses[index].Feedback = g.Guesses[index].Feedback
}

if g.AttemptsLeft == 0 { #C
apiGame.Solution = "" // TODO solution #D
}

return apiGame
}

Testing this presents no trick: we need to check that an input game would
come out as expected in a new shape. You can use go test -cover or your
IDE’s UI to make sure that you are covering all code branches.

Update the GetStatus handler and run its tests. Still happy? If, in the previous
version, your guess slice was nil and it is now initialised, the test should fail. You
should be able to fix it by replacing the JSON value null with an empty array [].
Guess

Finally, the last endpoint. What does the handler do? Let’s have a look.

func Handle(w http.ResponseWriter, req *http.Request) {


// ... decode request

apiGame := api.GameResponse{
ID: id,
}

// ... encode response


}

We can already replace this naive initialisation of an api.GameResponse with
two calls: a new local (unexposed) function with the business logic, and a
conversion into the API structure.

Listing 8.23 guess/handler.go: Using session.Game

// Handle is the handler for the Guess endpoint.
func Handle(w http.ResponseWriter, req *http.Request) {
id := chi.URLParam(req, api.GameID)
// ...
r := api.GuessRequest{}
// ...

game, err := guess(id, r) #A


if err != nil {...}

apiGame := api.ToGameResponse(game) #B

w.Header().Set("Content-Type", "application/json")
// ...
}

func guess(id string, r api.GuessRequest) (session.Game, error) {
return session.Game{
ID: session.GameID(id),
}, nil
}

The order of things is always the same: decode the request, validate any input
that requires validation, call the business logic, convert the returned domain
type into API-readable structures and encode the response.

If your tests pass, you can commit. Try running the service and shooting a
few curls at it to see how it behaves.

If you deploy again to the testing environment, that unhappy teammate who
did not read your warning will still be unhappy: the games are still as empty
as before.

8.4 Repository
At this point, we can ask the question of priorities: we still have two main
tasks at hand: saving the game and allowing clients to play. Which one
should we tackle first?

One way of answering is, which one will show the most progress? If we start
with the storage, the first two endpoints will be finished, we will see how
they integrate together in an end-to-end test (or at least end-to-midway,
because we won’t be able to play). If we start with playing, we will be
playing on a new empty game at every guess, so testing will be difficult and
flaky. Storage it is, then.

If we were using some proven technology for data storage, for example, a
SQL database, we would need some extra fields related to storage that the
domain does not need: think of fields like created at, deleted at, versioning…
In that case, we would create a new Game structure, and adapters between the
domain and the repository, just like we did for the API. That way, the schema
of our database would be able to evolve independently from the domain or
API.

Because we are aiming for the fastest storage option, we don’t have any
technology-related constraints. It means we can keep the domain structures.

8.4.1 In-memory database

There are loads of database options out there, all of them ideal for a limited
set of situations. As we explained before, in-memory is ideal for no situation,
but fast to write.

What it means is that we will keep a variable to store the games. All the
operations we do on games rely on their ID, so a key/value storage is perfect.
In Go, this takes the form of a map.

Let’s create a package for it. Same question as before: do you want external
modules to use your repository? Please, no. It would invalidate the whole
point of the service and its API if clients went directly to the DB. Any
additional security or logic that you would add (sending events, leaderboards,
etc.) would be immediately buggy.

Simplest repository

We have covered in previous chapters how to create an object that will work
as a dependency. If we were using an external database, we would initialise a
connection when our server starts and keep it as a dependency of the whole
service. Here we will initialise the map instead and keep it in the same way.

Let's create the repository structure. It will hold methods such as Find and
Update.

Listing 8.24 repository/memory.go: Declare the structure

// GameRepository holds all the current games.


type GameRepository struct {
storage map[session.GameID]session.Game #A
}

Of course this requires initialisation, so we need a New() function.

Listing 8.25 memory.go: New function for the repository

// New creates an empty game repository.


func New() *GameRepository {
return &GameRepository{
storage: make(map[session.GameID]session.Game), #A
}
}
We could have an Upsert method, for both creating and updating. But in our
case, we know that some specific use cases create new games and others
only work on existing ones, so we prefer to separate them. It will allow us
to validate that we are not trying to create the same entity twice or to insert a
game where we have already been playing a few guesses.

Listing 8.26 memory.go: Add a game to the storage

// Add inserts for the first time a game in memory.


func (gr *GameRepository) Add(game session.Game) error {
_, ok := gr.storage[game.ID]
if ok {
return fmt.Errorf("gameID %s already exists", game.ID) #A
}

gr.storage[game.ID] = game

return nil
}

We will come back to this error. If we want to check for it specifically in the
calling code, it is currently difficult.

On Your Own: You can write the Find and Update methods. The former
needs to retrieve from the map and return some kind of error if nothing is
there with the given ID. The latter should also prevent insertion and only
accept overwriting an already-existing value.
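
If you want to compare notes, here is one possible shape for Find - a sketch only, and yours may differ:

// Find returns the game with the given ID, or an error if it is unknown.
func (gr *GameRepository) Find(id session.GameID) (session.Game, error) {
game, ok := gr.storage[id]
if !ok {
return session.Game{}, fmt.Errorf("can't find game %s", id)
}
return game, nil
}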

You can also write unit tests on the four functions. Here we would useNew in
the other 3 tests and consider the job done for it: there is no particular trick to
it.

The package works, it is tested. How do we use it, though?

8.4.2 Service-level dependency

As we said, we want the repository to be initialised on startup and passed to


the service router as a dependency. Let’s look back at what our main does.

func main() {
    err := http.ListenAndServe(":8080", handlers.NewRouter())
    if err != nil {
        panic(err)
    }
}

We can easily add the initialisation and pass the new variable to the router.

Listing 8.27 main.go: Using the repository

func main() {
    db := repository.New() #A

    err := http.ListenAndServe(":8080", handlers.NewRouter(db)) #B
    if err != nil {
        panic(err)
    }
}

How does the router pass this to the handlers? Our NewRouter function does
not call the handlers, it only gives the router a reference to them, so we
cannot simply add a parameter. What we can do instead is turn our Handle
functions into anonymous functions that are created on startup.

Let's anonymise the Handle functions and wrap them in a Handler function
that takes a repository as a parameter and returns the previous handler as an
http.HandlerFunc.

Listing 8.28 newgame/handler.go: Using the repository

// Handler returns the handler for the game creation endpoint.
func Handler(db *repository.GameRepository) http.HandlerFunc { #A
    return func(w http.ResponseWriter, _ *http.Request) { #A
        game, err := createGame(db)

        // ...
    }
}

The contents are the same so far. We can now update the router.

Listing 8.29 router.go: Using the repository


func NewRouter(db *repository.GameRepository) chi.Router {
    r := chi.NewRouter()

    r.Post(api.NewGameRoute, newgame.Handler(db)) #A
    r.Get(api.GetStatusRoute, getstatus.Handler(db))
    r.Put(api.GuessRoute, guess.Handler(db))

    return r
}

How do we test this? We can only pass a concrete repository to our Handler
function. As soon as we use a real external database, this means we need to
spin up an instance and connect to it to run unit tests. That is absolutely not
sustainable. Let's abstract it with an interface.

The NewGame endpoint only needs to add a game to the repository, nothing
else. We can actually prevent it from doing anything else by defining a
minimal interface.

Listing 8.30 newgame/handler.go: Minimal interface

type gameAdder interface { #A
    Add(game session.Game) error
}

// Handler returns the handler for the game creation endpoint.
func Handler(db gameAdder) http.HandlerFunc { #B
    return func(w http.ResponseWriter, _ *http.Request) {
        // ...
    }
}

The router's db variable automatically implements this little interface, and
now it becomes easy to create a stub with one single method for the unit test.
Adapting the test requires only 2 changes.

Listing 8.31 newgame/handler_test.go: Stubbing the repository

func TestHandle(t *testing.T) {
    handleFunc := Handler(gameAdderStub{}) #A

    req, err := ...

    // ...
    handleFunc(recorder, req) #B

    assert...
}

type gameAdderStub struct { #C
    err error
}

func (g gameAdderStub) Add(_ session.Game) error {
    return g.err
}

You can adapt this logic to the other two endpoints, as sketched below: GetStatus
only needs a finder, and Guess needs to call two methods.
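As a sketch, the two minimal interfaces could look like this. The name gameGuesser matches the one used later in this chapter; gameFinder is our own name for the getstatus interface.

// getstatus only needs to read a game.
type gameFinder interface {
    Find(id session.GameID) (session.Game, error)
}

// guess needs to read the game and to save it back after playing.
type gameGuesser interface {
    Find(id session.GameID) (session.Game, error)
    Update(game session.Game) error
}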

Now before we rejoice, there is one thing: remember how our server accepts
requests in different goroutines and treats them concurrently? Writing into a
map is not thread-safe: it means that if two different routines write in the
same map, we cannot guarantee which one will win, if any. To fix this
problem, we can use a concept we saw in the previous chapter, a mutex. The
goal is to avoid concurrently accessing the map and to ensure the sanity of
our server.

8.4.3 Add mutex to the repository

The motivation is the same as seen in the previous chapter: we want to
protect our repository - a map, here - from concurrent accesses. Let's keep it
simple and use a sync.Mutex in the Add(), Find() and Update() methods in
internal/repository/memory.go. Reminder: a mutex should be placed as
close as possible to a thread-unsafe variable. We theoretically could have a
mutex on the server, preventing two requests from being handled at a time -
but that would completely defeat the purpose of having a microservice.
Imagine if your favourite web site was only accessible to a single person at a
time! On the other hand, if the storage solution was already thread-safe - a
database that, for instance, would only accept a single query at a time - we
wouldn't need a mutex at all.

First, we need to add the mutex next to the resource we want to protect, the
storage map, inside the GameRepository structure.

Listing 8.32 memory.go: Add mutex to GameRepository

// GameRepository holds all the current games.
type GameRepository struct {
    mutex   sync.Mutex #A
    storage map[session.GameID]session.Game
}

Then we are able to access the mutex from the receiver on each method. Here
is the code for the Add method.

Listing 8.33 memory.go: Call Lock and Unlock mutex on Add()

// Add inserts for the first time a game in memory.
func (gr *GameRepository) Add(game session.Game) error {
    log.Print("Adding a game...")

    // Lock the reading and the writing of the game.
    gr.mutex.Lock() #A
    defer gr.mutex.Unlock() #B

    _, ok := gr.storage[game.ID]
    if ok {
        return fmt.Errorf("%w (%s)", ErrConflictingID, game.ID)
    }

    gr.storage[game.ID] = game

    return nil
}

You can now update the other methods by yourself and run the tests! If the
tests pass, your code is safely committed and you’ve had a good glass of clear
water, let’s play!

8.5 Adapting the Gordle library


From chapter 5, or from our repository, copy the gordle library. It is far more
complex than what we need: it wraps the whole session. We need to refactor
it and simplify it for our needs here.
What should it do? We know that we need the following use cases:

Create and return a new Gordle game;


Accept a guess and return the feedback.

We could choose to have the Gordle library be in charge of the number of
attempts a player has, but we decided to have this logic in the session. Let's
split the responsibilities:

Table 8.2 Package requirements

Package session's Game:
· Stores the number of attempts left
· Stores the list of attempts and feedbacks
· Is able to play a guess by calling Gordle

Package gordle's Game:
· Can create a game using a corpus for its solution
· Accepts a guess and computes the feedback
· A feedback can tell whether a game is over

Now we need to separate the concerns of the session and gordle packages.
The gordle library does not need an ID, but the session does. The library
does not need the previous guesses. It must tell the status of the game.

8.5.1 API of the library

Considering our previous decisions about the distribution of responsibilities,
we want to be able to create a game and play. This is what we need to
expose:

var g gordle.Game = gordle.NewGame(corpus)

var feedback gordle.Feedback = g.Play(guess)

var solution string = g.ShowAnswer()

var won bool = feedback.GameWon()

That should cover it. Add a fmt.Stringer on the feedback, and we are good.

More design questions: who is responsible for telling us how many attempts
are allowed, and how many are left? Who is responsible for reading the
corpus file, and for picking a random word from it? This is open to
discussion: we could decide that the library takes the solution as a string
parameter; after all, it does not need anything else. On the other hand, we
could say that any use of the library requires this behaviour, so we might as
well expose it. On a third hand (?), we could create a corpus-reading package,
independent and testable.

In our case, because we copied most of the logic from a previous chapter, we
chose option 2, which keeps the old code together until we prove that it
should be refactored further. Therefore, we expose a ReadCorpus function
that returns a list of strings, and this is what the NewGame function takes to
randomly select a word. Note that New taking a single solution simplifies the
tests, because we don't have to go through randomisation.

We didn’t want to weigh this book with lots of copy-pasted code from a
previous chapter. You can find the resulting simplified package in our
repository, or play around to see what you need. Have fun going through the
exercise of reducing code yourself. For the sake of continuity, and to make
sure that we are working with the same code base, here is the final API of our
package:

Listing 8.34 API of the package

$ go doc internal/gordle
package gordle // import "learngo-pockets/httpgordle/internal/gordle"

const ErrInaccessibleCorpus = corpusError("corpus can't be opened") #A
const ErrInvalidGuess = gameError("invalid guess length") #B
func ReadCorpus(path string) ([]string, error) #A
type Feedback []hint
type Game struct{ ... }
func New(corpus []string) (*Game, error) #B

Reading the doc shows that option 3, having a corpus reader somewhere else,
would have made this API easier to understand. Feel free to refactor this way.

If you are happy with the test coverage of the library, it is time to commit and
use it.
8.5.2 Usage in the endpoints

At this point, we have everything we need to write the logic of the endpoints.
We have already created a function to isolate that logic in each of the three
endpoints, so we just need to fill this function up in each location.

The first thing to do is to keep a gordle.Game in our domain: if we add a field
of this type to session.Game, we can use the field to play.

Listing 8.35 session/game.go: Using gordle.Game in the session

// Game contains the information about a game.
type Game struct {
    ID           GameID
    Gordle       gordle.Game #A
    AttemptsLeft byte
    Guesses      []Guess
    Status       Status
}

With this new type ready (and tests passing), we can complete the endpoints.

NewGame

Here, we need to create a game, generate a random ID for it and save it, then
return it.

Why random? Incremental IDs are a terrible security flaw, as anyone can
create a game and play around to mess with other people's games (note
that authentication is mentioned later in this chapter).

We could use an integer. As we mentioned before, generating a random
integer can be done quite fast with the math/rand package, with rather poor
randomisation, or with better distribution but at a higher cost with the
crypto/rand package.

There are other alternatives, like the Universally Unique Identifier (UUID), for
which Google's library is the most commonly used in Go, or the Universally
Unique Lexicographically Sortable Identifier (ULID). We picked this last one,
and we will be using the generator library found here: github.com/oklog/ulid.

If you are using a relative path to the corpus, keep in mind that it is resolved
at run time, relative to the directory from which the binary is executed - not
relative to the source file where the path is written, and not necessarily
relative to main.go.

Using the embed package

Go's standard library comes with a very peculiar package: embed. This
package implements one very useful feature: it allows regular files' contents
to be included in the source code of our program. In order to achieve this, we
use a specific instruction sent to the compiler at compilation time - such
instructions are called directives, or pragmas. They come in the form of a
comment starting with something like //go:{something}. Directives are also
commonly used to flag specific parts of the code for code analysis tools such
as linters. The syntax for our need is the following:

//go:embed corpus/english.txt
var englishCorpus string

This englishCorpus variable needs to be defined outside of any function
(placing it inside a function would make the compiler treat the directive as a
regular comment - and ignore it).

In order to use this, we first need to import the embed package. However, as
we're not going to use any constant, variable, type, or function from that
package, your IDE might remove this import line completely, and it wouldn't
be wrong to do so - after all, why import a package if we don't use it?

There is a trick that can be used here: we can force the import of a package
by aliasing it to _ (underscore). This way, the package won't be dropped and
will be available where we import it. Importing a package as _ is a neat trick
that is usually done when we need to call the init functions of some
libraries. Here, it ensures that the embed package will properly load the
contents of the file located at the path corpus/english.txt; this location is
relative to the .go source file containing the directive. We have now loaded
the contents of the file into a variable.
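Putting the pieces together, here is a minimal sketch of this alternative approach - assuming the corpus file sits in a corpus/ directory next to this source file; the corpusWords helper is hypothetical and not part of the book's code:

package newgame

import (
    _ "embed" // blank import: we only need the package for the //go:embed directive

    "strings"
)

// The embedded path is relative to the directory containing this source file.
//go:embed corpus/english.txt
var englishCorpus string

// corpusWords splits the embedded file into a list of words.
func corpusWords() []string {
    return strings.Fields(englishCorpus)
}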
Listing 8.36 newgame/handler.go: Endpoint logic

func createGame(db gameAdder) (session.Game, error) {
    corpus, err := gordle.ReadCorpus("corpus/english.txt") #A
    if err != nil {
        return session.Game{}, fmt.Errorf("unable to read corpus: %w", err)
    }

    game, err := gordle.New(corpus)
    if err != nil {
        return session.Game{}, fmt.Errorf("failed to create a new gordle game")
    }

    g := session.Game{
        ID:           session.GameID(ulid.Make().String()), #B
        Gordle:       *game,
        AttemptsLeft: maxAttempts, #C
        Guesses:      []session.Guess{},
        Status:       session.StatusPlaying,
    }

    err = db.Add(g) #D
    if err != nil {
        return session.Game{}, fmt.Errorf("failed to save the new game")
    }

    return g, nil
}

We chose to keep a constant for the maximum number of attempts, but if
your corpus has words of different lengths, you can be more creative and
derive this maximum from the length of the word, the difficulty settings,
the Elo rating of the player, or any other variable.

Additional optimisation: here we are reading the corpus every time the
endpoint is called, which looks like a waste of resources. Ideally, we would
load it on startup, deal with any error at that point (e.g. file not found), and
fail to start if we have nothing: if the service cannot access any list of words,
it might as well not start at all. A sketch of this approach follows.
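Here is a sketch of that startup-loading approach. Note that passing the corpus through NewRouter (and down to the handlers) is an extension of the signatures we defined earlier, not the book's final code; adapt it to your own types:

func main() {
    corpus, err := gordle.ReadCorpus("corpus/english.txt")
    if err != nil {
        // Refuse to start if we have no list of words at all.
        log.Fatalf("unable to read corpus: %s", err)
    }

    db := repository.New()

    err = http.ListenAndServe(":8080", handlers.NewRouter(db, corpus))
    if err != nil {
        panic(err)
    }
}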

Now that we have access to the solution, we can also update the API adapter
to add the WordLength , and other fields that we may have left out so far.
Additionally, this hard-coded path to the corpus will become a pain as soon
as we start testing.

Test it with regular expressions:

How do we test this? There is only one pitfall: randomisation. We cannot
expect the output of this createGame function to be identical when it is called
multiple times.

In this situation, it is important to determine what we want to test. We could
use a trick: passing an ID generator interface as a parameter, using one that
generates a fixed value in tests and a ULID in real life. We can also agree that
all we need to assert is that the ID is composed of that many alphanumeric
characters, and if so we are happy. Whether it is our job to check that
consecutive calls return different IDs is also arguable: we can trust that the lib
does it - but can we trust that we will always call the lib?

To validate that "the ID is composed of that many alphanumeric characters",
the trick is to use regular expressions. Look into the regexp package: there
are a lot of options.

When testing the handler itself, we can replace the ID with a known string.
For this, we isolate the generated ID in the JSON output and replace it using
strings.Replace .

We need to look for "id":"<somenumbersandletters>" in the output body,
so the regular expression we will match against is
`.+"id":"([a-zA-Z0-9]+)".+`. Let's break down this regular expression:

.+ : one or more (+) of any (.) characters, followed by:
"id":" : the literal string composed of a double quote, the letter i, the
letter d, another double quote, a colon, and a double quote, followed by:
([a-zA-Z0-9]+) : a captured block - that's what the parentheses
represent. The parentheses themselves aren't matched; the group matches
one or more (+) letters or digits (characters in the ranges a to z, A to Z,
or 0 to 9), followed by:
".+ : a double quote, followed by one or more (+) of any (.) characters.

The whole expression is enclosed in backticks `, which is how we avoid
having to escape double quotes in a Go string.

Listing 8.37 newgame/handler_test.go: Replace ID in JSON output

// idFinderRegexp is a regular expression that will ensure the body contains an id field
// with a value that contains only letters (uppercase and/or lowercase) and/or digits.
idFinderRegexp := regexp.MustCompile(`.+"id":"([a-zA-Z0-9]+)".+`) #A

id := idFinderRegexp.FindStringSubmatch(body) #B
if len(id) != 2 { #C
    t.Fatal("cannot find one id in the json output")
}

body = strings.Replace(body, id[1], "123456", 1) #D

assert.JSONEq(t, testCase.wantBody, body) #E

The expected body contains the known string, so we can now use
assert.JSONEq or some equivalent. If the ID does not match the expected
format, FindStringSubmatch will not find a match and will return an empty
result; the length check then makes the test fail.

Now, when we are testing the createGame function itself, we do not get an
encoded body. We can use FindStringIndex instead, which finds the
location of the leftmost match of the regular expression in a string. If the
index is there, we're good. This is wrapped by testify's Regexp function.

Listing 8.38 newgame/handler_test.go: Validating ID with a regexp

func Test_createGame(t *testing.T) {
    corpusPath = "testdata/corpus.txt"

    g, err := createGame(gameCreatorStub{nil}) #A
    require.NoError(t, err)

    assert.Regexp(t, "[A-Z0-9]+", g.ID) #B

    assert.Equal(t, uint8(5), g.AttemptsLeft) #C
    assert.Equal(t, 0, len(g.Guesses)) #C
}

Regular expressions are extremely powerful, but also very hard to understand
when you don’t know what you are looking at. Whenever you write one, do
not expect the next maintainer (including yourself) to find it easy to parse:
add a comment to tell them what it is looking for. Systematically.

Check that your tests are passing and properly covering your code, and you
can move on to the second endpoint.

GetStatus

Here we only need to call the DB and return the game. That’s it. And, of
course, deal with any error.

Ah. How do we deal with the errors? How do we know if the game was not
found or if there was another unexpected error (e.g., a connection error in the
case of a real database)? We want to return a Status Not Found if the
game doesn't exist, but an Internal Error otherwise.

Fortunately, the repository exposes a specific error against which we can
check; a sketch of how such an error can be declared follows.
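As a sketch, such sentinel errors can be declared in the repository package. The book may use a dedicated error type instead, but plain errors.New values also work with errors.Is; the exact messages are ours:

package repository

import "errors"

var (
    // ErrNotFound is returned by Find when no game exists for the given ID.
    ErrNotFound = errors.New("game not found")
    // ErrConflictingID is returned by Add when the ID is already in use.
    ErrConflictingID = errors.New("conflicting game ID")
)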

Listing 8.39 getstatus/handler.go: Dealing with errors

game, err := db.Find(session.GameID(id)) #A
if err != nil {
    if errors.Is(err, repository.ErrNotFound) { #B
        http.Error(w, "this game does not exist", http.StatusNotFound)
        return
    }

    log.Printf("cannot fetch game %s: %s", id, err)
    http.Error(w, "failed to fetch game", http.StatusInternalServerError)
    return
}

Note that here we choose not to bubble up the errors: the http.Error
message doesn't contain the err value. Indeed, this would expose the
internals of our service to clients, and that is rarely a good idea. The words
sent back along with the error hide the true error's details, so we need
to log it for debugging purposes.

Guess
Finally, let’s play!

Here we need to fetch the game, play the word, save the result and return it.

Possible errors:

What can possibly go wrong? Problematic scenarios could be: the game is
not found, the storage is not responding, the proposed word is not valid, or
the game is over, either lost or won. One thing we haven't added yet is a
sentinel error in the domain (the session package) to tell us that no, you
cannot play a game that you have already won (or lost). A minimal sketch of
it follows.
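A minimal sketch of that sentinel error, in the session package (the exact wording of the message is ours):

package session

import "errors"

// ErrGameOver is returned when a guess is played on a game that is already won or lost.
var ErrGameOver = errors.New("game over")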

Let's first see what the function doing all the work must achieve. We will
omit the errors first, then think about each situation.

Listing 8.40 guess/handler.go: Endpoint logic

func guess(id session.GameID, guess string, db gameGuesser) (session.Game, error) {
    game, err := db.Find(id) #A

    if game.AttemptsLeft == 0 || game.Status == session.StatusWon { #B
        return session.Game{}, session.ErrGameOver
    }

    feedback, err := game.Gordle.Play(guess) #C

    game.Guesses = append(game.Guesses, session.Guess{ #D
        Word:     guess,
        Feedback: feedback.String(),
    })

    game.AttemptsLeft -= 1 #E

    switch { #F
    case feedback.GameWon():
        game.Status = session.StatusWon
    case game.AttemptsLeft == 0:
        game.Status = session.StatusLost
    default:
        game.Status = session.StatusPlaying
    }

    err = db.Update(game) #G
    return game, nil
}

That’s a long function. In each case, what should the error be?

When we look for the game, we can simply wrap the error with some context.
There is not much context here, but it is an example. This error will always be
of type repository.Error, and this is where the repository.ErrNotFound can
be returned.

game, err := db.Find(id)
if err != nil {
    return session.Game{}, fmt.Errorf("unable to find game: %w", err)
}

We can also get an error while playing, and it will be a gordle.Error .

feedback, err := game.Gordle.Play(guess)
if err != nil {
    return session.Game{}, fmt.Errorf("unable to play move: %w", err)
}

Finally, we call the storage again:

err = db.Update(game)
if err != nil {
    return session.Game{}, fmt.Errorf("unable to save game: %w", err)
}

Because errors are values, we know what happened when we receive an error,
and we can adapt the status code and message of the HTTP response.

Listing 8.41 guess/handler.go: Using domain.Game

game, err := guess(id, r, db)
if err != nil {
    switch {
    case errors.Is(err, repository.ErrNotFound):
        http.Error(w, err.Error(), http.StatusNotFound)
    case errors.Is(err, gordle.ErrInvalidGuess):
        http.Error(w, err.Error(), http.StatusBadRequest)
    case errors.Is(err, session.ErrGameOver):
        http.Error(w, err.Error(), http.StatusForbidden)
    default:
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
    return
}

Don't forget to test: there is no trick, but there are a lot of edge cases.

You have a functioning service! Congratulations. Do you want to write a
front-end client now? Playing the game with curl only is not user-friendly…
You can even write a CLI in Go that calls the service.

Before we leave you, though, we need to list a few warnings about the
shortcuts we took.

8.6 A few security notions and improvements


When writing a server, we need to keep in mind that the objective is to
deploy it somewhere so that it can serve requests. However, we don’t always
know how many users a single server will be in charge of, nor how many
queries they’ll send. When it comes to security, we have to bend our minds
and think of all “unhappy” paths. We need to identify what could possibly go
wrong, and prevent worst-case scenarios from happening. In this case, it’s
sometimes useful to imagine ourselves as people wanting to find flaws and
break the system.

8.6.1 Limiting the number of requests served at a time

One of the most frequent attacks against a server is called a DDoS
(Distributed Denial of Service) attack - a process whose goal is to overload
the server with too many requests. This kind of attack doesn't extract any
information from the server, but it causes it to crash, which makes it
unavailable for other users. This attack is usually performed by having lots of
computers send thousands of requests to a server. Each request will cause the
server to allocate memory to process it - a parallel task (thread or routine),
some stack allocation, etc. Since servers have limited resources, at some
point, a vast number of requests will cause these resources to be depleted,
and the server won't be able to handle anything at all, in the best case.
Fortunately for us, chi offers a simple way to control how many concurrent
requests can be processed simultaneously on our web service with the
Throttle function. This function takes, as its parameter, the maximum
number of requests allowed to be processed simultaneously. It should be
called as we declare the mux.

r := chi.NewRouter()
r.Use(middleware.Throttle(10))
r.Post(...

Of course this number must be declared as a constant at the very least, but
ideally configurable.

8.6.2 User authentication

We have mentioned that clients calling the service should be authenticated so
that we can make sure a game created by one player will only be played by
that one person. It can also help in limiting the number of requests per user
and therefore the load on the service.

How do we authenticate a user? One very common protocol is OAuth (Open
Authorization), often referred to by its latest version, OAuth 2.0. It is used to
authenticate a user via an authentication server. Typically, authenticating a
user would be the job of one dedicated service, which would deal with all the
security and provide a signed token. Our Gordle service would then receive
this token via an HTTP header, validate it, decode it, and find the user
identifier in it.

Depending on your needs, you can decide at which level you want to
authenticate the clients: you can require identification for each player, or
authenticate each application connecting to your service. In the second
option, a website and a mobile app would have different IDs and keys, and
each would have a request rate limit, regardless of the number of players that
they serve.

Authentication and the security problems that it solves could be the subject of
a full chapter, but that is beyond the scope of our book. Read OAuth2 in
Action, by Justin Richer and Antonio Sanso, for more.
8.6.3 Logging

In this chapter, we used the native and very basic log package. This is a
terrible idea in production: it mangles the log output in a concurrent
environment, typically a service. There are lots of great logging libraries out
there that protect your output and offer formatting options. With Go 1.21
came a standard-library structured logger in the form of the slog package -
we recommend using it over the log package.

8.6.4 Error formatting

When we are returning an error, it is a simple string. Good practice teaches us
that when an endpoint's output is formatted in JSON, the errors should also
be formatted. Take an example:

{
"error": "game over"
}

This way, your clients can use the same decoder whatever the status code of
the response.
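A hypothetical helper - not part of the book's code - could centralise this. It assumes the encoding/json and net/http packages are imported, and handlers would call it instead of http.Error:

// writeJSONError writes the error message as a JSON body with the given status code.
func writeJSONError(w http.ResponseWriter, message string, code int) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(code)
    _ = json.NewEncoder(w).Encode(map[string]string{"error": message})
}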

8.6.5 Decode query parameters

As we saw before, query parameters are used in APIs as optional parameters.
They always come as pairs of key and value. They are used as filters to
specify the resource to be created, updated, or deleted.

Syntax

Query parameters are appended after the path and separated from it by a
question mark ?. To use them, you put the key first, followed by the equals
sign =, and then the value with which you want to filter. In case we have
multiple parameters, we add an ampersand & between each pair. The
whole list of key=value pairs after the ? sign forms the query string.

Let's implement an example

We want to add the possibility of choosing the language in which we want to
play Gordle. To do so, let's add the language as a query parameter. The URL
to create the game will look like this:

http://localhost:8080/game?lang=en

In this example, the key is lang and the value is en.

First, we define the constant for the key, and we declare it next to the path
parameter constant:

Listing 8.42 internal/api/http.go: Add query parameter key as constant

const (
    // GameID is the name of the field that stores the game's identifier
    GameID = "id"
    // Lang is the language in which Gordle is played.
    Lang = "lang" #A
    // ...
)

That was the interesting and very easy part. We will now see how to decode
the query parameter from the request. All the query parameters are retrievable
from the URL thanks to the net/url package's URL.Query() method, which we
can access from the request.URL field in our handlers. If you check the return
type, you will see that it is a url.Values, which exposes (amongst others) a
Get(key string) string method.
$ go doc url.Values
// Values maps a string key to a list of values.
// It is typically used for query parameters and form values.
// Unlike in the http.Header map, the keys in a Values map
// are case-sensitive.
type Values map[string][]string
...

So let's use the Get method on the Values to retrieve the language, in the file
internal/handlers/newgame/handler.go.

Listing 8.43 newgame/handler.go: Update handler to retrieve query parameter

// Handler returns the handler for the game creation endpoint.
func Handler(db gameAdder) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) { #A
        lang := r.URL.Query().Get(api.Lang) #B
        if len(lang) > 0 {
            // TODO create a game in the chosen language
            fmt.Println(lang) #C
        }

        // ... (rest of the handler, unchanged)
    }
}

Congratulations, you now know pretty much everything about REST APIs
and how to decode all kinds of parameters!

8.7 Summary
A web service is a program that continuously listens to a port and knows
what to do with requests based on a set of handlers.
We can use http.NewServeMux() to create a multiplexer that will route
requests, based on the URL they were sent to, to specific handlers.
Otherwise, use open-source libraries.
A handler's task is to fill an http.ResponseWriter. It should Write to
it, and, sometimes, set headers with WriteHeader.
The default status code set by ResponseWriter.Write is
http.StatusOK. If we want to return a different code, we need to call
WriteHeader before we start writing the contents of the response via
Write.
If an error happens in an endpoint, the handler should not return this
error. If you really want to know what’s gone wrong, a log is the place
where the error is last displayed.
Some directory names have specific meanings in Go. Files inside
testdata/ will not be compiled by go build or go run. Files inside
internal/ will not be import-able by other modules. vendor/ is a name
that should be avoided for historical reasons.
Types that we define for the API should only be used in the endpoint
handlers. The rest of the time, we should be using types of our domain.
Domain - or model - types shouldn’t be visible from outside the
service's module. Usually, they hide inside an internal directory.
A “repository” - oftentimes called “repo” - offers access to the data,
which can be stored in a physical database, in memory, or in any form at
all.
It is always a good idea to check whether your code is thread-safe. Write
tests and make use of mutexes when necessary.
The embed package can be used to load the contents of a file (or
directory) at compilation time. This is useful, for instance, if you want to
keep your SQL queries in .sql files, or when, as we did, you want to load
a set of hardcoded values.
Some, if not most, open-source packages use semantic versioning. Go
will natively use the latest v1.x.y version of a package if nothing is
specified. In order to enforce using v4.m.n, one should explicitly run
"go get path/to/package/v4" and use import "path/to/package/v4" in
the .go files.
Regular expressions are a very powerful way to match patterns. You
will find yourself using them in various situations and validating
randomised values is one of them. As it can be quickly unreadable, do
not forget to explain in a comment any regular expressions you write.
REST API (REpresentational state transfer) or RESTful API is a set of
constraints defining an interface between two systems. REST APIs
communicate through HTTP and can exchange data through JSON,
HTML or even plain text.
HTTP statuses are useful to communicate precisely what happened on
the server side to the client. An HTTP status code is a three-digit number
from 1xx to 5xx; it could mean everything went well, like 200, or that a
resource was not found, such as the well-known 404. Returning the proper
code means that the client can better understand what happened. When in
doubt, return 500. When at a loss, return 418.
9 Concurrent maze solver
This chapter covers

Spinning up goroutines as we need them
Communicating between different goroutines
Loading and writing a PNG image
Manipulating images and colours using a Go library
Writing a GIF image
Using linked lists

The oldest representation of a maze found by archaeologists, from
palaeolithic times, was engraved in a piece of mammoth ivory. In Indo-
European mythology, mazes are often associated with engineers, like
Daedalus in Greece. They are also used as a symbol for the difficult path of a
life towards a God figure in O'odham tradition in North America, in India in
the Chakra-vyuha style or in Europe on the floors of mediaeval churches to
represent the way to salvation.

Solving a maze has been an interesting engineering exercise for ages,
including physically, with an autonomous robotic mouse (see Micromouse
competitions), or virtually, using graph theory. There are countless
algorithms, each optimised for different constraints: are there loops? Is the
target inside the maze, like a treasure, or on another side, like a liberating
way out? Are there curves or only right-angled corners? In the case of
multiple possible paths, do we need the shortest or the fastest, or the one that
goes through a collection of bonus stars?

In this chapter we want to find the treasure in a maze, starting from an
entrance position. Each intersection we meet raises a question: which branch
should we explore - to the left, to the right, or straight ahead? We'll answer
that question by exploring all branches concurrently, spinning up goroutines
each time we have an intersection.

Requirements
Find the path through a maze that has no loops (there is only one path to
reach any pixel).
The maze is a PNG RGBA image.
The command-line tool should take an input image’s path and write
another image with the pixels from entrance to the treasure highlighted.
As a bonus, it should also generate a GIF image of the exploration
process.

9.1 Maze generation


If we want to solve mazes, we need mazes to solve. Because we are
developers, we decided to quickly code a maze generator as a side project.

A handful of recognised algorithms for maze generation are available online.
By now you should have enough understanding of the Go language to be able
to code one yourself. Note that ours is available in the book's repository, in a
builder folder, for those most in a hurry. Here are a few important points for
a maze generator.
a maze generator.

A maze can be represented as a grid in which elements can be either walls or
paths. Two special path elements can be found in the maze: the entrance and
the treasure. The goal of the maze is to find a list of positions in the grid that
links the entrance to the treasure. These positions need to be adjacent -
teleportation isn't allowed in a maze.

Below you can see an example of a small maze with a treasure corresponding
to an exit on the bottom edge.

Figure 9.1 Example of a small maze with a treasure (exit) on the edge
But the treasure does not have to be on an edge; the following figure shows
an example of a bigger maze with a treasure inside.

Figure 9.2 Example of a maze with a treasure inside

Since it’s nice to have a preview of the maze, we decided to encode ours as
an image. An image is a very convenient way of representing a two-
dimensional grid - it has a fixed size, and we can encode the information of
whether a grid element is a wall, a path, an entrance or a treasure by using
colour values. In the examples above, walls are painted black, while paths are
painted white.

9.1.1 What is an image?

In computer science, 2D images are mostly of two kinds - vector images and
bitmap images. Vector images are similar to mathematical entities -
regardless of how much you zoom in, lines have no thickness, points don’t
look bigger on your screen, etc. SVG (Scalable Vector Graphics) is a
common format for vector images.

On the other hand, bitmap images, also called raster images, contain a 2D
grid of picture elements. These picture elements, or “pixels”, each bear a
colour. When zooming in on a raster image, pixels are simply displayed larger.
Common formats for raster images are PNG (Portable Network Graphics)
and JPEG (Joint Photographic Experts Group). JPEG images offer lossy
compression, which means they will usually require fewer bytes to store the
information - but they might also modify the image while compressing it. The
information encoded at the pixel positions is usually colour - but sometimes,
we use pixels for something else, such as heat map or density (this is how
MRI uses images to represent internal tissues), or for palettes, where each
colour has a specific meaning (for instance, a map of the world in which each
country is represented with a different colour).

The choice of an image format

Since we want to encode a 2D grid, a raster image format seems perfectly
adequate. JPEG's lossy compression can result in a pixel's colour being
altered, and we want exact values to represent our walls, paths, entrance and
treasure. For this reason, we have decided to encode our image as a PNG
image.

Of course, Go has a package for image manipulation - and also has a package
for most common image formats.

$ go doc image
package image // import "image"

[...]

type Image interface{ ... }
func Decode(r io.Reader) (Image, string, error)
type Point struct{ ... }
type RGBA struct{ ... }
func NewRGBA(r Rectangle) *RGBA

$ go doc image.Point
type Point struct {
    X, Y int
}
A Point is an X, Y coordinate pair. The axes increase right and down.
One of the types we see in the image package is RGBA. This type offers the
RGBAAt(x, y) method, which allows us to retrieve the colour of a pixel at a
given position in the image.

9.1.2 Maze constraints

Remember the constraints we have on the initial version of the solver:

no loops - there is exactly one path from the entrance to any given point
of the maze
the generated image should be a PNG image using the RGBA colour
model

When writing the maze generator, you can also add a complexity constraint
on the length of the path from entrance to treasure to avoid straightforward
answers. In our implementation for example, that path - the solution - must
have a length of at least the height of the image plus its width.

We decided to use the following colours, but feel free to be more artistic and
colour-blind friendly:

Entrance - deep sky blue


Treasure - pink
Wall - black
Path - white

Now we have a generated maze, we can start solving it!

9.2 Maze solver


To solve the maze, we will start by opening the maze image we generated,
and we will explore the possible paths, recording them at the same time. By
the end, we will have a first hacky version, finishing when the treasure is
found.

9.2.1 Setup
As usual, start by setting up your module and creating a main.go file at the
root. This project being a simple command-line tool, we can have the
main.go file at the root of the module, and the rest will live in the internal
folder.

The first step to solving the maze will be to open it.

9.2.2 Loading the maze image

As we mentioned, we want the input PNG to be passed as an argument to the
tool, along with the output path:

$ maze-solver maze_10x10.png solution.png

This means the first thing our main() will do is read these 2 arguments.

Listing 9.1 main.go: Reading the arguments

package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    if len(os.Args) != 3 { #A
        usage()
    }

    inputFile := os.Args[1] #B
    outputFile := os.Args[2]

    log.Printf("Solving maze %q and saving it as %q", inputFile, outputFile)
}

// usage displays the usage of the binary and exits the program.
func usage() {
    _, _ = fmt.Fprintln(os.Stderr, "Usage: maze_solver input.png output.png")
    os.Exit(1) #C
}
You can already run it and check various scenarios.

Open the input image

We then open the first image, containing the maze. What kind of errors can
happen at this point? There could be no file at all, or it could be a non-PNG
image. In each case, we want to print an explicit error.

The operation can be summarised in one sentence, so, of course, we put it in
one function. It takes the path string and returns an image of the *image.RGBA
type. We want a pointer because, eventually, we'll want to modify the image
when we write the path to the treasure. We write that function into a new file,
one that will contain the file IO operations.

func openMaze(imagePath string) (*image.RGBA, error)

Let's call our new function openMaze. It will check that the file exists, open it
- don't forget to defer the call to Close - and decode the PNG. The last step is
done by calling Decode from the image/png package, which takes an
io.Reader and returns an image.Image, which is an interface.

Now, unfortunately, the image.Image interface offers a single method to
access a pixel's value - At(x, y) - which returns the color.Color of the pixel
at the intersection of the x-th column and the y-th row. The color.Color
returned by At from a regular image.Image needs to be converted to the
RGBA colour model to be usable. Instead of having to call At(x, y).RGBA()
every time we want to access a pixel's value, we can use an image.RGBA - a
type that offers a very convenient method, RGBAAt(x, y). For this, we'll
simply try to type assert the image.Image we decoded from the file into an
image.RGBA variable.

Go offers image.RGBA , but doesn’t offer image.RGB . For this reason, it’s
simpler to consider RGBA images in this chapter, even though we only chose
colours with 100% opacity.

Create a file imagefile.go next to main.go .

Listing 9.2 imagefile.go: Open the maze image


package internal

import (
    "fmt"
    "image"
    "image/png"
    "os"
)

// openMaze opens a RGBA png image from a path.
func openMaze(imagePath string) (*image.RGBA, error) { #A
    f, err := os.Open(imagePath) #B
    if err != nil {
        return nil, fmt.Errorf("unable to open image %s: %w", imagePath, err)
    }
    defer f.Close()

    img, err := png.Decode(f) #C
    if err != nil {
        return nil, fmt.Errorf("unable to load input image from %s: %w", imagePath, err)
    }

    rgbaImage, ok := img.(*image.RGBA) #D
    if !ok {
        return nil, fmt.Errorf("expected RGBA image, got %T", img) #E
    }

    return rgbaImage, nil
}

We have the image. Don’t forget to call the function in your main, handle the
error properly, and you can test manually what happens in different scenarios.

In the code above, there are 2 error cases that are easy to test automatically:
unable to open the file, and unable to load the PNG image. Testing whether
the function is unable to type assert it as a RGBA image requires a PNG
image that wasn’t encoded as a RGBA image. We’ve provided such an image
in our repository: mazes/rgb.png . You should expect an output like this:

go run . mazes/rgb.png solution.png
2023/10/16 18:26:28 INFO Solving maze "mazes/rgb.png" and saving it as "solution.png"
ERROR: expected RGBA image, got *image.Paletted
exit status 1

Finally, we need to handle the os.Open error for the case where the file
exists but we can't open it. This case is quite rare - on Unix, it requires
execution rights on a directory and no read rights on a file in that directory.
Still, it may happen, and we will be happy to know if it does.

Write a test, and then we can set up the solving part.

9.2.3 Add the solver

Solver structure

Solving the maze will be done by a dedicated object, one that can be
constructed by giving it the image and that carries a Solve() method. Why?
The object will be able (later) to hold settings such as the colours of the path,
walls, entrance, treasure and solution (the path from entrance to treasure). As
you will quickly see, it will also hold the channels for communication
between the goroutines and the solution at the end. For now, let's keep it
simple.

The Solver is the heart of the tool and would benefit from living in a
dedicated package: internal/solver. Remember - packages inside internal
can't be used by anyone other than your module. Create a file solver.go in
the internal/solver package.

Listing 9.3 solver.go: The Solver structure

package solver

import "image"

// Solver is capable of finding the path from the entrance to the treasure.
// The maze has to be a RGBA image.
type Solver struct {
    maze *image.RGBA
}

Before we go on, let’s define the API of this object. It needs to solve the
maze and write the solution image. That makes 2 operations, so that makes 2
exposed methods.
Listing 9.4 solver.go: Solve API

// Solve finds the path from the entrance to the treasure.
func (s *Solver) Solve() error {
    return nil
}

The SaveSolution method lives in imagefile.go since it is in the image
manipulation scope.

Listing 9.5 imagefile.go: Save solution API

// SaveSolution saves the image as a PNG file with the solution path highlighted.
func (s *Solver) SaveSolution(outputPath string) error {
    return nil
}

New function

Actually, our implementation is highly tied to the image package. Why not
delegate the opening of the PNG image to a New function? We will need that
function anyway.

Move the file imagefile.go containing the openMaze function, along with its
test, to the solver package, and call the function in a new function called New;
do not forget to remove the call from the main function. New takes the path as
a parameter and returns a pointer to a Solver and an error. Note that, in a file,
we tend to write New functions and other such constructors after the structure
definition.

Listing 9.6 solver.go: New Solver

// New builds a Solver by taking the path to the PNG maze, encoded in RGBA.
func New(imagePath string) (*Solver, error) {
    img, err := openMaze(imagePath)
    if err != nil {
        return nil, fmt.Errorf("cannot open maze image: %w", err)
    }

    return &Solver{
        maze: img,
    }, nil
}

Your current tree should look like this:

$ tree .
├── go.mod
├── internal
│ └── solver
│ ├── imagefile.go
│ └── solver.go
└── main.go

At this point you can even finish writing the main function by building your
Solver and calling its public API in the proper order, as in the sketch below.
Deal with the various errors in the way you prefer, but don't forget that CLI
tools are expected to return a status code of 1 when there is an error, via
os.Exit(1). If you have a doubt, you can have a look at the code in the
09-maze_solver/2_solver/2_3_add_solver/main.go folder.
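Here is a possible sketch of that main, reusing the usage function from Listing 9.1 and relying only on the public API we defined (New, Solve, SaveSolution). The import path of the solver package depends on your module name, so adjust it:

package main

import (
    "log"
    "os"

    "maze-solver/internal/solver" // hypothetical module path: use your own
)

func main() {
    if len(os.Args) != 3 {
        usage()
    }

    inputFile := os.Args[1]
    outputFile := os.Args[2]

    log.Printf("Solving maze %q and saving it as %q", inputFile, outputFile)

    s, err := solver.New(inputFile)
    if err != nil {
        log.Printf("unable to create solver: %s", err)
        os.Exit(1)
    }

    if err = s.Solve(); err != nil {
        log.Printf("unable to solve maze: %s", err)
        os.Exit(1)
    }

    if err = s.SaveSolution(outputFile); err != nil {
        log.Printf("unable to save solution: %s", err)
        os.Exit(1)
    }
}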

Run the tests that you have written, commit and have a cup of tea, the next
section is the heart of the project.

9.3 Let’s go exploring!


We’ve loaded the maze. Our next objective is to find the path from entrance
to treasure, but first we need to find the entrance.

9.3.1 Find the entrance

The first step is quite straightforward, but of course we would not be here if
there were nothing to learn on the way.

Colour palette

In order to encode information, the maze generator used pixels to store
specific values at specific positions. In our maze, these values will represent
walls, paths, entrance, and treasure. We want to compare the colours of the
pixels against these specific values. RGBA colours are expressed as
structures in the image/color package, and structures cannot be constants -
which means our reference colours for paths, walls, entrance, and treasure
can't be constants. So how do we refer to them? We have a few solutions.

One is to declare the colours as global variables in the solver package. Well,
we don't like global variables, because they can be modified by mistake,
undetected by any test, and then the whole behaviour becomes completely
unexplainable. It can be a good first step, but we prefer not to have it in
production.

An interesting alternative is to create a palette structure to hold all the
different values. It would be possible to give the tool a palette configuration
file with these colours, but this is beyond the scope of our chapter. We don't
need to expose the structure as long as the main package doesn't need to
change the values.

Create a file named palette.go in the internal/solver package.

Listing 9.7 palette.go: Declare the list of colours

// palette contains the colours of the different types of pixels in our maze.
type palette struct {
    wall     color.RGBA
    path     color.RGBA #A
    entrance color.RGBA #B
    treasure color.RGBA
    solution color.RGBA #C
}

A palette structure populated with the values that we picked can be returned
by a defaultPalette() function, with the advantage over global variables
that nothing can change the values that a function will return. Unfortunately,
it would create a new structure every time you need it, allocating precious
memory that needs to be garbage-collected. It can become costly as soon as
we start exploring bigger mazes.

Listing 9.8 palette.go: Default colours function


// defaultPalette returns the colour palette of our maze.
func defaultPalette() palette {
    return palette{
        wall:     color.RGBA{R: 0, G: 0, B: 0, A: 255}, #A
        path:     color.RGBA{R: 255, G: 255, B: 255, A: 255}, #B
        entrance: color.RGBA{R: 0, G: 191, B: 255, A: 255}, #C
        treasure: color.RGBA{R: 255, G: 0, B: 128, A: 255}, #D
        solution: color.RGBA{R: 225, G: 140, B: 0, A: 255}, #E
    }
}

What we did instead was to save these in the Solver structure, as settings to
solve the picture. It makes sense because another solver with another picture
can use a different set of colours.

Listing 9.9 solver.go: Solver with palette

type Solver struct {
    maze    *image.RGBA
    palette palette
}

For the moment you can simply set these colours to some default values in
the New function of the solver package using the function we just defined,
defaultPalette() .

Let’s follow the signs to find the entrance.

Pixel definition

We will be going through the maze by exploring pixels, identified by their
coordinates on the 2-dimensional image. As we'll need to navigate in these
2D grids, let's use image.Point that Go provides.

type Point struct {
    X, Y int
}

One thing we can easily anticipate with a pixel is that we will need to find its
open neighbours: the pixels bearing the Path colour that are orthogonally
connected to it. This will be easier if the pixel can give us the coordinates of
its own neighbours. Because of how the maze is implemented, we don’t want
to include diagonally-adjacent neighbours.

Create a file named neighbours.go in the internal/solver package and
write the neighbours function.

Listing 9.10 neighbours.go: Coordinates of a pixel

package solver

import "image"

// neighbours returns an array of the 4 neighbours of a pixel.
// Some returned positions may be outside the image.
func neighbours(p image.Point) [4]image.Point {
    return [...]image.Point{
        {p.X, p.Y + 1}, #A
        {p.X, p.Y - 1}, #B
        {p.X + 1, p.Y}, #C
        {p.X - 1, p.Y}, #D
    }
}

Slice or array?

In this function, we returned an array. [...]image.Point is equivalent to
[4]image.Point, which is an array (the length of the array is computed at
compilation). Using an array is a minor optimisation, as it will more likely be
allocated on the stack than in the heap.

Nothing in this function guarantees that the neighbours are inside the image.
For instance, two neighbours of the top left corner, at position {0, 0}, are
outside the image. We could add a safety net here, or we could require that
none of the edges of the image should be explored.

An alternative is to consider how we'll use these neighbours. In our code
later, we'll want to check whether they represent an explorable section of the
maze, or a wall, or the treasure, or some other information that we encode.
For this, we'll have to look at the value returned by RGBAAt(position) for
each neighbour of a pixel. After checking its implementation, it is clear that it
returns a zero value when the position is outside the image. This means it is
safe to have neighbours return points that would not be within the bounds of
our image. A small sketch of how this property can be used follows.

Another thing that the API doesn't guarantee is the order of the neighbours.
We could start from the top and go clockwise, or be wild and just scramble
them. This means the test is the perfect occasion to use stretchr's testify
framework and its assert.ElementsMatch function.
Find the pixel of the entrance

Go back to the solver.go file and create the function findEntrance.
We'll have to scan the whole image to find the one pixel that has the entrance
colour. In order to check each pixel's value, a common practice in image
processing is to follow the row-major order with two nested loops: an outer
loop that iterates over the rows of the image, and an inner one that iterates
over the columns of each row - just as some languages such as English or
Tifinagh write text from the leftmost position to the rightmost, and then move
to the next line, starting from the leftmost again.

The reason for this specific pattern is that image formats tend to store pixel
values in “scanline” format, where horizontally adjacent pixels of the image
are stored in adjacent memory locations. Understandable when we remember
that most image format developers are English speakers.

Most of the time, our maze's first pixel will be at position (0, 0) - in the top
left corner. But if we're looking at a subsection of an image, our "top left"
corner might be at another position. Here, we can access our maze's bounds
via the Bounds() method on the image.RGBA type. This returns a rectangle
whose Min and Max fields define the bounding box of our image.

Listing 9.11 solver.go: Find entrance

// findEntrance returns the position of the maze entrance on the image.
func (s *Solver) findEntrance() (image.Point, error) {
    for row := s.maze.Bounds().Min.Y; row < s.maze.Bounds().Max.Y; row++ {
        for col := s.maze.Bounds().Min.X; col < s.maze.Bounds().Max.X; col++ {
            if s.maze.RGBAAt(col, row) == s.palette.entrance {
                return image.Point{X: col, Y: row}, nil #A
            }
        }
    }

    return image.Point{}, fmt.Errorf("entrance position not found")
}

Back in the Solve method, we can call this to know where to start.

Listing 9.12 solver.go: Call to findEntrance in Solve

// Solve finds the path from the entrance to the treasure.
func (s *Solver) Solve() error {
    entrance, err := s.findEntrance()
    if err != nil {
        return fmt.Errorf("unable to find entrance: %w", err)
    }

    log.Printf("starting at %v", entrance)

    return nil
}

What if there is no entrance? Make sure to cover all kinds of situations in
your tests. Maybe we want a maze to have a single entrance. If you have
trouble writing tests, you can find our test cases in the file
3_exploring/3_1_find_entrance/internal/solver/solver_internal_test.go
and all the maze images used for the scenarios are located under
internal/solver/testdata.

Figure 9.3 Maze with entrance and next pixel


We’ve stepped into the maze at the entrance and we now have explored one
pixel. The next pixel(s) now need to be explored so that we can reach the
treasure!

9.3.2 Communicating new possible paths

How do we solve a maze? There are lots of optimised algorithms available,
but most of them will offer an iterative approach - turn left at every corner, go
to the location that is both nearest to the entrance and unexplored, etc. Each
time an intersection is met, a decision must be made - should we start with
the left side, then the right side, and then straight ahead? In this chapter, we
answer this question by saying: why not try all of them at the same time? This
calls for parallel programming, which in Go is implemented with goroutines.
Every time we find a branching in the path, we'll want to continue in one
of the possible directions and start a goroutine with the other(s). Using
goroutines means we can delegate the exploration of these other directions
and focus on our current branch till we either reach the treasure or a dead end.

Using goroutines can increase the performance - making finding the solution
faster - but this isn’t guaranteed. Starting goroutines takes time, and
communicating data with them adds on top of that. Usually, goroutines aren’t
necessary for very quick tasks. In our case, each goroutine has an
undetermined scope: we don’t know when it could end, so we might as well
give it its chance. What is certain is that using goroutines increases CPU
usage.

In the example below, we are looking for the Treasure (μ). Goroutine
Daedalus (δ) starts in A5, then goes to B5, and needs to branch. A second
goroutine, Theseus (θ), picks up at B6 while δ continues in B4, C4, etc. As
long as our maze contains no loop, Daedalus won’t meet Theseus as they
explore the maze, and this means Daedalus doesn’t need to know about
Theseus at all.

Figure 9.4 Exploration by different goroutines

From there, both goroutines keep exploring and spinning up new explorers.
The exploration to reach the treasure could look like this (each goroutine's
explored path is represented by a different Greek letter).

Figure 9.5 Exploration representing a possible sequence of events

How does the δ goroutine communicate that a new exploration should be
started from B6? And what should be in charge of listening to that
notification? In other words, how do goroutines communicate? Via channels,
of course.

Communication between goroutines is the purpose of channels.

After finding the position of the entrance, our Solve function is in charge of
initiating the exploration of our maze, starting from there. As soon as a new
path should be explored, we want the solver to start the exploration of that
branch. Each explorer will be in charge of notifying new branches to our
solver with a channel, which will be listening to these notifications. We
understand from this that we need two new methods on our solver - the first
one will explore a path - we can call it explore - and the second one will be
in charge of listening to branches - let’s call it listenToBranches . Our solver
initiates the exploration of the maze by sending a message to the channel that
the listenToBranches method is listening to.

The function listenToBranches reads the very first message and creates a
goroutine, the one we called Daedalus (δ), for the path starting at A5.
Daedalus looks at the neighbours of A5: only B5 is eligible. It integrates B5
to its explored path and checks the neighbours of B5. B4 and B6 are eligible
candidates for exploration. Our exploring goroutine, Daedalus, sends the path
to B6 to the channel and keeps exploring B4, C4, and so on. Meanwhile, the
listener reads the message sent by Daedalus and spins a new goroutine,
Theseus (θ), which goes on to C6, C7 and so on, until it finds a dead end and
finishes. Meanwhile, Daedalus has sent the path up to E2 to the channel, and
the path up to E4, and the listener has spun two new goroutines, λ and φ.

Of course in this scenario, take the grammatical tenses with a dash of salt,
because depending on your architecture and the random reassignments
decided by the CPU, the future, the present and the past can vary from one
run to the next.

Figure 9.6 The sequence of events in Solve


Implementing it will help us understand the details better.

Create the channel

What does Daedalus need to communicate in order to start an autonomous


goroutine that will be able to explore the path from B6 onwards?

Let’s ask another question: what does Theseus need to know? What does the
goroutine that gets to the treasure need to know? The path so far. That’s all.
The path through pixels that have been explored to reach this point, which we
can express as a linked list of image.Point.

Create a file internal/solver/path.go and add the path structure.

Listing 9.13 path.go: Using a linked list to store the path so far

package solver

import "image"

// path represents a route from the entrance of the maze up to a position.


type path struct {
previousStep *path
at image.Point
}

Add a field to the Solver: a channel whose messages are pointers to a path.

Listing 9.14 solver.go: Add a channel to the Solver structure

type Solver struct {


maze *image.RGBA
palette palette
pathsToExplore chan *path #A
}

This is where goroutines will publish new paths to explore, and where the
listenToBranches method will listen in order to spin up new exploration
goroutines. That method will be added in 9.3.4: common sense dictates that
you cannot read from a channel into which nothing has been written yet (we
will see that it’s actually not so simple).

9.3.3 Record the explored path

Considering that the aim of the program is to tell us how to go from one point
to another, we need to know, whenever we explore a new pixel, how we got
here.

Behaviour of each goroutine

Let’s write the explore function, which takes a path as its parameter and
explores it. It will go on until it either finds a dead end or the treasure and
will publish to the channel any branch it does not take. This function is what
each goroutine will do.

For a first version, if we find the treasure, let’s only print a message and stop
exploring with a return . We will come back to that later.

We chose to put this function in a dedicated file explore.go in the solver


package: it is important enough, and we will regularly come back to debug it.
Listing 9.15 explore.go: Explore one path

package solver

import (
"image"
"log"
)

// explore one path and publish to the s.pathsToExplore channel


// any branch we discover that we don't take.
func (s *Solver) explore(pathToBranch *path) {
if pathToBranch == nil {
// This is a safety net. It shouldn't be needed, but when it is, at least it's there.
return
}

pos := pathToBranch.at #A

for { #B
// We know we'll have up to 3 new neighbours to explore.
candidates := make([]image.Point, 0, 3)
for _, n := range neighbours(pos) { #C
if pathToBranch.isPreviousStep(n){ #D
// Let's not return to the previous position
continue
}
// Look at the colour of this pixel.
// RGBAAt returns a color.RGBA{} zero value if the pixel is outside the bounds of the image.
switch s.maze.RGBAAt(n.X, n.Y) { #E
case s.palette.treasure:
log.Printf("Treasure found at %v!", n)
return
case s.palette.path: #F
candidates = append(candidates, n)
}
}

if len(candidates) == 0 {
log.Printf("I must have taken the wrong turn at position %v.", pos)
return
}

// See below
}
}

// isPreviousStep returns true if the given point is the previous position of the path.
func (p path) isPreviousStep(n image.Point) bool {
return p.previousStep != nil && p.previousStep.at == n
}

Note that, so far, neighbours can only be Wall, Path, Entrance, or Treasure.
Entrance has already been skipped because it was the previous pixel, and
there is nothing we can do about a Wall. This is why we only have 2 cases in
the switch - and we didn’t add an empty default case: we either find the
treasure, or another position to explore. Anything else is not interesting. It is
important to note that, as we look for eligible neighbours, we don’t want to
go back on our steps and return to the previous position. This could lead to an
endless creation of goroutines and a crash of the program. We’ll explore
ways of preventing this in section 9.6.1.
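
By the way, the neighbours helper used in Listing 9.15 isn't listed in this chapter. As a minimal sketch (the version in the book's repository may differ), it could simply return the four adjacent points, in the above, below, right, and left order of preference that we rely on later in this chapter:

func neighbours(pos image.Point) []image.Point {
	return []image.Point{
		{X: pos.X, Y: pos.Y - 1}, // above
		{X: pos.X, Y: pos.Y + 1}, // below
		{X: pos.X + 1, Y: pos.Y}, // right
		{X: pos.X - 1, Y: pos.Y}, // left
	}
}

Remember that position (0, 0) is the upper-left pixel, so Y decreases when going up.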

Then, there are two cases to consider: either we have no next pixel to explore, in which case it was a dead end, or we do have next pixels to explore - which is when we'll have to send messages to the listening goroutine. In the case of a dead end, we print a log message to understand what is going on, but there is nothing more to do. We exit the loop and the goroutine ends its execution.

Branching out

What happens when we do have one or more pixel candidates for


exploration? We keep one for ourselves (we decided it’d be the first,
candidates[0] ) and send the path to the others on the channel. Then we
continue the exploration one step further.

Listing 9.16 explore.go: Keep exploring

for _, candidate := range candidates[1:] { #A


branch := &path{previousStep: pathToBranch, at: candidate}
s.pathsToExplore <- branch #B
}
pathToBranch = &path{previousStep: pathToBranch, at: candidates[0]} #C
pos = candidates[0]

This is a rather long function. We could refactor by extracting logical pieces


of code. Loops are usually a good candidate, because they can be summarised
in one sentence, which is logically a unit. Let’s have a look at our options:

Can we extract the inside of the big infinite loop? There would be too
many variables to return: the next position, the next “previous” position,
a signal about whether to exit, and possibly an error.
Can we extract the first part, where we look for candidates? This is
where we chose to exit in the case of a success. Possible but not too
easy.
Can we extract the second part, where we look at the candidates? In
this situation, we also exit in the case of a dead end. We could pull out
the publishing loop, but would the code really become easier to
understand for 3 lines?

Let’s keep it this way and see whether writing a test is overly complicated or
not. As often, because we want to isolate the logic as much as possible, the
test is a bit more complicated than the code itself and requires at least the
same notions, so we will keep it for just a bit later. Don’t make it a habit,
though.

9.3.4 Wait for unexplored paths and start a goroutine

As we explore, we need a function that listens to the channel and starts a new
goroutine for each message in the channel. It doesn’t need to be complex: for
each message in the channel, call explore.

The keyword in Go for listening to all the messages published to a channel is


range , exactly like with slices or maps. Using range over a channel is
blocking, meaning that this function will wait for a message to be published
on the channel as long as close(s.pathsToExplore) wasn't called.

Listing 9.17 explore.go: Listen to the channel

// listenToBranches creates a new goroutine for each branch published in s.pathsToExplore.


func (s *Solver) listenToBranches() {
for p := range s.pathsToExplore {
go s.explore(p)
}
}

This is a very short implementation; it can work but it has a catch with
goroutines. We know when they start, but we don’t know when they end.
Here, we are not keeping track of the different goroutines (nor their amount),
which means the program can end while some of them are still running and
keep using memory and CPU. We will need to fix this before considering our
code correct.

But let’s first make our program work. The last thing we need to do in order
to kickstart the exploration is to publish the first message.

Start the first goroutine - buffers on channels

How do we start the first goroutine, the one we called Daedalus in our
example? We only need to publish the entrance to the channel and start
listening.

Let’s come back to the Solve function. It knows the position of the entrance
pixel. We can publish that.

s.pathsToExplore <- &path{previousStep: nil, at: entrance}

Don't forget to call the listening function listenToBranches after that and
try running the program. Do you get an error? You should. What we have is
this:

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send (nil chan)]:


learngo/09/maze/internal/solver.(*Solver).Solve(0x1400009c180)

We are trying to write to a nil channel and to read from it. Initialise it in the
New function, and try again.

pathsToExplore: make(chan *path),


Same problem. We have chosen an unbuffered channel and sending a
message to it before reading from it will keep causing a fatal error.

Unbuffered channels

A send operation on an unbuffered channel blocks the sending goroutine until another goroutine executes a corresponding receive on the same channel, at which point the value is transmitted and both goroutines may continue. The Go Programming Language, A. A. A. Donovan & B. W. Kernighan

We can’t write to an unbuffered channel before we start reading from it.


What we can do instead is give a one-value buffer to the channel. One value
is enough because after writing this value our main goroutine will listen
forever.

pathsToExplore: make(chan *path, 1),

Alternatively, if we had built an unbuffered channel with make(chan *path), the writing to that channel in the Solve function could have been done in a goroutine:

go func() { s.pathsToExplore <- &path{previousStep: nil, at: entrance} }()
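
To see the difference in isolation, here is a tiny, standalone sketch (not part of the solver) showing why a one-value buffer is enough to survive a send that happens before any receive:

package main

import "fmt"

func main() {
	// With an unbuffered channel, this send would block forever, because
	// nothing is receiving yet:
	//   ch := make(chan int)
	//   ch <- 1 // fatal error: all goroutines are asleep - deadlock!

	// A channel with a one-value buffer accepts a single send even before
	// anyone reads from it.
	ch := make(chan int, 1)
	ch <- 1
	fmt.Println(<-ch) // prints 1
}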

As you see if you have logs everywhere, a size of one is enough to get to the
solution of a small maze, but we still finish in a deadlock. As we said, the
main goroutine listens forever, even when all subroutines have stopped
publishing. Go is able to notice that and ends the execution with a deadlock
error after a little while. We need a way to tell the listening method to stop.

9.3.5 Stop listening, we found it: short version

When one goroutine finds the treasure, it needs to save the path leading to it
somewhere, and somehow tell all the other goroutines to stop looking, as well
as tell the listener to stop listening. We will start by implementing a quick
version, so that we can get something pretty as soon as possible, see its
limitations and find a better solution.

We took a small shortcut a few pages ago, at the point where we found the
treasure. At that point, we need to save the path of pixels somewhere where
the SaveSolution function can find it. But where? Most of the time, the
straightforward answer is good enough: we can put it inside theSolver .
Additionally, it can serve as a flag to tell different goroutines that the treasure
has been found and that they can stop looking for it.

Listing 9.18 solver.go: Add the solution

type Solver struct {


maze *image.RGBA
palette palette
pathsToExplore chan *path

solution *path #A
}

In the explore function, writing to the field is done in just one line. Let’s go
back to the switch:

Listing 9.19 explore.go: Save the solution

case s.palette.treasure:
s.solution = &path{previousStep: pathToBranch, at: n} #A
log.Printf("Treasure found at %v!", n))
return

Wait - any goroutine could be writing to s.solution . How do we make sure


that we don’t create a race condition? Let’s add a mutex to protect ourselves
against this. The mutex is a new field of the Solver structure. Here's how we
use it in our case:

Listing 9.20 explore.go: Save the solution with a mutex

case s.palette.treasure:
s.mutex.Lock()
defer s.mutex.Unlock() #A
if s.solution == nil {
s.solution = &path{previousStep: pathToBranch, at: n}
log.Printf("Treasure found at %v!", n))
}
return
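
The mutex field itself doesn't appear in the listings. Assuming a sync.Mutex field named mutex, to match the s.mutex calls above, the Solver structure would gain one more field (remember to import sync in solver.go):

type Solver struct {
	maze           *image.RGBA
	palette        palette
	pathsToExplore chan *path

	solution *path
	mutex    sync.Mutex // guards access to solution
}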
Now how do we tell other goroutines to stop? Let’s start by using this
solution field as a flag. Not a great solution, but a fast one. Since we’ll need
to check in several places whether the solution was found, we can write a
function:

Listing 9.21 explore.go: Stop listening to new messages

func (s *Solver) listenToBranches() {


for p := range s.pathsToExplore {
go s.explore(p)
if s.solutionFound() { #A
return
}
}
}

// solutionFound returns whether the solution was found.


func (s *Solver) solutionFound() bool {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.solution != nil
}

We have another infinite loop that could use a stop in the explore function.
We can change the infinite loop so that it stops when the solution is found.

Listing 9.22 explore.go: Stop exploring a path

func (s *Solver) explore(pathToBranch *path) {


//...

for !s.solutionFound() { #A
candidates := ...

Let’s run it. Has it stopped deadlocking? Depending on the complexity of our
input maze, maybe yes, maybe no, because our solution is hacky. Before we
fix it properly, it’s time to start automating the test.

9.3.6 Test one goroutine’s logic

We can test this on an image that is only 4 pixels wide and 5 high: we need a
2x3 grid plus some mandatory walls on 3 sides.

Here are a few test cases:

Only a path to the treasure


A maze with one path leading to a dead end
A maze with two branches
A maze with a cross
A maze with a treasure and a dead end

Figure 9.7 A few cases of mazes

If we send the first pixel as parameters, we can count the number of branches
that have been published to the channel. For this, we’ll create a Solver , but
we won’t listen to its channel. At the end of the run, each branch will have
been published to the channel. We can check the number of messages inside
a channel with the len built-in function, just as we would for slices or maps.
Since we won’t be listening to the channel, we need to build it with enough
capacity to store all the messages that will be published there.

Create a test fileexplore_internal_test.go and the test function


TestSolver_explore . Feel free to copy from the book repository
internal/solver/testdata folder.

Listing 9.23 explore_internal_test.go: Test explore function

func TestSolver_explore(t *testing.T) {


tests := map[string]struct {
inputImage string #A
wantSize int #B
}{
"cross": {
inputImage: "testdata/explore_cross.png",
wantSize: 2,
},
// ...
}
for name, tt := range tests {
name, tt := name, tt

t.Run(name, func(t *testing.T) {


t.Parallel() #C

maze, err := openMaze(tt.inputImage) #D


require.NoError(t, err)

s := &Solver{
maze: maze,
palette: defaultPalette(),
pathsToExplore: make(chan *path, 3), #E
}

s.explore(&path{at: image.Point{0, 2}}) #F

assert.Equal(t, tt.wantSize, len(s.pathsToExplore)) #G


})
}
}

Feel free to add more possibilities. You can also write a second test function for the cases where len(s.pathsToExplore) > 0, listen to all the messages, and check that we get what we expect. Be careful not to rely on the order of the neighbours sent by the neighbours() function, because it is not guaranteed by the implementation. Currently, the behaviour is to always continue exploring in this order of preference: above, below, right, and left. Imagine a future developer reordering the neighbours and breaking this seemingly unrelated test.
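
As a sketch of what such a follow-up test could look like - the expected points below are purely illustrative and depend on how the cross maze in testdata is drawn - we can drain the channel and compare positions as a set, so that the test survives a reordering of the neighbours:

func TestSolver_explore_branches(t *testing.T) {
	maze, err := openMaze("testdata/explore_cross.png")
	require.NoError(t, err)

	s := &Solver{
		maze:           maze,
		palette:        defaultPalette(),
		pathsToExplore: make(chan *path, 3),
	}

	s.explore(&path{at: image.Point{X: 0, Y: 2}})

	// Drain the channel and collect the position of each published branch.
	got := make([]image.Point, 0, len(s.pathsToExplore))
	for len(s.pathsToExplore) > 0 {
		branch := <-s.pathsToExplore
		got = append(got, branch.at)
	}

	// The expected positions are illustrative; adapt them to your test image.
	assert.ElementsMatch(t, []image.Point{{X: 1, Y: 1}, {X: 1, Y: 3}}, got)
}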

Why are we not using New? We want to control the size of the channel in the
situation of our test, because we are not reading from it. A standard Solver
has an unbuffered channel; here we want a buffer of 3 paths for all the
potential candidates.

We found the treasure, now we need to show how to get there!

9.4 Show the result


We have a solution by now, printable on the terminal, but it’s not exactly
human-friendly. The program takes a path as a parameter for the output image; writing it should present no particular traps.

Listing 9.24 imagefile.go: Save the output image

// SaveSolution saves the image as a PNG file with the solution path highlighted.
func (s *Solver) SaveSolution(outputPath string) (err error) {
f, err := os.Create(outputPath) #C
if err != nil {
return fmt.Errorf("unable to create output image file at %s", outputPath)
}
defer func() {
if closeErr := f.Close(); closeErr != nil {
err = errors.Join(err, fmt.Errorf("unable to close file: %w", closeErr))
}
}()

stepsFromTreasure := s.solution
// Paint the path from last position (treasure) back to first position (entrance).
for stepsFromTreasure != nil {
s.maze.Set(stepsFromTreasure.at.X, stepsFromTreasure.at.Y, s.palette.solution)
stepsFromTreasure = stepsFromTreasure.previousStep #A
}

err = png.Encode(f, s.maze) #B


if err != nil {
return fmt.Errorf("unable to write output image at %s: %w", outputPath, err)
}

return nil
}

There is a small piece of code that needs explaining here. We start by


creating a file, which should always be followed by a defer f.Close() .
However, Close() returns an error, and if we don’t check it, we lose it. So,
how can we return both the error that could happen if the deferred call to
Close() fails and any other error that Encode could return? If we have a close
look at the signature of the method, we see that we named the output error.
This allows us to override err in the deferred anonymous function and return
an error that would be both the error returned by Encode and the one returned
by Close .
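
As a standalone illustration of this pattern - a generic sketch, not part of the maze code - the same idea applies to any function that creates a file and must not lose the error returned by Close:

package main

import (
	"errors"
	"fmt"
	"os"
)

// writeAll shows how a named return value lets a deferred function report an
// error from Close without losing the error from the body of the function.
func writeAll(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return fmt.Errorf("unable to create %s: %w", path, err)
	}
	defer func() {
		if closeErr := f.Close(); closeErr != nil {
			// Join whatever err already holds with the error from Close.
			err = errors.Join(err, fmt.Errorf("unable to close %s: %w", path, closeErr))
		}
	}()

	_, err = f.Write(data)
	return err
}

func main() {
	if err := writeAll("hello.txt", []byte("hello")); err != nil {
		fmt.Println(err)
	}
}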

Run the program on a small maze.

$ go run . mazes/maze50_50.png sol.png


2023/10/18 09:56:40 INFO Solving maze "mazes/maze50_50.png" and saving it as "sol.png"
2023/10/18 09:56:40 INFO starting at (0,25)
2023/10/18 09:56:40 INFO Treasure found: (18,0)!

You should observe a new file in your project, sol.png, which looks
somewhat like this:

Figure 9.8 Example of a resolved maze 50px-wide

Then try something bigger. It still ends with a deadlock if your maze is
complex enough, doesn’t it?

The reason is as follows: when one goroutine finds the solution and saves it
in our Solver object, in the next nanoseconds, the listener stops listening to
the channel, but some other explorers are still looping through their
neighbours and publishing to that same channel. As soon as that channel has
received as many messages as its capacity allows, writing to it causes a
deadlock - and the program exits.

How do we fix this?

9.5 Notify the treasure is found


We have a working solution that is flaky for two reasons: the most obvious
one, a technical issue, is that we deadlock when we work on big enough
mazes. The other and more urgent reason, a design issue, is that we are not
waiting on all of our goroutines before we end the program! Since our maze
has only one solution, there is no point for other goroutines to keep searching
once we’ve found the treasure. Fixing the latter will solve the former - if the
goroutines stop exploring, they’ll stop publishing new paths to explore and
the channel won’t be full, and we won’t have deadlocks any more.

9.5.1 Keep track of all the goroutines

The listenToBranches method, responsible for listening to the


communication channel and starting goroutines, is the one that knows how
many goroutines it started, and how many are still running. It should be the
one keeping track of them and waiting for them to finish. The easiest way to
keep track of goroutines is to use a sync.WaitGroup.

Add a wait group to the listenToBranches function. Every time a message is


received, it should add one tracker to the wait group before spinning a new
goroutine. That goroutine should then tell the wait group when it is done with
its work. We don’t want to pollute the explore method with this logic or
spread it across multiple functions, so we can make good use of an
anonymous function to call both explore and Done .

Listing 9.25 explore.go: Stop listening to new messages

func (s *Solver) listenToBranches() {


wg := sync.WaitGroup{}
defer wg.Wait() #A

for p := range s.pathsToExplore {


wg.Add(1) #B
go func(path *path) {
defer wg.Done() #C
s.explore(path)
}(p) #D
if s.solution != nil {
return
}
}
}

This way, we can be sure that the program will only end when all the
goroutines are finished.
You might have noticed that we used an anonymous function with a
parameter. Why did we do this? The reason is both important and a bit
complicated. All versions of Go before 1.22 suffer from the way for loops are handled: versions 1.21 and prior would overwrite the variable used to iterate - in our case, the p pointer. While this would be fine if we didn't run
concurrent activities in our loop’s body, things are different here.

Consider the following piece of code:

for p := range s.pathsToExplore {


wg.Add(1)
go func() {
defer wg.Done()
s.explore(p)
}()
}

This is equivalent to the following, as of Go 1.21:

var p *path
for {
p = <-s.pathsToExplore
wg.Add(1)
go func() {
defer wg.Done()
s.explore(p)
}()
}

Before Go 1.22, we would have no guarantee that the value passed to the first
call to explore is indeed the value we first read from the channel - it might
have been overridden by the second value by the time the code execution
reaches explore. There are two common and useful tricks to prevent that
value from being overridden. The first one is the one we presented above - by
passing the pointer p as a parameter of our anonymous function that starts the
goroutine (rather than somewhere within the goroutine), we ensure that it
isn’t overridden: indeed, the for loop can’t read the next message from the
channel as long as the goroutine hasn't been started.

The other common trick that is frequently used is to simply manually copy the iteration variable inside the loop. Most of the time, the name of the copy is also the name of the iteration variable, which might seem strange.

for p := range s.pathsToExplore {
	p := p
	wg.Add(1)
	go func() {
		defer wg.Done()
		// use the local p
		s.explore(p)
	}()
	...

We aren't simply replacing p by itself - here, we are shadowing the for loop variable with a safe copy of it that we can send to the goroutine. This trick is very commonly found when using test tables, in which we usually call t.Run() on a function that has a t.Parallel() in it.

We are now sure that our listenToBranches function won't return while some goroutines are still exploring, thanks to the call to Wait(). We don't
want the explorers to reach dead ends (which they will, eventually). We also
don’t want them to start new goroutines at intersections - this would cost
CPU and memory for no reason. It’d be nice if we could kindly ask them to
stop exploring as soon as we know the treasure was found.

9.5.2 Send a quit signal

We already have a stop-exploration condition in our explore function: the


“infinite” for loop keeps exploring as long as the solution isn’t found. But
we don’t like this check on solution : the field is used both to save a value
and as a flag. This makes the code complex to understand, hard to refactor,
and it introduces a data race. There must be a better way to communicate
between goroutines that the job is done. Wait… Communication between
goroutines? A job for a channel, of course!

Add a quit channel: the select keyword

Let's replace the check of the value of solution in the listenToBranches


method. We’ll be expecting the explorer that found the solution to send a
message in a new channel. We want listenToBranches to read from that
channel in order to know that it should stop launching exploration goroutines.
However, our listenToBranches function is already listening to a channel -
how could it be listening to two at a time, if listening to one is blocking?
This is where the keyword select is useful in Go: it accepts several case statements (similarly to a switch), which can either be "read from a channel" or "write to a channel", and whichever is validated first gets to be executed - and the others are skipped. If several case statements are eligible at the same
time, Go will pick a random one.

However, most of the time we aren’t interested in only the first message
received from a channel - we want to process all of them. For this, we can use
the for-select combination, which allows us to listen to several channels at the
same time. It can be seen as an extension of the for msg := range
myChannel loop that we used to listen forever to one channel.

In our case, we can replace the for-range loop with a for-select loop, which
will have the same behaviour.

for {
select {
case p := <-s.pathsToExplore:
wg.Add(1)
go func(p *path) {
defer wg.Done()
s.explore(p)
}(p)
}
}

select also accepts a default entry, which is mostly used when there is something else than the select in the infinite for loop.
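
For illustration only - this snippet is not part of the solver - a default case lets a loop do something else instead of blocking when no message is ready:

package main

import (
	"fmt"
	"time"
)

func main() {
	messages := make(chan string, 1)

	go func() {
		time.Sleep(50 * time.Millisecond)
		messages <- "done"
	}()

	for {
		select {
		case msg := <-messages:
			fmt.Println("received:", msg)
			return
		default:
			// No message is ready: do some other work, then loop again.
			fmt.Println("nothing yet, doing other work")
			time.Sleep(20 * time.Millisecond)
		}
	}
}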

Let's add a channel to our Solver structure, which we will use to inform


listenToBranches , and later the explorer goroutines, that the solution was
found and that it’s time to stop. We can call it quit . Since this channel won’t
carry any meaningful messages, we can declare it as a channel of empty
structures: chan struct{} . Empty structures are a nice feature of Go, as they
are extremely lightweight (they use 0 bytes) and quick to create or copy. We
need to initialise the channel in the New function, when creating a Solver.

quit: make(chan struct{})

We don’t need a buffered channel, as we will listen to it in the


listenToBranches function. Let’s see what this looks like.

Listing 9.26 explore.go: Stop listening to new messages

func (s *Solver) listenToBranches() {


wg := sync.WaitGroup{}
defer wg.Wait()

for {
select {
case <-s.quit: #A
log.Println("the treasure has been found, stopping worker")
return
case p := <-s.pathsToExplore: #B
wg.Add(1)
go func(p *path) {
defer wg.Done()

s.explore(p)
}(p)
}
}
}

We’re listening to the quit channel - but we need to write a message to this
channel for it to be useful. Let’s do this in the explore method, when we find
the treasure.

Listing 9.27 explore.go: Notify that the solution was found

func (s *Solver) explore(pathToBranch *path) {


...
switch s.maze.RGBAAt(n.X, n.Y) {
case s.palette.treasure:
s.mutex.Lock()
defer s.mutex.Unlock()
if s.solution == nil {
s.solution = &path{previousStep: pathToBranch, at: n}
log.Printf("Treasure found at %v!", n)
s.quit <- struct{}{} #A
}

return
Let’s run this on a few mazes. Does it work? Since we’re working with
goroutines, nothing is absolutely deterministic, but we did have errors on our
side when running this. Indeed, as soon as our goroutine listening to the new
paths exits, we still have some goroutines trying to write in that channel - and
that’s a blocking action. Some of our explorers will go into deadlock mode,
trying to write to a channel that nothing reads. So far, we’ve just changed the
way we reach the same issue. But fortunately for us, there is hope.

To solve this, we need to make sure the explorers stop exploring as soon as
the solution is found. Could we read from the quit channel? Well, we could,
but there is a small conundrum: we don’t know how many explorers are still
running at this moment. And we would need to have one message per
explorer goroutine if we want each explorer to quit when we find the treasure.
Which means we would need to broadcast a potentially huge number of
messages in the quit channel to ensure every explorer receives its own.

But there is a more interesting way of solving this issue. We can close the
quit channel. A closed channel is a channel to which writing is impossible,
but reading is still possible. A closed channel can’t be reopened, it’s a final
action to take. The interesting part of a closed channel is that we can always
read from it, and this will always return a value - either one that was
previously written in there, or the zero value of the type of messages the
channel transmits if there are no written values left to read.

In order to know whether the value read from a channel was written there in
the first place, or if it’s kindly returned because we’re trying to read from a
closed channel, we can use the second value returned by the <- operator:

msg, ok := <- myChannel
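
As a tiny standalone sketch, unrelated to the solver, this is what repeated reads from a closed channel look like:

package main

import "fmt"

func main() {
	quit := make(chan struct{})
	close(quit)

	// Reading from a closed channel never blocks: we receive the zero value,
	// and ok is false because the value wasn't actually sent by anyone.
	_, ok := <-quit
	fmt.Println(ok) // false

	// The read can be repeated, from any number of goroutines.
	_, ok = <-quit
	fmt.Println(ok) // false
}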

For our business, we don’t really need to care where the empty structure
comes from, we only want to try and read from that quit channel. Indeed, if
we make sure nothing writes to this channel, the only moment when we could
read from it will be when it’s closed. And several goroutines can try to read
from a closed channel without stealing each other’s message - which means
several goroutines can now know if it’s time to stop working.

Let’s replace the s.quit <- struct{}{} line with a close(s.quit) . This
doesn't change the code in the listenToBranches method.
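
After that substitution, the treasure case from Listing 9.27 reads roughly as follows - only the last line of the if block changes:

case s.palette.treasure:
	s.mutex.Lock()
	defer s.mutex.Unlock()
	if s.solution == nil {
		s.solution = &path{previousStep: pathToBranch, at: n}
		log.Printf("Treasure found at %v!", n)
		close(s.quit) // broadcast to every reader of s.quit that we're done
	}
	return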

But how can we best use this quit channel when exploring?

Stop explorers

The explore method performs two tasks: it is in charge of advancing through


the maze and of notifying our listener of paths it doesn’t explore. We would
like both of these tasks to be ended whenever the solution is found.

Let’s start by thinking about what’s happening with the pathsToExplore


channel once the solution is found. When the quit channel is closed by an
exploring goroutine, the listenToBranches function returns, which means
nothing is listening to the pathsToExplore channel any longer, which implies
we cannot write to that channel when the solution is found.

How do we make sure we only write there when allowed? We can’t first
check quit and then write to pathsToExplore - this would leave a tiny gap
during which another explorer could close quit :

select {
case <-s.quit:
log.Printf("I'm an unlucky branch, someone else found the treasure, I give up at position %v.", pos)
return
default:
// A goroutine could close quit between the line above and the line below
s.pathsToExplore <- branch
}

This solution isn’t secure enough because of that gap. Instead, we want the
same logic as we have in the listenToBranches method:

select {
case <-s.quit:
log.Printf("I'm an unlucky branch, someone else found the treasure, I give up at position %v.", pos)
return
case s.pathsToExplore <- branch:
//continue execution after the select block

}
In this piece of code, we first check whether quit is closed (it's the only case
when we can read a message from it). If it is, we return, otherwise, we
publish our new branch.

Listing 9.28 explore.go: Explore only if the treasure is not found

func (s *Solver) explore(pathToBranch *path) {


// ...

for _, candidate := range candidates[1:] {


branch := &path{previousStep: pathToBranch, at: candidate}
select {
// s.quit returns a zero value only when the channel was closed
case <-s.quit:
log.Printf("I'm an unlucky branch, someone else found the treasure, I give up at position %v.", pos)
return
case s.pathsToExplore <- branch:
// continue execution after the select block
}
}

Finally, we want to stop the exploring goroutines when the solution is found.
We can do this as the first operation of our infinite loop in the explore
method:

Listing 9.29 explore.go: First, check if the solution was found

func (s *Solver) explore(pathToBranch *path) {


for {
// Let's first check whether we should quit.
select {
case <-s.quit:
return #A
default:
// Continue the exploration.
}

It is a common pattern to dedicate a channel to communicate that everything


should stop. Do not hesitate to have a look at the full method in this book’s
dedicated repository, at
5_notify_treasure_found/5_2_send_quit_signal/explore.go .

You now have a working maze explorer, with a few known limitations. If you
want to extend it, we have a few ideas to present.

9.6 Visualisation
There are numerous ways to go further with this pocket project. One of the
issues we haven’t raised yet is that of mazes that contain loops. Imagine a
maze containing the following extract:

The four X-marked spaces are paths, but they all go around a “pillar” - a
piece of wall not connected to any other wall. At each intersection, a
goroutine would either stay close to the pillar or exit the room - but it would
still create a branch that would explore around the room. This means that
such a room would create an unlimited number of goroutines, something
very, very harmful for the wellbeing of our computers. We could ask for the
user to provide a maze with no loops, but we could also try and handle it as
part of the exploration.

Finally, we did solve the maze, but wouldn’t it be nice if we could also show
the intermediary steps? Here, we’ll animate our progression through the maze
and produce a nice GIF file of the exploration.

9.6.1 Overcome the loop constraint

We set ourselves a constraint in the beginning: the maze should never have
loops - if you see it as a graph, it is a tree, where each node only has one path
leading to it.

If we remove that constraint, there can be multiple possible paths to the


treasure. The goal of our chapter is not to find the shortest: we would need to
keep track of the distance of each pixel to the entrance by giving it a weight,
which could be achieved with increasing values of RGBA pixels.

What we want to try instead in this chapter is just to find one of the solutions
and avoid going through the same pixels multiple times.
Strategy

Since we don’t want a goroutine to explore pixels that were previously


explored - by either this goroutine or another one - we want to be able to treat
them as non-candidate neighbours of pixels being explored. For this, an easy
trick is to mark them as explored as we explore them. First, define an
explored value in the palette structure:

type palette struct {


...
solution color.RGBA
explored color.RGBA
}

And have our defaultPalette function return a value for this field (a different value from palette.path).

Then, in the explore function, all we have to do is paint the pixel we’re
exploring, and voilà!

Listing 9.30 explore.go: Stop exploring a path

func (s *Solver) explore(pathToBranch *path) {


if pathToBranch == nil {
// This is a safety net. It shouldn't be needed, but when it is, at least it's there.
return
}

pos := pathToBranch.at

for {
s.maze.Set(pos.X, pos.Y, s.palette.explored) #A
select {

From now on, a pixel that was explored will not be eligible in our search for candidates - as it won't be of the s.palette.path colour.

Let's run the program; you should see an image with the solution and the explored path coloured, like the image below:

Figure 9.9 Example of a maze solved with the explored pixels coloured blue
Implementation traps

But wait - we’re working with several goroutines, and we want each one to
modify the contents of a pixel in our image - this is a door wide open for race
conditions. Is this a problem in this scenario? One could argue that in this
case it isn't: whichever goroutine gets there first, the result will always be the same - each goroutine writes the same contents at that pixel, so we'd be fine with any overwritten or partially written value. But this is totally wrong.

Race conditions are undefined behaviour. Avoid them at all costs.

There is no such thing as a benign race condition. Even in a situation where an innocent developer could think "well, whichever goroutine writes first, the result will be the same", no, it will not, because undefined behaviour means that
the memory could be corrupted somewhere else, or the code could
accidentally trigger a bomb. We don’t know. It is undefined. The compiler
makes a lot of assumptions when turning our code into machine language,
and one of these assumptions is that there is no data race.

9.6.2 Animate the exploration

We know our solution reaches the treasure. We have some logs that tell us
which dead ends we managed to find. But this isn’t very visual, and since this
is a chapter about images, let’s make it more fun!

The objective of this section is to generate a list of frames as we progress


through the maze. Each frame should display the state of exploration at a
given moment. We'll use another image format for this - a GIF, Graphics Interchange Format - which can be used for animations (even though it wasn't the
initial design). Without having to debate on the pronunciation of this format’s
name (at least until the audiobook version of this chapter), it’s interesting to
know that a GIF can contain more than a single raster frame. We can encode
several frames within a single GIF file, and we can specify a duration for
each of them to appear when the GIF file is displayed. These durations are
expressed in hundredths of a second, as this is the closest a unit can get to our screen's refresh rate.

Now we’ve decided we want to show the state of the exploration, how do we
do it?

Adding frames to the GIF

Well, first, we need to be able to keep track of the pixels we’ve explored so
far - which is precisely what we did in 9.6.1. Second, we need to add the
frames - the image with its currently explored pixels - at specific moments of
our exploration. Let’s start by adding to our Solver structure a new field in
charge of holding the GIF using the type from the standard library
image/gif .

type Solver struct {


...
animation *gif.GIF
}

When do we want to take snapshots of our exploration? If we answer this


question with a time unit - such as every millisecond - we might face
different outputs depending on how fast the program runs on a computer.
Otherwise, it could be worth considering that we want to display the status
after 10 new pixels have been discovered. Although this would work, we’d
face severe problems as our maze grows. Suppose our maze contains 40% of
path pixels: on a 10 x 10 maze, there are about 40 path pixels to explore -
which would make a 4-frame GIF. However, on a 1000 x 1000 maze, there
would be about 400,000 pixels to explore - resulting in a 40,000-frame GIF.
Such a file would, first, be very heavy, and second, if we gave each frame
one hundredth of a second to be displayed, it’d take more than 6 minutes, in
the worst case, to display the exploration.

Instead, we can decide to go with a different approach: let’s decide that we


want our final GIF to be 30 frames long if we explore the whole maze. That’s
an arbitrary number, but it will make for an animation that won’t be too long.
This means we need to print the state of exploration after
total_explorable_pixels / 30 pixels were explored. We need to count all
explorable pixels for our animation, so let's write a function for that in a new
file, animation.go :

Listing 9.31 animation.go: Counting all explorable pixels

// countExplorablePixels scans the maze and counts the number


// of pixels that are not walls.
func (s *Solver) countExplorablePixels() int {
explorablePixels := 0
for row := 0; row < s.maze.Bounds().Dy(); row++ { #A
for col := 0; col < s.maze.Bounds().Dx(); col++ { #B
if s.maze.RGBAAt(col, row) != s.palette.wall { #C
explorablePixels++
}
}
}
return explorablePixels
}

In section 9.6.1, we added an operation when we encountered a new


unexplored pixel - we painted it. Here, we want to do something else when
we meet a new pixel. This calls for refactoring these actions into a single
method, on the Solver structure, that we can call registerExploredPixels.
This function will paint explored pixels and, depending on how many were
explored, it will also be in charge of adding the frame to our animation.
However, while painting a pixel with a colour doesn’t take too long, “adding
the frame to the animation” will mean copying the whole image, something
that might take a long time. We don’t want that copying to block any
exploration process, which means we want the explorers to asynchronously
send notifications that a new pixel is to be marked as registered. We wrote
this method in a file named
animation.go .

There are mostly two ways in Go to make asynchronous calls. The first one is
to make the call in a goroutine:

go s.registerExploredPixel(pos)
This is a perfectly valid option, but one has to ask themselves whether race conditions
could happen. Ultimately, this method will require the explicit use of a
mutex. But what did we say about communication between goroutines?

The second option, which we’ll use here, is to use a channel into which
explorers send pixels they want registered. This approach means we will have
our registerExploredPixel receive pixels from a channel. There is no need
for a mutex, as long as we process the pixels read from the channel one at a
time. Let’s add this channel to our Solver structure. Don’t forget to initialise
it in the New function.

type Solver struct {


...
exploredPixels chan image.Point
animation *gif.GIF
}
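
The New constructor isn't repeated in this chapter. Wherever it builds the Solver, its initialisations could now look like this sketch; apart from pathsToExplore, whose buffer we chose earlier, the buffer sizes and the animation initialisation are assumptions:

// Inside New: initialise every channel and the animation before returning.
// An unbuffered exploredPixels channel is an assumption; it works because
// registerExploredPixels reads from it continuously.
s := &Solver{
	maze:           maze,
	palette:        defaultPalette(),
	pathsToExplore: make(chan *path, 1),
	quit:           make(chan struct{}),
	exploredPixels: make(chan image.Point),
	animation:      &gif.GIF{},
}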

The explorer’s “infinite” for loop can be updated to either abort when the
quit channel was closed because the solution was found, or send a pixel for
registration and continue with the exploration.

Listing 9.32 explore.go: Registering pixels as explored

func (s *Solver) explore(pathToBranch *path) { #A


...
for {
// Let's first check whether we should quit.
select {
case <-s.quit:
return
case s.exploredPixels <- pos: #A
// Continue the exploration.
}
...
}

Now we can write the function responsible for registering explored pixels.

In order to know how often we should write a new frame, we define


totalExpectedFrames as the number of frames we want in the output gif,
let’s say 30 max. We won't get exactly 30, because we won't be exploring
every pixel. We then count the total number of explorable pixels and use the
for/select pattern to keep going until we are told to quit.

Every time we receive the position of a newly-explored pixel, we paint it,


increment the counter of explored pixels, and if we reached the threshold,
paint a new frame.

Listing 9.33 animation.go: Implement registerExploredPixels

// registerExploredPixels registers positions as explored on the image,


// and, if we reach a threshold, adds the frame to the output GIF.
func (s *Solver) registerExploredPixels() {
const totalExpectedFrames = 30

explorablePixels := s.countExplorablePixels() #A
pixelsExplored := 0

for {
select {
case <-s.quit: #B
return
case pos := <-s.exploredPixels: #C
s.maze.Set(pos.X, pos.Y, s.palette.explored) #D
pixelsExplored++
if pixelsExplored%(explorablePixels/totalExpectedFrames) == 0 {
s.drawCurrentFrameToGIF()
}
}
}
}

What is this drawCurrentFrameToGIF method? What does it do? How do we


paint the frame? First, if we have a look at go doc gif.GIF.Image, we notice that the GIF structure uses a slice of paletted images. This is a compression algorithm by which each colour used in the image is stored in a palette, and each pixel, instead of being encoded with the classic RGBA values, is encoded with the key of its colour in the palette. Palettes usually have fewer colours than the whole RGBA spectrum can offer - which sometimes leads to compression artefacts in resulting images. So, how do we create a paletted image? Well, it's quite straightforward: Go has an image/color/palette package. This package only offers two palettes - Plan9 and WebSafe (with a sweet mention of "early versions of Netscape Navigator"). Here, the choice is yours. We also have a decision to make - do we want our GIF animation to be the same size as our initial maze, or do we want it to have a fixed size? Using the same size as the input image is simpler, but it will make most GIFs too small, or too large. Having a frame of a different size than our maze will require pixel interpolation, as we'll see in a few lines. For the purpose of this chapter, we'll go with a constant width of 500 pixels, and a height that keeps the same ratio as the input image, for each frame:

const gifSize = 500


frame := image.NewPaletted(image.Rect(0, 0, gifSize,
gifSize*s.maze.Bounds().Dy()/s.maze.Bounds().Dx()), palette.Plan9)
Using the image/color/palette package in our code will cause a conflict! Indeed, we already have a type called palette in our package - it defines what colours the walls and the paths are expected to be. We can easily resolve this conflict by aliasing the import.

Aliasing imports

In Go, it’s sometimes useful to alias an import. Here, we’ll use import plt
"image/color/palette". When aliasing imports, it’s best to use an alias that
resembles the original package name to keep the code clear.

We’ve created an empty canvas, let’s draw the current state of the explored
maze into it. Unfortunately, Go’s image/draw package doesn’t allow for
scaling images - and therefore doesn’t allow for any interpolation
whatsoever. Instead, we’ll have to usegolang.org/x/image/draw , its more
versatile version. This package offers a golang.org/x/image/draw.Scaler interface, which shrinks or expands a rectangle section of an input image to a rectangle section of an output image. golang.org/x/image/draw exposes three types that implement the Scaler interface: NearestNeighbor,
CatmullRom , and ApproxBiLinear . For the purposes of this chapter, we’ll
stick to NearestNeighbor , as it’s the one that won’t blur our pixels’ edges.

draw.NearestNeighbor.Scale(frame, frame.Rect, s.maze, s.maze.Bounds(), draw.Over, nil)

Finally, we can add the frame to our GIF image. All three operations can be written in a single method, called by registerExploredPixels:

Listing 9.34 animation.go: Drawing the frame to the GIF


package solver

import (
"image"
plt "image/color/palette" #A

"golang.org/x/image/draw"
)

// ...

// drawCurrentFrameToGIF adds the current state of the maze as a frame of the animation.
func (s *Solver) drawCurrentFrameToGIF() {
const (
// gifWidth is the width of the generated GIF.
gifWidth = 500
// frameDuration is the duration in hundredth of a second of each frame.
// 20 hundredths of a second per frame means 5 frames per second.
frameDuration = 20
)

// Create a paletted frame that has the same ratio as the input image
frame := image.NewPaletted(image.Rect(0, 0, gifWidth,
gifWidth*s.maze.Bounds().Dy()/s.maze.Bounds().Dx()), plt.Plan9)
// Convert RGBA to paletted
draw.NearestNeighbor.Scale(frame, frame.Rect, s.maze, s.maze.Bounds(), draw.Over, nil)

s.animation.Image = append(s.animation.Image, frame)


s.animation.Delay = append(s.animation.Delay, frameDuration)
}

We now have a single goroutine in charge of updating the values of the pixels of our image, which does it pixel by pixel, as they come through the channel. Let's not forget to start this registerExploredPixels method in Solve. We now have two "listening" goroutines we want to start - listenToBranches and registerExploredPixels. To launch both and synchronise after they've returned, we can use a sync.WaitGroup:

Listing 9.35 solver.go: Launch listeners in Solve

func (s *Solver) Solve() error {


// ...
log.Printf("starting at %v", entrance)

s.pathsToExplore <- &path{previousStep: nil, at: entrance}


wg := sync.WaitGroup{}
wg.Add(2)

defer wg.Wait() #A

go func() { #B
defer wg.Done()
// Launch the goroutine in charge of drawing the GIF image.
s.registerExploredPixels()
}()

go func() { #C
defer wg.Done()
// Listen for new paths to explore. This only returns when the maze is solved.
s.listenToBranches()
}()

return nil
}

Generating the GIF file

We’ve now added frames to our GIF. Each of them was copied, pixel by
pixel, from the maze being explored.

Let's draw the GIF file! For this, we'll simply plug in somewhere in our code where we know we're ready to write it. The current SaveSolution function is a good choice, since it's already in charge of writing an output file. Let's call a new method in there to draw our final GIF.

Listing 9.36 imagefile.go: Generate the GIF file

func (s *Solver) SaveSolution(outputPath string) error {


// ...
gifPath := strings.Replace(outputPath, "png", "gif", -1)
err = s.saveAnimation(gifPath)
if err != nil {
return fmt.Errorf(...)
}

return nil
}

// saveAnimation writes the gif file.


func (s *Solver) saveAnimation(gifPath string) error {
outputImage, err := os.Create(gifPath)
if err != nil {
return fmt.Errorf(...)
}

defer func() {
if closeErr := outputImage.Close(); closeErr != nil {
// Return err and closeErr, in worst case scenario.
err = errors.Join(err, fmt.Errorf("unable to close file: %w", closeErr))
}
}()

log.Printf("animation contains %d frames\n", len(s.animation.Image))


err = gif.EncodeAll(outputImage, s.animation)
if err != nil {
return fmt.Errorf("unable to encode gif: %w", err)
}

return nil
}

This code is very similar to that of the encoding of the PNG image.

Now, let’s run the program:

$ go run . mazes/maze50_50.png solution.png


2023/10/18 11:42:57 INFO Solving maze "mazes/maze50_50.png" and saving it as "solution.png"
2023/10/18 11:42:57 INFO starting at (0,25)
2023/10/18 11:43:00 INFO Treasure found: (18,0)!
2023/10/18 11:43:00 INFO the treasure has been found, worker going to sleep
2023/10/18 11:43:00 INFO animation contains 30 frames

This should generate the solution.png image, but also a solution.gif file. Open
this file to see how the maze was explored! Do you notice anything? The
solution doesn’t appear very clearly - if it is at all displayed - and the loop
restarts immediately. It’d be nice to make sure the solution is added to the list
of frames, and that this final frame is printed for a longer duration. In 9.4, we
added the painting of the solution to the SaveSolution method. Now that we
need to do something on the GIF, we might want a dedicated method for this
and move the logic out of the code that writes files into the solver. Let’s write
the final lines of code for this chapter. First, paint the pixels between the
entrance and the solution in the image stored in the solver, and then add a
final frame (which will include the painted solution pixels) to the GIF. By
setting a longer value, we ensure that the final frame will be displayed long
enough to be admired!

Listing 9.37 solver.go: Finalise exploration by saving solution

func (s *Solver) Solve() error {


// ...
wg.Wait()

s.writeLastFrame()

return nil
}

// writeLastFrame writes the last frame of the gif, with the solution highlighted.
func (s *Solver) writeLastFrame() {
stepsFromTreasure := s.solution
// Paint the path from entrance to the treasure.
for stepsFromTreasure != nil { #A
s.maze.Set(stepsFromTreasure.at.X, stepsFromTreasure.at.Y, s.palette.solution)
stepsFromTreasure = stepsFromTreasure.previousStep
}

const solutionFrameDuration = 300 // 3 seconds


// Add the solution frame, with the coloured path, to the output gif.
s.drawCurrentFrameToGIF() #B
s.animation.Delay[len(s.animation.Delay)-1] = solutionFrameDuration
}

Rerun the program and open the GIF. You can adjust the values of the frame
durations, or the number of frames, to get the look and feel you really want!
Unfortunately, we can’t include the GIF in this book, but share yours with
your friends!

9.7 Summary
In computer science, the main types of two-dimensional images are raster
images and vector images. Vector images are used in fonts and logos, in
infographics, or in icons. Vector images are very scalable - you can
zoom in and not see any artefacts.
The other half of the images we use are raster images - two-dimensional
grids of pixels. Each pixel of an image has a colour which can be
expressed in the RGBA colour model (but it might be encoded in
another colour model, such as the YCbCr, for JPEG images). The value
of the colour can be used to encode either physical information, such as
the amount of light of red, green, and blue frequencies that is emitted by
an object (as in the picture of a flower), any numerical information, such
as the density of population, or finally a palette can be used to represent
areas of same category, such as in a map, where each country has its
own colour.
The image/png package is used to Decode a file into an image.Image.
This Image will frequently be type asserted to an RGBA or NRGBA. To
encode an image, use the Encode function from the package of the format you wish to encode your image in - available options are gif, jpeg, and png. Other
formats require third-party libraries.
Images usually have their pixel at position (0, 0) in the upper-left.
However, some images might have (0, 0) in any other corner. It all
depends on the image format and the image’s metadata. Use what the
image package returns to iterate over the pixels of an image.
You can access a pixel's value in an image.Image with the At() method. This returns a color.Color that you have to convert to color.RGBA. When using an image.RGBA, you can use RGBAAt() instead,
which will return a color.RGBA that can then be compared to known
values.
In order to write a pixel to an image.RGBA, use the Set(x, y, rgba)
method.
When scanning a whole image, use two nested loops, the outermost one
iterating over the rows, and the innermost one iterating over the
columns. This is beneficial, performance-wise, for all “scanline”
formats.
When you can’t have global constants, it’s slightly cleaner to have a
function that returns configuration values rather than using global
variables. Avoid exposing global variables for safety reasons: other
pieces of code might change them.
Writing to an unbuffered channel that isn’t read from is blocking. Either
write to it in a goroutine, or use a buffered channel, whose size should
be the maximum number of elements that will be written there before
the reading starts.
When starting goroutines in loops, make sure your loop variables are
protected. The loop variables can be the messages you read from a
channel, the keys or values of a map you iterate through, or the elements
of a slice.
There are three common ways of protecting iterators of a for-loop when
a goroutine is launched inside the loop:
you can use a version of Go that guarantees this protection (currently, it's considered for Go 1.22)
you can shadow the loop variable with another one in your loop
(usually, we give the new variable the same name as the loop
variable)
finally you can launch your goroutine with an anonymous function
that takes the loop variable as a parameter.
The select keyword allows a piece of code to listen to several channels.
Whenever a message is published in any of the channels, the code
written in the case statement will be executed.
If several case statements in a select are eligible, Go will pick a
random one.
It is common to have one of the case statements of a select be a return
condition. This is especially true in servers, where the processing of an
input request should be ended as soon as the request is cancelled.
The for-select “infinite loop of listening” pattern is very common.
Usually, one of the cases of the select block will contain the condition
to exit the loop.
10 Habits Tracker using gRPC
This chapter covers

Writing a web service using Protobuf and generating the Go code of its
gRPC definition
The Context interface in Go
Running the service with basic endpoints
Testing with integration tests

As developers, we spend most of our day in front of a screen for work, on top
of any leisure activity we might have. Unfortunately, the effects of a high
number of hours watching these lit pixels - albeit sometimes positive for
moral or psychological aspects - are mostly considered negative for eyesight,
causing eye fatigue, dry eyes, or difficulty focusing. On the other hand, there
are some activities that will alleviate these ophthalmic conditions - most of
them include simply doing something else than watching a screen. Usually,
recommendations go along the path of regularly taking a stroll, reading a
book, or having a physical activity.

It’s never easy to pick up a new habit, and no one has ever gone from never
jogging to running a marathon. The goal is always incremental. But the
important point is to track how much of these habits one can get done in a
week, and maybe adjust objectives for the next week.

In this chapter, we’ll write a service in charge of registering such habits. The
user will be able to create habits, give them an expected frequency - a number
of times per week they are expected to be completed - and list them. We’ve
already written an HTTP service, this time we’ll focus on another popular
network remote procedure call protocol, this one developed by Google:
gRPC.

Functional requirements

Create and delete a habit


List the created habits
Tick a created habit
Get the status of a habit

Technical requirements

gRPC service, with Protobuf in and out


Run locally
In-memory DB to start

10.1 API definition


In the same vein as we did in chapter 8, we are going to create a web service
to track personal habits; that is, a Go program that runs indefinitely, ready to
listen to requests and respond to them. Requests are sent by clients, who need
to know what to send and how to understand the response. Such a set of definitions is called an Application Programming Interface, or API for short. Here, the client is the user who wants to track her habits. She will do it by calling endpoints such as CreateHabit, exposed on an API we are going to build.

In this case, we want the communication between the clients and our service
to use the gRPC framework, where messages are encoded using the Protocol
Buffers (Protobuf) format and using the HTTP/2 network layer. Protocol
Buffers are a programming language-independent description of how these
messages are encoded.

Protocol Buffers

Protocol Buffers are a mechanism used for serializing structured data.
While this can also be achieved in lots of other ways (JSON, XML, yaml,
…), Protocol Buffers place an emphasis on two important points: versioning
the serialized model, and reducing any non-data information. Invented by
Google in 2001 and released to the public in 2008, they are perfect for
network-heavy applications like microservices. Protocol Buffers are a way to describe
communication between programs in a cross-language way. You can define
what data is being sent via message definitions. You can also define
endpoints for what is being communicated. Messages and service APIs (the
endpoints) are written in Protocol Buffers files (text files with, usually, the
.proto file extension), which can then be compiled to generate clients for the
programming language of your choice, as we’ll explain below. Clients can be
generated for many common languages, including Go. A few limitations:
Protobuf messages are not self-describing – you need to know how to read
them before you can access their contents. This also means we can’t
simply use regular tools such as curl to send messages to a gRPC endpoint -
testing will also be a bit trickier than with JSON APIs.

The first step in the development of a system, once we know the


requirements, is generally to define the API: how the system will be used.
Any language can be used, as long as we know our users will use this
language - or that some tools can be used to generate adequate files to
connect to our servers. In this section, we’ll use Protobuf to declare our API.
Our Protobuf files will be compiled into Go files that we can use to
implement our service.

The final API will resemble the following, and throughout this chapter, we go
through each step necessary to implement these endpoints:

Listing 10.1 Habits service API

// Habits is a service for registering and tracking habits.


type HabitsService interface {
// CreateHabit is the endpoint that registers a habit.
CreateHabit(CreateHabitRequest) (CreateHabitResponse, error)

// ListHabits is the endpoint that returns all habits.


ListHabits(ListHabitsRequest) (ListHabitsResponse, error)

// TickHabit is the endpoint to tick a habit.


TickHabit(TickHabitRequest) (TickHabitResponse, error)

// GetHabitStatus is the endpoint to retrieve the status of ticks of a habit.


GetHabitStatus(GetHabitStatusRequest) (GetHabitStatusResponse, error);
}

10.1.1 Protobuf declaration


While this is not a book about Protobuf, we need a few basics to define our
API.

Initialise your go module the usual way: create a directory, and run:

go mod init learngo-pockets/habits

Even before creating a main.go or anything, create a folder at the root of the
project, named api/proto, where we can store the Protobuf files. Their
extension is .proto.

The service’s job is to deal with habits, so create a file habit.proto where
we can define what a habit is. For the moment, we will give it a name and a
weekly frequency. For example, if I want to practise Go 5 times a week, I
want to be able to send something along those lines:

{"practice Go", 5}

Habit entity

Let’s start with a minimal API definition of what a habit is. It has a name and
a weekly frequency. We can write a Protobuf file with the entity.

Each proto file starts with the version of the protocol, then defines a Protobuf
package, and in our case, because we want to generate Go code, a Go
package. Generated code will end up in the folder named after the package
and situated inside the go_package module path. As often, a piece of code
will make things clearer.

Listing 10.2 habit.proto: Headers of the proto file

syntax = "proto3"; #A

package habits; #B
option go_package = "learngo-pockets/habits/api"; #C

Every structure in Protobuf is a message, and every field is given a number
that will allow consumers to recognise it. If we decide that the name is 1, it
will have to stay 1 forever, and future versions with different fields will still
look for the name at index 1.

Listing 10.3 habit.proto: define the Habit message

// Habit represents an objective one wants to complete a given number of times per week.
message Habit { #A
// Name of the habit, cannot be empty
string name = 1; #B
// Frequency, expressed in times per week.
int32 weekly_frequency = 2;
}

In a Protobuf message, each field must have a unique identifier. There is no
point in leaving gaps, just follow incremental order. The syntax is to list each
field with its type followed by its name and identifier. You can find more
examples and lists of supported types here:
https://protobuf.dev/programming-guides/proto3/.

Don’t hesitate to be extremely verbose in your comments: this is what users
will read in order to figure out how to use what you made, not the generated
Go code. Comments in the proto files will be carried over into the generated
code.
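To give a rough idea of what the compiler will produce later in this chapter, the Habit message becomes a Go structure along these lines (a trimmed sketch: the real habit.pb.go also contains unexported protobuf bookkeeping fields, struct tags, and getter methods):

// Habit represents an objective one wants to complete a given number of times per week.
type Habit struct {
    // Name of the habit, cannot be empty
    Name string
    // Frequency, expressed in times per week.
    WeeklyFrequency int32
}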

Service definition

Once we have this simple Habit object, we can declare a service to
manipulate it. In another file, define a service that will use this message.

It is good practice, for version compatibility, to define a Request and a
Response, even when they are empty or when they contain only one field: it
makes changes smaller and avoids breaking the API. The gRPC Habits
service is in charge of registering and tracking habits. That is the place where
we will add, along the way, all the needed endpoints to track the habits.

Listing 10.4 service.proto: define the Habits service

syntax = "proto3";

package habits;
option go_package = "learngo-pockets/habits/api"; #A
// Habits is a service for registering and tracking habits.
service Habits { #B
}

First endpoint: Create

This service exposes nothing, as you can see. The first endpoint that we need
is for creating a habit to track.

A generally accepted best practice when naming inputs and outputs of
endpoints is to have a dedicated message (a structure with fields) for each of
them, called request and response, or input and output, even when they are
empty or contain only one field. Consider the difference between these 2
signatures:

func CreateHabit(Habit)
func CreateHabit(CreateHabitRequest) CreateHabitResponse

In the first case, we give a habit and expect nothing in return. Simple. In the
second case, we need to define two additional structures, it’s verbose, it’s
annoying - the first one will just contain a Habit field and the other will be
empty. What would be the point?

The point is version intercompatibility. Let’s say in the next version we want
to add a user token to identify which user is creating the habit, and then
return a habit identifier. In the more verbose case, we would just add a field
in each structure, and if it is not mandatory, any code written for the initial
version will still work, whereas in the first and straightforward case, we
would break the whole API.
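As a hypothetical Go-side illustration (it uses the api.HabitsClient that we will generate later in this chapter), here is how the verbose option pays off:

// createPracticeHabit was written against the first version of the API, where
// CreateHabitRequest only had a name. If a later version adds an optional
// user_token field, this function still compiles and works unchanged:
// the new field is simply left at its zero value.
func createPracticeHabit(ctx context.Context, client api.HabitsClient) (*api.CreateHabitResponse, error) {
    return client.CreateHabit(ctx, &api.CreateHabitRequest{
        Name: "practice Go",
    })
}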

For this reason, the CreateHabit endpoint will use its Request and Response.
You might wonder what happens in the case of errors - why wouldn’t they
appear in the proto API? The answer is that the gRPC-compilation tool will
be in charge of adding support for errors. This support differs from language
to language - in Go, we can have several returned values, whereas in C++ or
Java, the error needs to be returned differently - which means we don’t write
errors in the proto file - but, don’t panic, the Golang interface compiled from
this proto will allow us to return an error.
We can add the endpoint to the service with one line, then define 2 new
messages.

Listing 10.5 service.proto: define the Habits service

service Habits {
// CreateHabit is the endpoint that registers a habit.
rpc CreateHabit(CreateHabitRequest) returns (CreateHabitResponse); #A
}

In order to use the Habit message in the response, we need to import the
neighbouring file.

Listing 10.6 service.proto: define the CreateHabit in and out

import "habit.proto"; #A

service Habits {
...
}

// CreateHabitRequest is the message sent to create a habit.


message CreateHabitRequest {
// Name of the new habit. Cannot be empty.
string name = 1;
// Frequency of the new habit. Will default to once per week.
optional int32 weekly_frequency = 2; #B
}

// CreateHabitResponse is the response of the create endpoint.


message CreateHabitResponse {
Habit habit = 1;
}

And done. In a handful of lines, we have an API for the first step of the
tracker, which is the creation of a habit. As you can see, there is no path and
no verb: they are specific to HTTP. gRPC does not use them.

As we want to use it in Go, now is the time to generate the Go code.

10.1.2 Code generation


Generating code from Protobuf files is done using protoc, standing for proto
compiler. We will also install two plugins, namely protoc-gen-go and
protoc-gen-go-grpc.
https://grpc.io/docs/languages/go/quickstart/#prerequisites lists all the steps for
this, but we’ll repeat them here.

Installation steps

Depending on your system, installing protoc might be achievable simply
through a package management tool such as Homebrew’s brew (on Mac) or
apt (on Linux). Unfortunately, there are a few more steps when installing it
on Windows. Here are the commands you can run from a terminal:

$ apt install -y protobuf-compiler #Linux


$ brew install protobuf #Mac

Once protoc is installed, getting the Golang-specific dependencies is made
easy by the fact that we can ask Go to do it:

$ go install
google.golang.org/protobuf/cmd/protoc-gen-go@latest
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

These two utilities are used to compile .proto files declaring messages and
services into Golang files. They work as plugins for protoc – they will be
called if protoc detects we want to compile Golang files.

Compilation

The compilation command is pretty long. We will try to decompose it step by
step.

In your favourite terminal, navigate to the root of the go module and try the
very minimal version:

protoc api/proto/habit.proto

The compiler complains: it needs output directives. For what language should
it generate the compiled files? Where? The go_out parameter will tell the
compiler both the requested output language and the location for compiled
files by specifying the target folder. By specifying this option, we also tell
protoc to use the protoc-gen-go plugin.

protoc --go_out=api/ api/proto/habit.proto

This generates a Habit structure, but it puts it in an impractical location: the
whole module tree is created all over again:

.
├── api
│ ├── learngo-pockets
│ │ └── habits
│ │ └── api
│ │ └── habit.pb.go
│ └── proto
│ ├── habit.proto
│ └── service.proto

That’s not what we want; we would like the Go file to appear directly in the
api folder. Fortunately for us, there is an option for that:
--go_opt=paths=source_relative.

protoc --go_out=api/ --go_opt=paths=source_relative api/proto/habit.proto

Now the tree looks like what we want. The last step is to compile all of the
proto files, not only Habit.

protoc --go_out=api/ --go_opt=paths=source_relative api/proto/*.proto

It doesn’t work. What the compiler does with this command is take
each file separately and generate a Go file for it. When it reaches
service.proto, it cannot import Habit because we never told it where to
look.

The -I option has the following documentation if you run protoc --help :
Specify the directory in which to search for imports. May be specified
multiple times; directories will be searched in order. If not given, the current
working directory is used.

Perfect for our needs.


protoc -I=api/proto/ --go_out=api/ --go_opt=paths=source_relative api/proto/*.proto

Once you have run this command, your tree should look like this:

.
├── api
│ ├── habit.pb.go
│ ├── proto
│ │ ├── habit.proto
│ │ └── service.proto
│ └── service.pb.go
├── go.mod
└── go.sum

All the messages exist as Go structures, but not the service yet. We also need
to generate the gRPC part.

The options are quite similar to the pure Go ones: go-grpc_out and
go-grpc_opt. Passing these options on the command line will silently tell
protoc to use the protoc-gen-go-grpc plugin.

protoc -I=api/proto/
--go_out=api/ --go_opt=paths=source_relative
--go-grpc_out=api/
--go-grpc_opt=paths=source_relative api/proto/*.proto

There is one final parameter that we must talk about, when it comes to the Go
gRPC compiler, and this has to do with forward-compatibility. Suppose that
we’re happy with the current proto API, that we use it to compile the Golang
files, and that we implement the server interface with a structure of our own.
Then, let’s assume we want to add a new endpoint - we’ll have to update the
proto file, and regenerate the Golang files. As mentioned on the go-grpc
repository, “it is a requirement that adding methods to a service cannot break
existing implementations of the service”. So, how did they ensure this
requirement is always met?

There are two options. The first one is to require that any implementation of
the server embeds a type defined in the generated file. The other is to allow
developers to skip this embedding requirement. While this
second option is not recommended, it is still available by passing another
parameter to the command line:
--go-grpc_opt=paths=source_relative,require_unimplemented_servers=false

In the rest of this chapter, we will use files that were generated without this
final option - and we’ll remind you to embed the type when creating the
server type.

Automated generation

Remember to put this massive command in a place where you and future
maintainers will find it, typically in a Makefile or as part of a script. You
might wonder why we wouldn’t place this in a generate.go file with a
//go:generate directive – the reason is that we were lazy in our command
line and used a * to send all the .proto files to protoc. Unfortunately, while
shells understand how to expand *.proto into “every file with a .proto
extension”, go generate doesn’t, which prevents us from using the same
command line directly in a //go:generate directive. However, if you have
access to bash or sh, or any other shell you fancy, you can tell go generate
to run a command in a shell with the following syntax (don’t forget the
double quotes around the command that you really want to run):

//go:generate bash -c "protoc -I=api/proto/ {...} api/proto/*.proto"

We provided an example of a generate.go file (a common name for files that
only contain //go:generate directives) that contains a similar command.
We slightly adapted it because we placed it directly into the api directory.
It’s up to you to decide whether you want a target in your Makefile or if
you’d rather call go generate to produce these files.

Make sure to document it, like everything that is not considered general
knowledge in the industry.

Your tree should now look something like this:

.
├── api
│ ├── proto
│ │ ├── habit.proto
│ │ └── service.proto
│ ├── generate.go
│ ├── habit.pb.go
│ ├── service_grpc.pb.go
│ └── service.pb.go
├── go.mod
└── go.sum

Ready to start coding in Go?

10.2 Empty service


Now that we have an API exposing the create habit endpoint for our user, we
can write the code and make it run. We will first create an empty service,
make it run, and then add the endpoints. After that will come the data layer,
and finally integration tests, and we will be ready to start again with more
functionalities.

10.2.1 Creating a small logger

A logger is often the first package that is written in a module, as it’ll likely be
used by every other package. But loggers can sometimes be problematic -
they’ll write to whatever output we tell them to write to. Sometimes, this
causes issues - for instance, should the log messages always be printed when
testing? And to what output? In this section, we’ll implement a small logger
that will make it easier for us to both run and test our code with logs.

We can notice that the testing.T structure already implements a (t *T)
Logf(format string, args ...any) method that only prints what we
called it with when the current test fails (or when tests run in verbose mode).
In order to be able to use a logger in
our code and in our tests, let’s write a small logger that will only expose one
method - the same as exposed by testing.T. This way, we will be able to
create loggers in our code, and inject the t test variable in tests as the test
logger. This will prevent output-jamming.

Listing 10.7 log/log.go: define a small logger

package log

import (
"io"
"log"
"sync"
)

// A Logger that can log messages


type Logger struct {
mutex sync.Mutex #A
logger *log.Logger
}

// New returns a logger.


func New(output io.Writer) *Logger {
return &Logger{
logger: log.New(output, "", log.Ldate|log.Ltime), #B
}
}

// Logf sends a message to the log if the severity is high enough.


func (l *Logger) Logf(format string, args ...any) {
l.mutex.Lock()
defer l.mutex.Unlock()
l.logger.Printf(format, args...)
}
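To make the intent concrete, here is a small, hypothetical consumer of such a logger: any function that only asks for Logf can receive either our *Logger or the *testing.T of a test.

// logfer is the one-method interface our code depends on.
type logfer interface {
    Logf(format string, args ...any)
}

// doWork only needs something that can Logf.
func doWork(lgr logfer) {
    lgr.Logf("doing some work")
}

// In production code: doWork(log.New(os.Stdout))
// In a test:          doWork(t) // *testing.T exposes the same Logf method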

Now that we have our basic logger, we can start implementing the server
package. We will need this logger there.

10.2.2 Server structure

First, we create a structure that will be our server. It would not make sense if
it were to stay empty for long; it will soon contain a repository for data
retention.

In a new folder internal/server, create a server.go file and add the struct
with a New function. As we’ll want to use a logger, let’s declare a one-method
interface that we will use as our logger.

Listing 10.8 server.go: define the web service

// Server is the implementation of the gRPC server.


type Server struct {
lgr Logger #A
}
// New returns a Server that can ListenAndServe.
func New(lgr Logger) *Server { #B
return &Server{
lgr: lgr,
}
}

type Logger interface {


Logf(format string, args ...any)
}

Not very interesting yet. We need to add a ListenAndServe method on the
Server, so that it can start listening to and serving new requests sent on a
given port. Ports are virtual “doors” through which messages transfer, either
internally or with the rest of the world; only one application can listen
to a given port on the same machine. Ports are identified with their port
number - 80 is used by HTTP, 443 by HTTPS, etc. When listening to a
specific port, either use one that is assigned by a standard, such as 80 for
HTTP, or use a port number between 1024 and 49151 for internal usage.

A gRPC server, just as the HTTP server we saw in Chapter 8, is first and
foremost a good listener. We give it a port to listen to, and start it with a call
to Serve . This call will only return when the server shuts down.

But a gRPC server is a bit more than an HTTP server - it must implement the
desired gRPC API. For this, we start by creating a barren server using the
grpc package, and we then attach our implementation to that server by
registering it.

Listing 10.9 server.go: listen to a given port

import (
...
"google.golang.org/grpc"

"learngo-pockets/habits/api"
)

// ListenAndServe starts listening to the port and serving requests.


func (s *Server) ListenAndServe(port int) error {
const addr = "127.0.0.1"
listener, err := net.Listen("tcp", net.JoinHostPort(addr, strconv.Itoa(port))) #A
if err != nil {
return fmt.Errorf("unable to listen to tcp port %d: %w", port, err)
}

grpcServer := grpc.NewServer() #B
api.RegisterHabitsServer(grpcServer, s) #C

s.lgr.Logf("starting server on port %d\n", port)

err = grpcServer.Serve(listener) #D
if err != nil {
return fmt.Errorf("error while listening: %w", err)
}

// Stop or GracefulStop was called, no reason to be alarmed.


return nil
}

There are better ways of starting the server to support graceful shutdown. We
will improve this later in the chapter. Additionally, if you want to allocate a
free port randomly, you can use port 0. The documentation of net.Listen
explains which networks are supported.
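As a teaser, here is a minimal sketch of one such approach (an assumption on our part, not the version we build later in the chapter): trap SIGINT/SIGTERM and let GracefulStop drain in-flight requests.

// serveGracefully wraps Serve with signal handling so that an interruption
// drains in-flight requests instead of dropping them.
func serveGracefully(grpcServer *grpc.Server, listener net.Listener) error {
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    go func() {
        <-ctx.Done()              // a signal was received
        grpcServer.GracefulStop() // makes Serve return once in-flight requests finish
    }()

    return grpcServer.Serve(listener)
}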

Wait… This does not compile. We cannot register a Habits service that does
not know how to create a habit. As you can see, api.RegisterHabitsServer
takes as its second parameter anything that implements the HabitsServer
interface, which was generated from our Protobuf service. We just need to
implement that one method.

When trying to compile or run, we also faced an error mentioning that our
Server type cannot be registered as a HabitsServer because it doesn’t
implement a method named mustEmbedUnimplementedHabitsServer. This is
a reminder that, when we generated the Go files from the proto files, we used
the recommended way, which requires embedding a structure, as the non-
implemented method’s name suggests. So, let’s embed the required type:

// Server is the implementation of the gRPC server.


type Server struct {
api.UnimplementedHabitsServer
lgr Logger
}
Composition and embedding

Both concepts extend the notion of a structure, but in a different way. While
composition, which in Go is achieved by listing named fields of a structure,
represents a “has-a” relationship between two types, embedding corresponds
to an “is-a” relationship. In our case, since our Server is an
UnimplementedHabitsServer, it gets an implementation of that required
method for free.
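A tiny, self-contained illustration of the difference (our own example, unrelated to habits):

type Engine struct{}

func (Engine) Start() {}

// Composition: the car "has an" engine; callers write car.engine.Start().
type ComposedCar struct {
    engine Engine
}

// Embedding: Start is promoted to the car; callers write car.Start().
type EmbeddedCar struct {
    Engine
}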

As we know that this method will require tests and probably side functions,
we can already put it in a create.go file in the server package. The signature
of this function was generated by the protoc toolchain; we can’t alter it. As
we’ll see, there is a mysterious first parameter, into which we’ll dive later in
this chapter.

Listing 10.10 create.go: implement the HabitServer interface

// CreateHabit is the endpoint that registers a habit.


func (s *Server) CreateHabit(
_ context.Context, #A
request *api.CreateHabitRequest,
) (*api.CreateHabitResponse, error) {
s.lgr.Logf("CreateHabit request received: %s", request)

return &api.CreateHabitResponse{
Habit: &api.Habit{
Name: request.GetName(),
WeeklyFrequency: request.GetWeeklyFrequency(),
},
}, nil
}

This should be enough for now. We will come back to it very quickly. Our
endpoint is implemented - our whole service is implemented. It’s now time to
spin it up.

10.2.3 Creating and running the server

We can call these New and ListenAndServe functions in main. As this is a
web service, we prefer to put the main.go file in a cmd/habits-server
folder, decluttering the root of the module. On some operating systems, such
as Windows, go run dir/main.go will cause an executable file called
dir.exe to be generated and executed - placing the main.go file in an aptly
named directory is important in that regard.

What the main function does is create a new instance of our server and call
ListenAndServe, which only returns if there is an error. Since we need to inject a
logger into our server instance, we can create it in the main function and pass
it via server.New . We can use that logger in the main function too.

Here is how we create a new server in our main package and run it:

Listing 10.11 main.go: Run it

package main

import (
"fmt"
"os"

"learngo-pockets/habits/internal/server"
"learngo-pockets/habits/log"
)

const port = 28710 #A

func main() {
lgr := log.New(os.Stdout) #B

srv := server.New(lgr) #C

err := srv.ListenAndServe(port) #D
if err != nil {
lgr.Logf("Error while running the server: %s", err.Error())
os.Exit(1) #E
}
}

There is basically no logic inside the main function. It means that our service
will be easier to test: all the logic is in isolated packages.

Run it!

go run cmd/habits-server/main.go

It does absolutely nothing, but it runs. How wonderful! Add a few logs in
ListenAndServe to make sure.

10.3 First endpoint: create


We have a running gRPC server that implements the desired API. We might
want to go a step beyond and have our endpoint do something else than print
a pretty message. This is, after all, Chapter 10.

Before we start coding, let’s do a bit of thinking. The compiled code,
generated from the Protobuf files, has defined a Habit structure. Should we
reuse that structure or define a new one? The answer is quite straightforward
here: it's always best not to leak protocol definitions into the core business
code, because it'll create problems when we start to add support for other
protocols, such as XML or JSON. These data definitions that are used only
for describing the protocol are called Data Transfer Objects, or DTOs for short.
Instead, our core business code, often called “domain” or “model”, should
have types for every entity that needs to be handled internally. Let’s look at a
clean target architecture.

Figure 10.1 Architectural diagram with domain and connectors

For the exact same reasons that we saw in Chapter 8, the Go structures
representing the transferable data, here the generated code, must be capable
of evolving independently from the rest of the code.

10.3.1 Business layer

In Chapter 8, we created a session package with our logic. Create an
internal/habit folder where we can define our domain Habit and the types
that it needs. We want to keep the data that was received as input but also
remember when the habit was created and give it an ID to find it again. These
fields are not part of the input message - we are able to add them here
because we’re not re-using the API structure.

Listing 10.12 internal/habit/habit.go: define the business types

// ID is the identifier of the Habit.


type ID string

// Name is a short string that represents the name of a Habit.


type Name string

// WeeklyFrequency is the number of times a Habit should happen every week.


type WeeklyFrequency uint

// Habit to track.
type Habit struct {
ID ID #A
Name Name
WeeklyFrequency WeeklyFrequency
CreationTime time.Time #A
}

It is always good to create a specific type for each of the fields in our main
entity, even though the usage might seem more verbose: functions and
methods will take typed arguments that will serve as documentation and
make the API clearer. For example, if a function takes the name and the ID
and both are strings, it’s quite easy to mix them up, while if one is explicitly
an ID and the other explicitly a name, casting the name into an ID type
should raise a red flag to the developer writing the call.

Requirement: Create a habit

If we look at the requirements, the first thing we need to do is create a Habit.
We defined some optional values in the Protobuf definition, meaning
that we must complete the fields if needed. The first question here is: is input
validation the job of the API layer, or the domain layer? Both solutions make
sense for different reasons. We decided that if some new feature needs to
create a habit inside our service, it will call the domain directly, and we want
this call to always return a valid entity. We cannot rely on the API layer
to always send what the domain needs.
If you are in a situation where validation on the API layer makes more sense,
there are a few libraries out there that can do it for you with a few tags.

What if the input is invalid? Just like HTTP, gRPC uses status codes
to describe the outcome of a call, on top of the response defined by the RPC API. These codes are
included in the error that the endpoint returns alongside the response.

Table 10.1 A few gRPC status codes

Code Number Description
OK 0 Not an error; returned on success.
INVALID_ARGUMENT 3 The client specified an invalid argument.
NOT_FOUND 5 Some requested entity (e.g., file or directory) was not found.
PERMISSION_DENIED 7 The caller does not have permission to execute the specified operation.
UNIMPLEMENTED 12 The operation is not implemented or is not supported/enabled in this service.
INTERNAL 13 An unspecified error occurred while processing the request.

These are only a few; the rest, with deeper explanations, can be found in the
official documentation. You can run go doc
google.golang.org/grpc/codes to have the list. Well, not exactly: go doc
limits its output, which causes only the first few codes to be printed. To get
the whole list, run:

go doc --all google.golang.org/grpc/codes

Knowing this, any invalid input will return a code 3 (INVALID_ARGUMENT).
Should the business layer be in charge of returning such a code 3? Certainly
not. Returning this code is the role of the API layer - the domain layer is not
even aware that we’re implementing a gRPC server - but the API layer needs
to know what happened inside the domain layer in order to pick the right
code. This is the perfect occasion to use a typed error.

Validate with typed error

Let’s start with the validateAndCompleteHabit function, in a new create.go file
dedicated to this business logic. It must check that the name is not empty, set
the frequency to one if empty, and also fill up the two internal fields.
Arguably, it could be two functions: validate and complete.

Listing 10.13 internal/habit/create.go: validate and complete the entity

// validateAndCompleteHabit fills the habit with values that we want in our database.
// Returns InvalidInputError. #D
func validateAndCompleteHabit(h Habit) (Habit, error) {
// name cannot be empty
h.Name = Name(strings.TrimSpace(string(h.Name))) #A
if h.Name == "" {
return Habit{}, InvalidInputError{field: "name", reason: "cannot be empty"} #E
}

if h.WeeklyFrequency == 0 { #B
h.WeeklyFrequency = 1
}

if h.ID == "" { #C
h.ID = ID(uuid.NewString())
}

if h.CreationTime.Equal(time.Time{}) {
h.CreationTime = time.Now()
}

return h, nil
}

We now need to define this typed error, in a new errors.go file in the habit
package:

Listing 10.14 errors.go: typed error for invalid input

// InvalidInputError is returned when user-input data is invalid.


type InvalidInputError struct {
field string
reason string #A
}

// Error implements error.


func (e InvalidInputError) Error() string {
return fmt.Sprintf("invalid input in field %s: %s", e.field, e.reason)
}

We could expose the given value of the field too: it is very useful when we
get an error to know what the server actually got - it differs from what we
think we sent more often than we care to admit. But this error will be logged,
copied around, and malevolent users could send in gigabytes of data and
crash our system. There are some ways to avoid this (limiting the size of
requests, truncating logs, etc.), but for now, let’s just leave the value out of the error.

Testing the validation

We can already write an easy unit test for this validateAndCompleteHabit
function. No need to mock any dependency, what a pleasure!

When writing this test, we found out that each test case had very different
assertions, so we chose to write a named function for each. You can write
several independent TestXxx functions, or group them inside a single one
with an explicit name:

Listing 10.15 create_internal_test.go: test completeHabit

func Test_validateAndFillDetails(t *testing.T) {


t.Parallel()

t.Run("Full", testValidateAndFillDetailsFull)
t.Run("Partial", testValidateAndFillDetailsPartial)
t.Run("SpaceName", testValidateAndFillDetailsSpaceName)
}

The first function, testValidateAndFillDetailsFull , checks that if the


habit is complete, nothing is changed.

func testValidateAndFillDetailsFull(t *testing.T) {


t.Parallel()

h := Habit{...all fields are filled...}

got, err := validateAndCompleteHabit(h)


require.NoError(t, err)
assert.Equal(t, h, got)
}

The second function checks that if the habit is incomplete, ID and creation
time are filled up and the rest did not change. Each run of the test will give us
different values, so we are only checking for “not empty”. If you want to be
more thorough, you can check that the ID follows a given format, using
regular expressions, and that the time is within the past second or so - see the sketch after the next listing.

func testValidateAndFillDetailsPartial(t *testing.T) {


t.Parallel()

h := Habit{...}

got, err := validateAndCompleteHabit(h)


require.NoError(t, err)
assert.Equal(t, h.Name, got.Name)
assert.Equal(t, h.WeeklyFrequency, got.WeeklyFrequency)
assert.NotEmpty(t, got.ID)
assert.NotEmpty(t, got.CreationTime)
}
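If you do want those more thorough checks, testify has assertions that fit well here - a sketch, with a deliberately loose regular expression:

// Stricter checks on the generated fields:
assert.Regexp(t, "^[0-9a-f-]{36}$", string(got.ID))                 // looks like a UUID
assert.WithinDuration(t, time.Now(), got.CreationTime, time.Second) // created just now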

Then we check the name. If we send only spaces, we should get an
InvalidInputError. At this point in the development, the exact content of
the error might still change, so we just focus on the type.

func testValidateAndFillDetailsSpaceName(t *testing.T) {


t.Parallel()

h := Habit{Name:" ",...}

_, err := validateAndCompleteHabit(h)
assert.ErrorAs(t, err, &InvalidInputError{})
}

Good. Run the test, make sure you are happy about your coverage.

Now let’s call the endpoint. The Create function in the business, or domain,
layer, will fill the habit and will be ready to save it to a data storage.

Listing 10.16 create.go: business function to create a habit

// Create validates the Habit, saves it and returns it.


func Create(_ context.Context, h Habit) (Habit, error) {
h, err := validateAndCompleteHabit(h)
if err != nil {
return Habit{}, err #A
}

// Need to add the habit to data storage...

return h, nil
}

You can already write a closed-box test for this one, or at least the structure
for the test.

We added a “context” as the first parameter, but we ignored it until now.


Why? The answer to this is quite simple, and we’ll explain it fully in section
10.6. For now, let’s provide a context.Context variable, which, in Go, is
almost always called ctx .

10.3.2 API layer

Now that we’ve implemented the validation in the domain layer, let’s move
back to the server package and update the
CreateHabit method on the server
structure.

What should it do? This is the gRPC layer, where we transform an API-
specific signature into domain objects, call the domain function and
transform the response back into API-specific types.

Listing 10.17 create.go: API layer

// CreateHabit is the endpoint that registers a habit.


func (s *Server) CreateHabit(ctx context.Context, request *api.CreateHabitRequest) (*api.CreateHabitResponse, error) {
var freq uint #A
if request.WeeklyFrequency != nil {
freq = uint(*request.WeeklyFrequency)
}

h := habit.Habit{
Name: habit.Name(request.Name),
WeeklyFrequency: habit.WeeklyFrequency(freq),
}

createdHabit, err := habit.Create(ctx, h)


if err != nil {
...
}

s.lgr.Logf("Habit %s successfully registered", createdHabit.ID)

return &api.CreateHabitResponse{
Habit: &api.Habit{
Id: string(createdHabit.ID),
Name: string(createdHabit.Name),
WeeklyFrequency: int32(createdHabit.WeeklyFrequency),
},
}, nil #B
}

If we want the default value of a habit’s frequency to be 1, why are we setting


the freq to 0 (by using Go’s default value) when it is absent? We decided
that this default value was a business requirement and not an API definition.
It is arguable and can only be decided case by case. Imagine what you expect
if you call the domain method with an empty frequency, from somewhere
else than the API layer, and act accordingly.

How do we manage the error returned by the domain layer? We made sure
that if the error is caused by a bad input, it will have a specific type. We can
use errors.As to cast it into the InvalidInputError type and check whether
we should return a code 3. To be perfectly honest, we could use errors.Is
instead because we are not using any field or method specific to the type
InvalidInputError, but we chose to show you how As can be used.

But what if we receive something that is not an InvalidInputError? After
all, our future implementation of the endpoint logic might have to face
database calls, which could cause errors that would not be due to an input
message validation.

A rule of thumb to remember, when implementing a gRPC endpoint, is that
every return statement should either return a nil error, or an error built from
the status.Error function (or Errorf). The status package is a neighbour
of the codes package: google.golang.org/grpc/status. When in doubt
about which error code to return, the default choice is codes.Internal.

Listing 10.18 create.go: error management

got, err := habit.Create(ctx, h)


if err != nil {
var invalidErr habit.InvalidInputError
if errors.As(err, &invalidErr) {
return nil, status.Error(codes.InvalidArgument, invalidErr.Error())
}
// other error
return nil, status.Errorf(codes.Internal, "cannot save habit %v: %v", h, err)
}

When a service ends up having several endpoints, checking the error and
outputting the appropriate status code can be factored into a single function,
for example toAPIErrorf(err error, format string, args ...any). Feel free to
implement it when the need arises, along the lines of the sketch below.
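Here is one possible shape for it (an assumption on our part, not the book's implementation):

// toAPIErrorf converts a domain error into a gRPC status error,
// choosing the status code based on the error's type.
func toAPIErrorf(err error, format string, args ...any) error {
    msg := fmt.Sprintf(format, args...) + ": " + err.Error()

    var invalidErr habit.InvalidInputError
    if errors.As(err, &invalidErr) {
        return status.Error(codes.InvalidArgument, msg)
    }

    return status.Error(codes.Internal, msg)
}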

Time to test our server manually.

Hand testing

The tool we used 2 chapters ago to call our service, curl, only does HTTP
calls, but it has a cousin, grpcurl, which does the same job. There are
alternative options to grpcurl - many providing a graphical user interface -
but this one is the one we find most convenient. If you fancy a nice GUI,
Postman supports gRPC and can send Protobuf messages to servers since its
version 10.

First, start your server with go run cmd/habits-server/main.go.

Next, you can install grpcurl with the following command:

go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
Now, we can start using the tool to send requests to our server. There is a
major difference between curl and grpcurl: the format of the message was,
for curl, a regular JSON document, whereas for grpcurl, we need to provide
a valid Protobuf entity. If you remember the beginning of this chapter,
Protocol Buffers have indexed fields, which means the message we’ll send
via grpcurl will need to be properly written, with its fields in the correct
positions. There are two options for us here - we can either provide the proto
definition to grpcurl, or we could have it ask the server for that definition.
The second option is called reflection, and we won’t be using it here.
Indeed, reflection adds a small overhead to our server - something that
usually we don’t want to ship to production.

So, here is how we tell grpcurl how to structure our query (and understand
the response): we simply pass it the proto files with the -proto parameter -
we’ll give it the service.proto file, as this is where the definitions of the
endpoints lie. Since some of the files include other files, we need to specify
the “root” from which imports are resolved, via the -import-path parameter. Finally,
we need to tell it which endpoint we want to aim at. This is passed as the final
parameter of the request - in the form of {package}.{service}/{endpoint}.

Here is the command line that we are able to run:

grpcurl \
-import-path /path/to/learngo-pockets/habits/api/proto/ \ #A
-proto service.proto \ #B
-plaintext -d '{"name":"clean the kitchen"}' \ #C
localhost:28710 \ #D
habits.Habits/CreateHabit #E

If everything went fine, you should receive a response from the server
(formatted in JSON). Does it contain an ID field? Is the weekly frequency
set?

Did you also try with an “invalid” name for the habit?

Check the server’s output - it should be logging a message every time a


request is received. If your tests are conclusive, it’s time to do a bit more than
logging in our endpoint!
10.3.3 Data retention

The service tells its clients that it can create habits, but it doesn’t store them.
We need to fix that.

Repository package

For the first version, we can use the same kind of in-memory repository that
we used for games in Chapter 8, in a package called internal/repository .
It has the same drawbacks: unscalable, probably unstable very soon, but it
gives us something quickly, so it is ok for a proof of concept.

Write a Repository structure with a New function that builds it and initialises
its map of data. Similarly to the New function of the server package, we want
to inject a Logger in here too. For now, we will need one method on the
Repository type, Add; but soon we’ll want a FindAll method, which will return
all the contents of our database.

If you have followed us through nine chapters, you should be able to create
the package, expose the right functions, structures and methods, and of
course cover them with some tests. Do not forget to add a mutex to lock the
data when reading from and writing to the repository storage - a sketch follows below.
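For reference, here is a minimal sketch of such a repository and its Add method (field names are our assumptions; the full version, with its tests, is in the book's repository):

// HabitRepository holds all the current habits in memory.
type HabitRepository struct {
    mutex  sync.Mutex
    habits map[habit.ID]habit.Habit
    lgr    Logger
}

// New creates an empty repository ready to store habits.
func New(lgr Logger) *HabitRepository {
    return &HabitRepository{
        habits: make(map[habit.ID]habit.Habit),
        lgr:    lgr,
    }
}

// Add inserts the habit, guarding the map with the mutex.
func (hr *HabitRepository) Add(_ context.Context, h habit.Habit) error {
    hr.mutex.Lock()
    defer hr.mutex.Unlock()

    hr.lgr.Logf("Adding habit %q", h.Name)
    hr.habits[h.ID] = h

    return nil
}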

$ go doc
package repository // import "learngo-pockets/habits/internal/repository"

Package repository accesses the habits data.

type Error string


type Logger interface{ ... }
type HabitRepository struct{ ... }
func New(lgr Logger) *HabitRepository

$ go doc HabitRepository
package repository // import "."

type HabitRepository struct {


// Has unexported fields.
}
HabitRepository holds all the current habits.
func New(lgr Logger) *HabitRepository
func (hr *HabitRepository) Add(_ context.Context, habit habit.Habit) error
func (hr *HabitRepository) FindAll(_ context.Context) ([]habit.Habit, error)

As you can see, by having a context.Context parameter in each of our
methods, we anticipated that when we replace this with a real database, we
will need a context to stop looking for data when a client interrupts the call.

If you need it, remember that an example of the code can be found in the
book’s repository.

Dependency injection

Now, we didn’t really test this repository package. The main reason here is
that all we do in it is write to a map, and that we list all values of a map. We
can add the call to Add inside the domain function. Here is a flow of the call
from the client to the database. You can imagine the same flow back with
either errors or nils.

We have implemented the api-to-domain connection in the server package,


but we lack the right-hand calls. For that, we need the server to have an
instance of the Repository connector, and we need the domain’s Create
function to expect a small interface with Add in it.

Let’s first inject a repository dependency into the server. But why don’t we
simply call repository.New() in the server, rather than doing it in the main
function? As we’ll see, this makes tests a lot simpler than having to rely on a
hardcoded implementation of that dependency. This is one of Go’s best
usages of its lightweight interfaces. We are using an interface here so that
tests for the server can use mocks.

Listing 10.19 server.go: Adding a repository connector to the server

// Server is the implementation of the grpc server.


type Server struct {
db Repository
lgr Logger
}

type Repository interface { #A


Add(ctx context.Context, habit habit.Habit) error
FindAll(ctx context.Context) ([]habit.Habit, error)
}

// New returns a Server that can Listen.


func New(repo Repository, lgr Logger) *Server {
return &Server{
db: repo,
lgr: lgr,
}
}

Update the main function to comply with this new signature of New - we need
to pass an entity that implements that interface, such as the output of
repository.New(...) .

Second, the Create function in the domain needs to take an interface as
a parameter, for stubbing and mocking purposes, but also to reduce the scope
of problems: by using an interface with only the Add method, we ensure that
Create cannot use any other future method and mess with the logic. Imagine
that you observe in your logs that calls to FindEverything are hurting
the performance of the service. You know by seeing this interface that the
culprit is not Create, and you can move on.

Listing 10.20 create.go: Call the repository in the logic function

type habitCreator interface {


Add(ctx context.Context, habit Habit) error
}

// Create adds a habit into the DB.


func Create(ctx context.Context, db habitCreator, h Habit) (Habit, error) {
h, err := validateAndCompleteHabit(h)
if err != nil {
return Habit{}, err
}

err = db.Add(ctx, h)
if err != nil {
return Habit{}, fmt.Errorf("cannot save habit: %w", err)
}

return h, nil
}
Here we are, right? Can you see in the logs that your call goes all the way to
the DB? Let’s write a couple of tests, to ensure we properly catch the errors.
For this, we’ll start with a simple stub, as we did in Chapter 6, to implement
the habitCreator interface.

But how can we update the tests of Create to make sure that Add is properly
called? That’s what we are going to see in the next part.

10.4 Unit testing with generated mocks


In the last chapters, we have seen how you can write your own stubs when
the interface is small enough and the logic simple. There are a few libraries
out there capable of taking an interface and generating mocks instead.

Stubs vs Mocks

Stubbing and mocking are two very common ways of making use of an
interface for tests. While stubbing consists in writing a structure that
implements the interface and returns “hard-coded” values, in order to test the
behaviour of your code when the stubbed dependencies return this or that,
mocking adds on top of that a check of how many times each dependency was
called, and whether it was called with the correct parameters.
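For instance, a hand-written stub for the habitCreator interface that we are about to mock could be as small as this (a sketch):

// creatorStub implements habitCreator by returning a fixed error,
// which is enough to drive Create's error path in a test.
type creatorStub struct {
    err error
}

func (s creatorStub) Add(_ context.Context, _ habit.Habit) error {
    return s.err
}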

10.4.1 Generate mocks

The best-known libraries are mockgen, mockery, and minimock. They are
based on different design decisions, so feel free to pick your favourite. In our
example, we chose minimock because it provides mocked functions with
typed parameters.

go install github.com/gojuno/minimock/v3/cmd/minimock@latest

Because the mocks are generated, this is a perfect occasion to use the
go:generate syntax. Pick an interface, e.g. habitCreator, and add this line
above it:

//go:generate minimock -i habitCreator -s "_mock.go" -o "mocks"


We are asking minimock to generate a mock for the interface (-i)
habitCreator, in a file with a specific suffix (-s) and in a specific output
folder (-o). Create that folder before you continue: from the root of the
module, it will be internal/habit/mocks.

In your favourite terminal, navigate to the habit package and run

$ go generate .

or alternatively, navigate to the root of the module and run all the generate
commands in the project with

$ go generate ./...

You can see a new file has appeared in the mocks folder. Check the contents
with go doc .

$ go doc internal/habit/mocks
package mocks // import "learngo-pockets/habits/internal/habit/mocks"

type HabitCreatorMock struct{ ... }


func NewHabitCreatorMock(t minimock.Tester) *HabitCreatorMock
type HabitCreatorMockAddExpectation struct{ ... }
type HabitCreatorMockAddParams struct{ ... }
type HabitCreatorMockAddResults struct{ ... }

10.4.2 Use the mocks

The closed-box test for Create does not compile anymore. Let’s fix it.

First, there are two imports that we need to add. One is pretty obvious but the
second calls for a little explanation.

import (
// ...
"learngo-pockets/habits/internal/habit/mocks"
"github.com/gojuno/minimock/v3"
)

The first import is here to access the mocks we just generated. The second,
on the other hand, is about the minimock library.
If you pay extremely close attention, you’ll realise that the second import’s
path ends with “/v3”. Are we really importing a package named v3? This
would be a very strange name for a package…

Versioning modules in Go

Sometimes, a module needs to go through heavy changes that make the new
version incompatible with the previous one. Breaking the backward
compatibility of a module requires a major version change. When this happens, the
go.mod file should be updated to reflect the version: the first line of
minimock’s go.mod is module github.com/gojuno/minimock/v3 .

Users who want to use minimock (or any other versioned module) have to
remember to specify the version they want to use in the import path, right
after the name of the module, for instance: import
"github.com/jackc/pgx/v5" and import
"github.com/jackc/pgx/v5/pgxpool". When using functions or types
defined in these packages, ignore the “/v5” part: pgx.Connect(...) or
pgxpool.New(...).

But why do we need to use the minimock library? Wouldn’t the generated
code contain enough tools? It turns out it doesn’t. Indeed, the generated
mocks package exposes a NewHabitCreatorMock(...) function, which returns the
type we want - HabitCreatorMock, an implementation of the habitCreator
interface. But this function expects a controller, typically a *minimock.Controller.
Don’t be scared, there is no other package to import.

Next, we can define a function that builds a mock for each of the test
cases. It takes a controller and returns a mocked instance of the required
interface, habitCreator . The test case structure will hold a new field whose
type is a function - to be honest, if you look at the test with error cases that
we have in the book’s repository, you will see that it would be far easier to
read if we had written it as two separate functions, but we wanted to show
you how functions can make your life better as fields of a test case struct.

The controller is created at the start of the test; it can be shared by all the test
cases, and, if your version of minimock is recent enough (v3.3.0 or later), it
automatically registers a check at the end of the test to ensure each expected
call was met with an actual call.

In the nominal test case, or happy flow, the mock should take the input habit,
previously declared as a variable called h, and return no error.

Listing 10.21 create_test.go: Add a mock function to each test case

tests := map[string]struct {
db func(ctl *minimock.Controller) *mocks.HabitCreatorMock #A
expectedErr error
}{
"nominal": {
db: func(ctl *minimock.Controller) *mocks.HabitCreatorMock {
db := mocks.NewHabitCreatorMock(ctl)
db.AddMock.Expect(ctx, h).Return(nil) #A
return db
},
expectedErr: nil,
},
}

Finally, we can plug this into the TestCreate function.

Listing 10.22 create_test.go: use the mock when testing

t.Run(name, func(t *testing.T) {


t.Parallel()

ctrl := minimock.NewController(t)
defer ctrl.Finish() #A

db := tt.db(ctrl)

got, err := habit.Create(ctx, db, h)


assert.ErrorIs(t, err, tt.expectedErr)
if tt.expectedErr == nil {
assert.Equal(t, h.Name, got.Name)
}

It runs and succeeds. You can commit to make sure you don’t forget this
state, before playing around with the mocks. For example, what happens if
you comment out the call to Add? Your test should tell you.
You can also read the documentation of the mocks package and how the
minimock tool can best be used, in order to define your favourite style.

There is one more way of testing that is perfect for this kind of CRUD
service: integration testing.

10.5 Integration testing


Where unit tests focus on one function, integration tests will check the
behaviour of the entire service, with scenarios. Here are example scenarios
for our service:

Scenario 1: Add and list

Add a habit: walk in the forest 3 times a week


Add a habit: water the plants twice a week
List the habits: check the 2 habits that we get
Add a habit: read a book 5 times a week
List the habits: check the 3 habits that we get

Scenario 2: Add and delete

Add a habit: walk in the forest 3 times a week


List the habits: check the 1 habit that we got
Add a habit: no name, expected error with code 3
Add a habit: water the plants twice a week
List the habits: check the 2 habits that we got
Remove the first habit
List the habits: check that we still have the second

And so on. Some people will intertwine this with API testing, others will
separate testing the flows from testing the gRPC response for each endpoint’s
error cases.

In order to test a whole flow, we need to be able to list the habits that we
saved. Then, we will write the first scenario.
10.5.1 List habits

Adding an endpoint requires a few additions, but it should be quick enough:

1. Update the Protobuf file with a ListHabits endpoint. Do this first, because this
way you can already publish the interface for the rest of your team to use and
mock.

Listing 10.23 service.proto: add the list endpoint

// Habits is a service for registering and tracking habits.


service Habits {
// CreateHabit is the endpoint that registers a habit.
rpc CreateHabit(CreateHabitRequest) returns (CreateHabitResponse);

// ListHabits is the endpoint that returns all habits.


rpc ListHabits(ListHabitsRequest) returns (ListHabitsResponse);
}

// ListHabitsRequest is the request to list all the habits saved.


message ListHabitsRequest { #A
}

// ListHabitsResponse is the response with all the saved habits.


message ListHabitsResponse {
repeated Habit habits = 1; #B
}

From there you can regenerate the corresponding Go files with go generate
./...

2. Add the logic in the domain layer, following the pattern of
internal/habit/create.go. Don’t forget to use a tiny interface for the
database, generate a mock for it, and test thoroughly. Note that a mock
framework is not always necessary for testing. An alternative is a structure
holding the output content you want to return, with a method that simply
returns it in place of the real database call.

Listing 10.24 list_test.go: Test List without minimock

// MockList is a mock for FindAll method response.


type MockList struct { #A
Items []habit.Habit
Err error
}

// FindAll is a mock which returns the passed list of items and error.
func (l MockList) FindAll(context.Context) ([]habit.Habit, error) { return l.Items, l.Err }

func TestListHabits(t *testing.T) {

// TODO: Write the needed content for the tests cases

"empty": {
db: MockList{Items: nil, Err: nil}, #B
expectedErr: nil,
expectedHabits: nil,
},
"2 items": {
db: MockList{Items: habits, Err: nil},
expectedErr: nil,
expectedHabits: habits,
},
"error case": {
db: MockList{Items: nil, Err: dbErr},
expectedErr: dbErr,
expectedHabits: nil,
},

for name, tc := range tests {


name, tc := name, tc

t.Run(name, func(t *testing.T) {


t.Parallel()

got, err := habit.ListHabits(context.Background(), tc.db) #C


assert.ErrorIs(t, err, tc.expectedErr)
assert.ElementsMatch(t, tc.expectedHabits, got)
})
}
}

3. If you don’t have one already, write the repository function that lists all the
saved habits. The repository should return a deterministic list of habits, sorted
using a specific criterion, such as the creation date of the habits.

Listing 10.25 memory.go: deterministic output of habits


// FindAll returns all habits sorted by creation time.
func (hr *HabitRepository) FindAll(_ context.Context) ([]habit.Habit, error) {
log.Infof("Listing habits, sorted by creation time...")

// Lock the reading and the writing of the habits.


hr.mutex.Lock()
defer hr.mutex.Unlock()

habits := make([]habit.Habit, 0)
for _, h := range hr.habits {
habits = append(habits, h)
}

// Ensure the output is deterministic by sorting the habits.


sort.Slice(habits, func(i, j int) bool { #A
return habits[i].CreationTime.Before(habits[j].CreationTime)
})

return habits, nil


}

4. Add the ListHabits method to the service, following the pattern of
internal/server/create.go. Isolate the transformation of the domain
structure into the generated structure in a separate function, and unit-test it
too (see the sketch below). We decided that a repository containing no habits
shouldn’t be a problem or return an error, but feel free to make a different
choice here. Also, think about determinism: if the repository contains two
elements, should they always be returned in the same order by the endpoint?
Determinism is very important, and we can only recommend enforcing it
wherever possible. Testing deterministic endpoints is a lot simpler than
testing non-deterministic endpoints!
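One possible shape for that conversion helper (the name is our assumption):

// domainToAPI converts a domain habit into its generated API counterpart.
func domainToAPI(h habit.Habit) *api.Habit {
    return &api.Habit{
        Id:              string(h.ID),
        Name:            string(h.Name),
        WeeklyFrequency: int32(h.WeeklyFrequency),
    }
}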

5. Test manually with grpcurl or a similar tool.

Now that you trust that this new endpoint works the way you expect, we can
write an integration test.

10.5.2 Integration with go test

We want to write a test that will go through every layer of the service, all the
way to the network outside of it. Considering that our database is currently a
hacky in-memory thing, there is no point in mocking it, but when we finally
use a real database system, it will be necessary to either mock it or run an
instance locally.

This test will run the service for real and call it as any client would.

Run a service

First, create a test file internal/server/integration_test.go . In it, add a


TestIntegration function. There are several places an integration test file
can be stored; since we are testing the API of the server, we placed it close to
it.

First, we create a gRPC server instance and register our implementation - something
similar to what we already have in the main function. Second, we create a listener - by
giving an empty string as the address parameter, we ask it to find a free port
on the host and use it. Third, we run that server in a separate goroutine, so that
the rest of the test can keep running and calls can be made to it. Of course, we
need the server to stop at the end of the test, whenever that is.

If we need to write any “utility” function, we can start its implementation
with t.Helper(). This will tell the Go test suite to ignore this layer when an
error is surfaced.

Listing 10.26 integration_test.go: start the service

func TestIntegration(t *testing.T) {


grpcServ := newServer(t)
listener, err := net.Listen("tcp", "")
require.NoError(t, err)

wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
err = grpcServ.Serve(listener)
require.NoError(t, err)
}()
defer func() {
// terminate the GRPC server
grpcServ.Stop()
// when that is done, and no errors were caught, we can end this test
wg.Wait() #A
}()

func newServer(t *testing.T) *grpc.Server {


t.Helper() #B
s := server.New(repository.New(t), t) #C

return s.registerGRPCServer()
}

The server is running; a client can enter the scene.

Create a client

A client needs to know to which address to send its requests and what the
shape of the server is (what the endpoints are). We create a function to build
that new client - it takes the address as a parameter.

Note that we need to pass some credentials to connect to the server. The grpc
library kindly offers a function that generates credentials that disable
transport security (TLS). While this would usually be a security risk, we're
running our server in a very restricted environment, and we can accept not
having to pass real credentials. Depending on which network the request will be
sent through, you might have to use proper credentials, or you might be able to
use the insecure package to generate some for you.

Listing 10.27 integration_test.go: create a client

func TestIntegration(t *testing.T) {


...
// create client
habitsCli, err := newClient(t, listener.Addr().String())
require.NoError(t, err)
}

func newClient(t *testing.T, serverAddress string) (api.HabitsClient, error) {


t.Helper()
creds := grpc.WithTransportCredentials(insecure.NewCredentials()) #A
conn, err := grpc.Dial(serverAddress, creds)
if err != nil {
return nil, err
}

return api.NewHabitsClient(conn), nil


}

The scene is set; we can start the scenario.

Run scenario

As we are only testing two endpoints, we can create a function for the happy
path of each of them. Then we create an error path function for
CreateHabit, because ListHabits never returns a business error.

Here is an example with the list habits endpoint, which is the trickier one: it
returns generated IDs whose values will change at every run. So, we overwrite
them after checking they have been filled.

Listing 10.28 integration_test.go: function to list a Habit

func listHabitsMatches(t *testing.T, habitsCli api.HabitsClient, expected []*api.Habit) {


list, err := habitsCli.ListHabits(context.Background(), &api.ListHabitsRequest{})
require.NoError(t, err)

for i := range list.Habits {


assert.NotEqual(t, "", list.Habits[i].Id)
list.Habits[i].Id = "" #A
}
assert.Equal(t, list.Habits, expected) #B
}

Consider the option of basing your integration test on a struct that holds the
client as a field and offers methods that wrap calls to the client, to make
your test easier to read. We decided to use functions only, but all of them will
start with the same two arguments, which can become very verbose - and is
usually a cue for refactoring.
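
If you choose the struct-based approach instead, a minimal sketch could look like this (the habitsTester name and its method are ours, not taken from the repository):

// habitsTester wraps the gRPC client so that scenario steps read naturally
// and don't have to repeat the *testing.T and client arguments everywhere.
type habitsTester struct {
    t   *testing.T
    cli api.HabitsClient
}

// addHabit creates a habit and fails the test on error.
func (ht habitsTester) addHabit(freq *int32, name string) {
    ht.t.Helper()
    _, err := ht.cli.CreateHabit(context.Background(), &api.CreateHabitRequest{
        Name:            name,
        WeeklyFrequency: freq,
    })
    require.NoError(ht.t, err)
}

Each scenario step then reads as tester.addHabit(nil, "walk in the forest"), without repeating t and the client.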

With these helper functions, the scenario can look fairly readable:
Listing 10.29 integration_test.go: scenario in the code

// add 2 habits
addHabit(t, habitsCli, nil, "walk in the forest")
addHabit(t, habitsCli, ptr(3), "read a few pages")
addHabitWithError(t, habitsCli, 5, " ", codes.InvalidArgument)

// check that the 2 habits are present


listHabitsMatches(t, habitsCli, []*api.Habit{...})

// ...

func ptr(i int32) *int32 {


return &i
}

func addHabit(t *testing.T, habitsCli api.HabitsClient, freq *int32, name string) {


_, err := habitsCli.CreateHabit(context.Background(), &api.CreateHabitRequest{
Name: name,
WeeklyFrequency: freq,
})
assert.NoError(t, err)
}

Make sure you isolate your different scenarios so that they can run in parallel
- you can even run as many instances of the server in parallel, one for each
integration scenario. You can also use this opportunity to play with
concurrency and call Add a large number of times concurrently to check for
performance.

Using testing.Short to only run lightweight tests

So far, our test resembles any other unit test that we've written - apart from the
fact that it's called "integration". Sometimes, these integration tests can be quite
intense, because they go through lots of features and cases, or because they include
some benchmarks or run some load or performance tests. These tests usually
take quite some time, and it isn't advised to include them in continuous
integration toolchains, as they might slow down the delivery process. For
instance, it could be a requirement to have "light" tests run on pull requests,
but "heavy" tests run on tagging and image building. The go test ./...
command accepts a -short flag. Setting this flag on the command line will
change the output of the testing.Short() function, which we can invoke in
any test.

Let’s start our TestIntegration with a check on that flag:

func TestIntegration(t *testing.T) {


// Skip this test when running lightweight suites
if testing.Short() {
t.Skip()
}

grpcServ := newServer(t)

Now, let’s have a look at the output of go test -v ./... . Using the -v flag
makes the output verbose and lists each test function called. The output
should contain TestIntegration :

> go test -v ./...


...
ok learngo-pockets/habits/internal/habit 1.004s
=== RUN TestIntegration
create.go:17: Create request received: name:"walk in the forest"
...
--- PASS: TestIntegration (0.00s)
...

Let’s try the same, but this time with the short flag: go test -v -short
./... . This time, the output should explicitly indicate that TestIntegration
was skipped:

> go test -v -short ./...


...
ok learngo-pockets/habits/internal/habit 1.004s
=== RUN TestIntegration
integration_test.go:23:
--- SKIP: TestIntegration (0.00s)
...

We now have a way of running only the "lightweight" tests in our CI
pipelines - simply, when writing a more expensive or resource-consuming test, check
for that -short flag. Its presence should be an indicator that we want a quick
result.

Now, so far, we’ve been using our own in-memory database, which we quite
trust. Indeed, if that database were to be unavailable, we’d have serious issues
in our server itself, since the server and the database are part of the same
program. But most of the time, the database is a remote entity, one that could
behave erratically, because of network issues, or external load, or lots of
other pesky bugs - or, worse, what if our own query crashes that database?
We already handle the case when the database returns an error, but what if it
doesn’t answer our query? How long should we wait before realising
something is wrong?

10.6 Getting the best out of the context


When relying on remote services, a good practice is usually to allow the
callee to provide a response within a specific time frame. There are two ways
of expressing a time limit - either by providing a timeout, just as Mission
Impossible’s “This message will self-destruct in 5 seconds”, or by providing
a deadline, “You have until Friday, 11 AM”. The choice of which one to use
depends on the activity being performed, but the former is more common
than the latter, when it comes to calls to remote network entities.

10.6.1 What is a context?

Earlier in the chapter, we haven’t detailed what aContext really is, but now
is the time to run go doc context.Context to find out. As we can read there,
the purpose of a context is to carry around deadlines, cancellation signals,
and values across API boundaries.

The documentation also tells us that a Context is an interface with the
following methods:

Value(key any) any
Deadline() (deadline time.Time, ok bool)
Done() <-chan struct{}
Err() error

Before we go any further, it’s important to note that even though


context.Context is an interface, it is one of the few that you should never
need to implement.
The Value method of a context is here to implement the "carrying
values across API boundaries" requirement. While a context can be seen as a
key-value storage, we highly recommend you don't think of it that way. If
you place important values inside a context, it is no longer clearly visible
what input the different functions require. It's better to stick to
non-critical data, such as monitoring identifiers or request identifiers,
in the context. Business values shouldn't be passed via the context. If you're
thinking of this as an option, we recommend going for an alternative. We'll
see an example below where the context's storage feature is used.

The Deadline method returns the context's deadline, if one is set, along with a
boolean indicating whether there is one. The deadline is the time when the context
will start saying it's reached its expiration date to whomever might ask.

The Done method returns a channel that will be closed when the context has
reached its end. Calling Done() is simpler than comparing Deadline() with
time.Now(), so this is what is usually done.

Finally, the Err method returns an error describing why the channel returned
by Done is closed, or nil if it isn’t closed yet.

Now, let’s have a look at how to create contexts.

10.6.2 Create a context

Golang’s context package offers several functions that allow for the creation
of a context. They are mostly divided into two categories: those that create a
child context from a parent one, and those that spring one out of the blue.

This latter set contains only context.Background() and context.TODO().

We recommend always creating your application's context in your main
function, and passing it around to any dependency that might need it. Create
it with the Background function, and try to avoid calling TODO. Overall, your
application should have a single parent context.

Now that we’ve got a context, we can create children. They can come in a
variety of shapes, but the main difference lies in how we want to set their
Deadline property. We can call WithDeadline to provide a specific
timestamp at which time the child will be cancelled, or WithTimeout if we
want to specify how long a context should “live”. The second option is by far
the most common when it comes to making calls to remote services.
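
As a minimal sketch of these recommendations (the durations are arbitrary, picked only for the example):

func main() {
    // The single parent context of the application.
    ctx := context.Background()
    run(ctx)
}

func run(ctx context.Context) {
    // A child that is cancelled automatically after 2 seconds.
    timeoutCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
    defer cancel() // always release the child's resources

    // A child that is cancelled at a precise point in time.
    deadlineCtx, cancelAt := context.WithDeadline(ctx, time.Now().Add(500*time.Millisecond))
    defer cancelAt()

    _, _ = timeoutCtx, deadlineCtx // pass these to the calls that need them
}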

Most of the time, however, a function will receive a context as one of its
parameters. There is an unwritten convention to always provide the context as the
first parameter of a function call - and, just as we usually name errors
err, we very often call our contexts ctx. And when a function
receives a context, it should not try to create a new one with Background() or
TODO(). In our gRPC generated code, we can observe that the endpoints'
signatures all start with an incoming context - that's the one we should be
using.

While it might seem repetitive to always have to pass the context as a
parameter, we strongly encourage you to resist the urge of using composition
to save a context into a variable or struct field.

10.6.3 Using a context

When should we create a child context, though? Why not provide the parent
context - after all, it might have a deadline itself? The answer comes down to
good coding practice. It's about controlling with precision every call that
is made across your network. If a service needs to call two other services to
answer a request, we want to know which of these two is taking an awfully
long time to return a response, if any. To achieve this, each remote call must
use its own context, with its own deadline. Depending on your
application, a timeout value can range from 10 milliseconds to a few
days. Don't be too strict, and take network latency into account.

Let's look at an example with our database. Say we want to ensure
that the repository call in our Create endpoint doesn't take too long. We do
this by creating a child context with a timeout (one second in the listing below),
and we pass it to our repository call.

Listing 10.30 create.go: Adding a context around the db call

// Create adds a habit into the DB.


func Create(ctx context.Context, db habitCreator, h Habit) (Habit, error) {
h, err := validateAndCompleteHabit(h)
if err != nil {
return Habit{}, err
}

dbCtx, cancel := context.WithTimeout(ctx, time.Second) #A


defer cancel()
err = db.Add(dbCtx, h) #B
if err != nil {
return Habit{}, fmt.Errorf("cannot save habit: %w", err)
}

return h, nil
}

If we run this, everything works fine. But it’s not tested, and we should test it.
It’s even worse than not tested - it actually breaks our existing tests! Indeed,
if you remember, we are using mocks, and mocks are very strict about what
they expect. Our tests didn’t worry too much about the context - after all, we
didn’t fiddle with it so far. But now, the context used to call Add in the
Create endpoint is not the Background one any more. We should update the
test, but how can we provide the exact same context? We would need to
know exactly when the deadline is to be able to expect it properly.

Many mocking libraries face this issue at some point. Minimock has decided
to expose a minimock.AnyContext variable that will match any
context.Context variable. Some other mocking libraries go a step beyond
and propose a mock.Anything variable that can be used as a wildcard for any
input parameter. We only need a wildcard for the context, so we'll limit ourselves
to this option.

Listing 10.31 create_test.go: Mocking the child context

func TestCreate(t *testing.T) {


// ...
"nominal": {
db: func(ctl *minimock.Controller) *mocks.HabitCreatorMock {
db := mocks.NewHabitCreatorMock(ctl)
db.AddMock.Expect(minimock.AnyContext, h).Return(nil) #A
return db
},
expectedErr: nil,
},
"error case": {
db: func(ctl *minimock.Controller) *mocks.HabitCreatorMock {
db := mocks.NewHabitCreatorMock(ctl)
db.AddMock.Expect(minimock.AnyContext, h).Return(dbErr) #A
return db
},
expectedErr: dbErr,
},

While this fixes the current tests, it doesn't test the new feature of having a
timeout on our database call. For this, we will need to improve our mock.

As you now know, a context is something that can reach its deadline, and,
when this happens, the channel returned by Done() is closed - which means
reading from it starts returning a zero value instead of being a blocking call.
This is how applications check for a cancelled context or an expired timeout. The
following piece of code is present in various forms in most libraries that
handle timeouts:

select {
// Read from channel used by backend to communicate response
case response := <-responseChan:
return response, nil
// Check for deadline
case <-ctx.Done():
return nil, ctx.Err()
}

In this select, whichever happens first causes the function to return - either we
received a response, or the deadline was met. The line case <-ctx.Done():
appears more than 60 times in the standard library alone - and it's always
followed by returning the cause of the cancellation, via ctx.Err().

Let's add a test that implements this logic. For this, we can't use Expect, as
it immediately returns the specified values. Instead, we'll have to overwrite
the behaviour of the Add method, which minimock allows with the Set
method:

Listing 10.32 create_test.go: Testing the timeout

func TestCreate(t *testing.T) {


// ...
"db timeout": {
db: func(ctl *minimock.Controller) *mocks.HabitCreatorMock {
db := mocks.NewHabitCreatorMock(ctl)
db.AddMock.Set(
func(ctx context.Context, habit habit.Habit) error {
select {
// This tick is longer than a database call
case <-time.Tick(2 * time.Second):
return nil
case <-ctx.Done():
return ctx.Err()
}
})
return db
},
expectedErr: context.DeadlineExceeded, #A
},

So, as we’ve seen, contexts can be used to detect unexpectedly long remote
calls. Some functions allow you to register pairs of key-values inside a
context, but we recommend keeping that option as a last resort.
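
Here is the promised example of that last-resort storage feature - a minimal sketch with an unexported key type, used only for a non-critical request identifier (the names are ours):

// ctxKey is unexported so our keys cannot collide with other packages' keys.
type ctxKey string

const requestIDKey ctxKey = "request-id"

// withRequestID returns a child context carrying a monitoring identifier.
func withRequestID(ctx context.Context, id string) context.Context {
    return context.WithValue(ctx, requestIDKey, id)
}

// requestIDFrom retrieves the identifier, if any was stored.
func requestIDFrom(ctx context.Context) (string, bool) {
    id, ok := ctx.Value(requestIDKey).(string)
    return id, ok
}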

Instead, let’s resume our habits server, and implement the final endpoint -
one that allows us to keep track of what we do on a weekly basis.

10.7 Track your habits


Congratulations: at this point of the chapter, you know all the basics about
gRPC in Go! This next section is a guided exercise where you will prove
your autonomy and test your aggregated knowledge. If you get stuck, you can
find all the code in the repository.

First, let's quickly restate the final goal of this pocket project and define what
tracking a habit means. We have the possibility to create a list of habits with a
target weekly frequency - how about being able to tick one of our habits
whenever we achieve it and to retrieve the current status, so we can plan the rest
of the week? Am I done for the week? Should I block a time slot to go for a
walk? You will be able to answer all these questions!

We will build the following scenario:


1. Create several habits
2. List the created habits
3. Tick the habits that you achieved
4. Get the status of the habits

We are missing the bricks to fulfil steps 3 and 4, so let's go for the
implementation. On the API side, we will need two new endpoints: TickHabit and
GetHabitStatus.

10.7.1 Tick a habit

Let’s define a new endpoint TickHabit on the proto side with its associated
request and response.

Listing 10.33 service.proto: TickHabit definition

// TickHabit is the endpoint to tick a habit.


rpc TickHabit(TickHabitRequest) returns (TickHabitResponse);

// TickHabitRequest holds the identifier of a habit to tick it.


message TickHabitRequest {
// The identifier of the habit we want to tick.
string habit_id = 1;
}

// TickHabitResponse is the response to the TickHabit endpoint.


// Currently empty but open to grow.
message TickHabitResponse { #A
}

Do not forget to regenerate the Go library!

Then, add the implementation on the server with the following signature:

TickHabit(ctx context.Context, request *api.TickHabitRequest) (*api.TickHabitResponse, error)

Ticks and habits being different notions, we would store them in different
tables in an SQL database. In our memory implementation, we will store the
ticks in a structure of its own, next to the habits. This will allow us, if we
develop a UI, to retrieve only the list of habits or the full status of a habit for
a week.
10.7.2 Store ticks per week

This follows the same logic as what we did previously for the habits in memory; the
only tricky part is the data definition. We want to store all the ticks for each
habit, and because we want to get a weekly status, we will store ticks grouped
by week. The standard library's time package provides a very useful method
named ISOWeek() that returns the ISO 8601 year and week number in which
that time occurs. Running go doc time.Time.ISOWeek returns:

func (t Time) ISOWeek() (year, week int)

We will use the naming ISOWeek in our code. Let's create a new package
called isoweek and a new file where we will define an ISO8601 structure
holding a Year and a Week.

Listing 10.34 isoweek/isoweek.go: ISO8601 structure

package isoweek

// ISO8601 holds the number of the week and the year.


type ISO8601 struct {
Year int
Week int
}
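
The package can also expose a small helper that computes the ISO8601 value of any timestamp. Here is a possible sketch (the function name At is our choice, not the repository's):

// At returns the ISO 8601 year and week in which t occurs.
func At(t time.Time) ISO8601 {
    year, week := t.ISOWeek()
    return ISO8601{Year: year, Week: week}
}

Both the repository and its tests can then call isoweek.At(time.Now()), or At on any other timestamp.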

It is now easy to define the data storage type in the Repository. In order to
retrieve the current status of a habit for the current week, our storage will
map each habit id to a map from ISO week to its associated events, which
are timestamps:

storage map[habit.ID]map[isoweek.ISO8601][]time.Time

For more readability, we chose to have a custom type ticksPerWeek, which is
a map holding all the timestamps per ISO week. It will now be very easy to
retrieve the current status of a habit at the current time. If you want to extend
the project, you can even have an endpoint retrieving the status for a given
week or date.

Let's add a new type of storage to HabitRepository and rename db into
habits to be more explicit.
Listing 10.35 repository/memory.go: ticks storage

// ticksPerWeek holds all the timestamps for a given week number.


type ticksPerWeek map[isoweek.ISO8601][]time.Time

// HabitRepository holds all the current habits.


type HabitRepository struct {
habits map[habit.ID]habit.Habit #A
ticks map[habit.ID]ticksPerWeek #B
}

Do you feel confident creating the needed methods? Let's not get ahead of ourselves
and create only the AddTick method for the moment. All the ISO8601 computation
logic is done in a dedicated function, so it will be easy to reuse across the other
endpoints and tests.

AddTick(_ context.Context, id habit.ID, t time.Time) error

Do not forget to verify that the habit's entry and the ISO week entry exist in the
storage (and create them if needed) before inserting a new tick.
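
One possible sketch of that repository method, reusing the At helper sketched above and the mutex seen in Listing 10.25 (exact names may differ in your code):

// AddTick inserts a tick for the given habit at time t.
func (hr *HabitRepository) AddTick(_ context.Context, id habit.ID, t time.Time) error {
    hr.mutex.Lock()
    defer hr.mutex.Unlock()

    week := isoweek.At(t)

    // Create the inner map the first time we record a tick for this habit.
    if hr.ticks[id] == nil {
        hr.ticks[id] = make(ticksPerWeek)
    }

    // Appending to a missing week works: the nil slice simply grows to one element.
    hr.ticks[id][week] = append(hr.ticks[id][week], t)

    return nil
}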

The full implementation of Tick on the domain side should now be ready!

Listing 10.36 habit/tick.go: Tick implementation

package habit

import (
"context"
"fmt"
"time"
)

//go:generate minimock -i habitFinder -s "_mock.go" -o "mocks"


type habitFinder interface {
Find(ctx context.Context, id ID) (Habit, error)
}

//go:generate minimock -i tickAdder -s "_mock.go" -o "mocks"


type tickAdder interface {
AddTick(ctx context.Context, id ID, t time.Time) error
}

// Tick inserts a new tick for a habit.


func Tick(ctx context.Context, habitDB habitFinder, tickDB tickAdder, id ID, t time.Time) error {
// Check if the habit exists.
_, err := habitDB.Find(ctx, id) #A
if err != nil {
return fmt.Errorf("cannot find habit %q: %w", id, err)
}

// Add a new tick for the habit.


err = tickDB.AddTick(ctx, id, t)
if err != nil {
return fmt.Errorf("cannot insert tick for habit %q: %w", id, err)
}

return nil
}

We now just have to call it on the server side and transform the request and
the response. Since we need to provide a timestamp for when the habit was
ticked, we could either have it passed by the caller or set by the gRPC endpoint. If
the value is set in the server layer rather than read from the request, we need
to remember that the server and the client might be in different timezones,
and that the "current day" is only a relative notion.

Wait! What happens if I try to tick a habit that does not exist?

10.7.3 Handle corner cases

If the habit does not exist in the habits repository, we do not want to have
inconsistent data and store a new tick for an unknown habit. Let’s create a
Find method on the habit repository:

Find(ctx context.Context, id habit.ID) (habit.Habit, error)

A good practice is to create a custom error that is checked on the server side
to return the proper gRPC code. Here, we chose to switch on the domain
error and convert it into codes.NotFound, for example, if the habit does not exist
in the database.

err := habit.Tick(ctx, s.db, s.db, habit.ID(request.HabitId), time.Now())
if err != nil {
    switch {
    case errors.Is(err, r.ErrNotFound):
        return nil, status.Errorf(codes.NotFound, "couldn't find habit %q in repository", request.HabitId)
    default:
        return nil, status.Errorf(codes.Internal, "cannot tick habit %q: %s", request.HabitId, err.Error())
    }
}
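
On the repository side, Find can return that custom error when the habit is unknown. Here is a minimal sketch, with the sentinel error matching the r.ErrNotFound checked above (r being the alias under which the repository package is imported):

// ErrNotFound is returned when a habit is absent from the repository.
var ErrNotFound = errors.New("habit not found")

// Find returns the habit with the given id, or a wrapped ErrNotFound.
func (hr *HabitRepository) Find(_ context.Context, id habit.ID) (habit.Habit, error) {
    hr.mutex.Lock()
    defer hr.mutex.Unlock()

    h, ok := hr.habits[id]
    if !ok {
        return habit.Habit{}, fmt.Errorf("%w: %q", ErrNotFound, id)
    }

    return h, nil
}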

You can test the endpoint manually using grpcurl on a created habit, for
instance one with the following ID: 98ab1bbe-41d5-4ed3-8f33-e4f7bec448c8.

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"98ab1bbe-41d5-4ed3-8f33-e4f7bec448c8"}' \
localhost:28710 \
habits.Habits/TickHabit

Upon inspection, no errors have been returned. Our interest now lies in
determining the frequency of calls made to the Tick endpoint for a given
habit. Let us proceed to retrieve this information.

10.7.4 Get habit status

The last task entails retrieving the count of habit ticks for a given week. This
requires both the habit ID and a timestamp to specify the desired week. We
shall proceed by implementing an endpoint capable of accepting the ID and
timestamp parameters, which will then furnish habit details alongside the tick
count. The proto definition looks like this:

Listing 10.37 api/proto/service.proto: GetHabitStatus definition

// GetHabitStatus is the endpoint to retrieve the status of ticks of a habit.


rpc GetHabitStatus(GetHabitStatusRequest) returns (GetHabitStatusResponse);

// GetHabitStatusRequest is the request to GetHabitStatus endpoint.


message GetHabitStatusRequest {
// The identifier of the habit we want to retrieve.
string habit_id = 1;

// The time for which we want to retrieve the status of a habit.


optional google.protobuf.Timestamp timestamp = 2;
}
// GetHabitStatusResponse is the response to retrieving the status of a habit.
message GetHabitStatusResponse {
// All the information of a habit.
Habit habit = 1;
// The number of times the habit has been ticked for a given week.
int32 ticks_count = 2;
}

The remainder of the steps? You should be able to do it all alone!

1. Create a method on the server side
2. Isolate the logic on the domain
3. Retrieve the data on the repository side (one possible sketch follows this list)
4. Plug all the calls
5. Do not forget to test!
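
As one possible shape for step 3, the repository could count the ticks of a habit for the ISO week containing a given time. The method name is ours; adapt it to your design:

// CountWeeklyTicks returns how many times the habit was ticked during the
// ISO week containing t. Reading from missing map entries is safe: it yields
// a nil slice, whose length is 0.
func (hr *HabitRepository) CountWeeklyTicks(_ context.Context, id habit.ID, t time.Time) (int, error) {
    hr.mutex.Lock()
    defer hr.mutex.Unlock()

    return len(hr.ticks[id][isoweek.At(t)]), nil
}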

If you test with grpcurl, you should obtain something like the following:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"98ab1bbe-41d5-4ed3-8f33-e4f7bec448c8"}' \
localhost:28710 \
habits.Habits/GetHabitStatus

{
"habit": {
"id": "98ab1bbe-41d5-4ed3-8f33-e4f7bec448c8",
"name": "read a few pages",
"weeklyFrequency": 3
},
"ticksCount": 2
}

10.7.5 Add a timestamp

It would be more fun to be able to dive into the past to tick a habit we forgot
to update. To do so, we can extend the two last endpoints by adding a
timestamp to the requests.

In proto, there are different types we can import that will be nicely serialised
in the programming language you choose. You can always refer to the full
list of well-known types in the Protocol Buffers documentation
(https://protobuf.dev/reference/protobuf/google.protobuf/). The expected
format is an RFC 3339 date string such as "2024-01-25T10:05:08+00:00".
Let’s import the timestamp type by adding this line in the top imports of our
file service.proto.

import "google/protobuf/timestamp.proto";

Here is an example of GetHabitStatusRequest showing how to add the timestamp
as a new field using the proto Timestamp type. By default, a field is optional
in proto version 3; the optional keyword, as used in Listing 10.37, only makes
its presence explicitly detectable:

Listing 10.38 api/proto/service.proto: Using the timestamp type

message GetHabitStatusRequest {
string habit_id = 1;
google.protobuf.Timestamp time = 2;
}

Update the two endpoints and test your code!
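
On the Go side, the generated field is a *timestamppb.Timestamp (from google.golang.org/protobuf/types/known/timestamppb). A small, hedged sketch of the conversion, defaulting to the current time when the field was not provided:

// when returns the time carried by the request, or time.Now() if none was set.
func when(ts *timestamppb.Timestamp) time.Time {
    if ts == nil {
        return time.Now()
    }
    return ts.AsTime()
}

The endpoint can then call, for instance, when(request.GetTimestamp()) - or GetTime(), depending on the field name you chose - before handing the value to the domain.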

10.7.6 Habit Tracker in action

We are now able to play with the habit tracker, so let's run a full scenario
where we add habits, tick them, retrieve their status, tick again with a
timestamp, and retrieve their status for the given date. We can do it manually
using grpcurl in the terminal, or we can update the integration test. Let's
first look at the grpcurl commands and compare the responses.

1. Create a habit “Write some Go code”


Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"name":"Write some Go code", "weekly_frequency":3}' \
localhost:28710 \
habits.Habits/CreateHabit
Response:

{
"habit": {
"id": "94c573f1-df03-45ec-97fc-8b8fc9943472",
"name": "Write some Go code",
"weeklyFrequency": 3
}
}

2. Create a habit “Read a few pages”


Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"name":"Read a few pages", "weekly_frequency":5}' \
localhost:28710 \
habits.Habits/CreateHabit

Response:

{
"habit": {
"id": "96b72dce-7a2e-43ce-9091-0f9fc447b8a1",
"name": "Read a few pages",
"weeklyFrequency": 5
}
}

3. Retrieve the list of the habits


Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{}' \
localhost:28710 \
habits.Habits/ListHabits

Response:

{
"habits": [
{
"id": "94c573f1-df03-45ec-97fc-8b8fc9943472",
"name": "Write some Go code",
"weeklyFrequency": 3
},
{
"id": "96b72dce-7a2e-43ce-9091-0f9fc447b8a1",
"name": "Read a few pages",
"weeklyFrequency": 5
}
]
}

4. Tick habit “Write some Go code” without a timestamp because you just
did it
Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"94c573f1-df03-45ec-97fc-8b8fc9943472"}' \
localhost:28710 \
habits.Habits/TickHabit

Response:

5. Get the status of the habit “Write some Go code” for the current week
Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"94c573f1-df03-45ec-97fc-8b8fc9943472"}' \
localhost:28710 \
habits.Habits/GetHabitStatus

Response:

{
"habit": {
"id": "94c573f1-df03-45ec-97fc-8b8fc9943472",
"name": "Write some Go code",
"weeklyFrequency": 3
},
"ticksCount": 1
}

6. Tick habit “Read a few pages” without a timestamp because you are
doing it
Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"96b72dce-7a2e-43ce-9091-0f9fc447b8a1"}' \
localhost:28710 \
habits.Habits/TickHabit

Response:

7. Get the status of the habit “Read a few pages” for the current week
Request:

$ grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"96b72dce-7a2e-43ce-9091-0f9fc447b8a1"}' \
localhost:28710 \
habits.Habits/GetHabitStatus

Response:

{
"habit": {
"id": "96b72dce-7a2e-43ce-9091-0f9fc447b8a1",
"name": "Read a few pages",
"weeklyFrequency": 5
},
"ticksCount": 1
}

8. Tick habit “Read a few pages” with a timestamp in the previous week
Request:

grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"96b72dce-7a2e-43ce-9091-0f9fc447b8a1", "timestamp": "2024-01-
24T20:24:06+00:00"}' \
localhost:28710 \
habits.Habits/TickHabit
Response:

9. Get the status of the habit “Read a few pages” during the previous week
Request:

grpcurl \
-import-path api/proto/ \
-proto service.proto \
-plaintext -d '{"habit_id":"96b72dce-7a2e-43ce-9091-0f9fc447b8a1", "timestamp": "2024-01-
24T20:24:06+00:00"}' \
localhost:28710 \
habits.Habits/GetHabitStatus
Response:

{
"habit": {
"id": "96b72dce-7a2e-43ce-9091-0f9fc447b8a1",
"name": "Read a few pages",
"weeklyFrequency": 5
},
"ticksCount": 1
}

Launching the commands manually can be a bit annoying and repetitive, so
let's automate this and update the integration test.

First, we can write helper functions, as we did previously, to tick a habit and to
verify that the habit status matches when calling GetHabitStatus. These
functions will both make the call to the API and validate the returned values.
Listing 10.39 integration_test.go: Add TickHabit and GetStatus calls

func tickHabit(t *testing.T, habitsCli api.HabitsClient, id string) {


_, err := habitsCli.TickHabit(context.Background(), &api.TickHabitRequest{
HabitId: id,
})
require.NoError(t, err) #A
}

func getHabitStatusMatches(t *testing.T, habitsCli api.HabitsClient, id string, expected *api.GetHabitStatusResponse) {
    h, err := habitsCli.GetHabitStatus(context.Background(), &api.GetHabitStatusRequest{HabitId: id})
    require.NoError(t, err)

    assert.Equal(t, expected.Habit, h.Habit) #B
    assert.Equal(t, expected.TicksCount, h.TicksCount)
}

The generated ID is needed to call the TickHabit and GetHabitStatus endpoints;
we can retrieve it from the addHabit helper function.

Listing 10.40 integration_test.go: Update addHabit to retrieve the id

func addHabit(t *testing.T, habitsCli api.HabitsClient, freq *int32, name string) string { #A
resp, err := habitsCli.CreateHabit(context.Background(), &api.CreateHabitRequest{
Name: name,
WeeklyFrequency: freq,
}) #B
require.NoError(t, err)

return resp.Habit.Id #C
}

You can now add the calls to the main test function by calling the two helpers
above.

Listing 10.41 integration_test.go: Call tickHabit and verify the statuses

// in TestIntegration, after the habits have been added and their IDs saved in idWalk and idRead:
// add 2 ticks for Walk habit
tickHabit(t, habitsCli, idWalk)
tickHabit(t, habitsCli, idWalk)

// add 1 tick for Read habit


tickHabit(t, habitsCli, idRead)
// check that the right number of ticks are present
getHabitStatusMatches(t, habitsCli, idWalk, &api.GetHabitStatusResponse{
Habit: &api.Habit{
Id: idWalk,
Name: "walk in the forest",
WeeklyFrequency: 1,
},
TicksCount: 2,
})

getHabitStatusMatches(t, habitsCli, idRead, &api.GetHabitStatusResponse{


Habit: &api.Habit{
Id: idRead,
Name: "read a few pages",
WeeklyFrequency: 3,
},
TicksCount: 1,
})

Congratulations! You have built a solid habit tracker backend which can
easily be reused behind a frontend application. You can commit and enjoy your new
project!

10.8 Summary
The go generate command is a Go tool that gives the possibility to
generate code from the source code. It scans the source files for
comments with the specific syntax //go:generate and executes the
commands that follow.
gRPC, standing for Google Remote Procedure Call, is a framework to
connect services, devices, applications and more. It is a powerful and
efficient framework for transporting light-weight messages in Protobuf
format.
Protobuf, short for Protocol Buffers, provides serialisation for structured
data while guaranteeing high performance. It comes with its own syntax
composed of Protocol Buffer messages and services written in .proto
files.
Protobuf messages are language-neutral: thanks to the Protobuf compiler
(protoc), you can generate interfaces and structures in many
programming languages (Go, Java, C++ and more).
Communications between gRPC clients and servers are standardised thanks to
status codes defined by the RPC API. A status is composed of an integer
code and a string message. While designing the API, you should pick
the most appropriate return code for your use case, and you can always
refer to the documentation
(https://grpc.github.io/grpc/core/md_doc_statuscodes.html).
grpcurl is an open-source CLI tool that enables you to communicate
with gRPC servers easily. It is basically like curl but for gRPC. Human-
friendly, it allows you to define JSON requests instead of unreadable
bytes. It is very handy when you need to test services manually.
Declare small interfaces close to where they are used. This comes in handy when
testing, as you can mock your dependencies instead of counting on hardcoded
behaviours.
Dependency injection is a technique to give all the needed objects to a
function instead of creating or building them internally. Note that it
comes in very handy to mock the dependencies when testing the
function.
While testing, there are different ways of simulating dependency
behaviours; mock tools are handy for describing expected results. The
best-known mocking tools are mockgen (from the gomock project), mockery,
and minimock, which we used during the chapter.
context is a standard library package which provides the Context type and its
associated methods to carry information along requests and responses.
For example, when a user sends a request, you can store its identity in
the context and retrieve it later in the chain of functions. Contexts and
functions like WithCancel or WithDeadline are safe to use across multiple goroutines.
A program should create only one context and provide it to its functions.
Most of the time, the context will be coming from an external caller.
Creating children for a context is perfectly fine and encouraged.
Always pass a context.Context as the first parameter of a function,
even if you are not using it (you can use the blank identifier in this rare
case).
A context will be necessary any time we make a remote call across the
network, whichever protocol or framework is being used. Setting a
deadline for the remote call is a good safety net - otherwise, your
application calls might be hanging forever.
Don’t use a
context.Context as a key-value storage inside an
application.
Mocking functions that use a context is sometimes tricky, especially if
the function being tested creates a context of its own. To solve this,
many libraries offering mocks expose a variable that can be used as a
wildcard for the context.
Generated mocks are programmed to behave as specified. If they receive
a call for a precise set of parameters, they will return the specified
values. However, it is sometimes necessary to override this default
behaviour of always returning something, especially when testing the
behaviour when a deadline is reached.
Using testing.Short() allows us to know if the -short flag was
passed on the go test command line. Long or expensive tests should be
skipped altogether when this flag is set, by using t.Skip().
Appendix A. Installation steps
Any compiled language needs, first and foremost, a compiler.

The Go toolchain was initially written in C. Since Go 1.5, it is written
directly in Go, following the principle of eating your own dog food. As
everything is open source, you can at any time suggest improvements, or at
least look into the standard library's source code to see how other developers
write their Go.

A.1 Install
Start by visiting the Go website. It explains in a simple way (did we tell you
that Go aims for simplicity?) how to download the installer and run it on
either Linux, Mac or Windows. Follow the installation steps and do not
forget to add go to your path.

There is no good reason to pick old versions. Just for the record, we are
writing this book using Go 1.20.

https://go.dev/doc/install

A.2 Check
As mentioned on the online installation guide, you can check the version of
Go that you are using and also verify that Go is properly installed by running
this command in any directory at all:

Listing A.1 Check the installation in your console

$ go version
go version go1.19.0 darwin/arm64

A.3 Go’s environment variables


Go, under the hood, uses several variables without being explicit about it. In
this section, we'll be looking closely at two of these variables - namely
GOROOT and GOPATH.

If you've just installed Go, these variables won't be set in your sessions. "But
how come Go uses them if they're not set?" you might ask. A very wise
question. Go is able to use default values for these (and more, as we'll see)
variables.

A.3.1 The go env command

The go command can access environment variables, just as any program
could. However, Go comes with an extra layer of variables, which aren't
visible to you from a terminal. These variables can be listed with the go env
command. go env will return all the Go environment variables it can access.
Typically, you will paste the output of this command along with any question
you post online or when you open a bug.

Alternatively, we can pass it a list of the variables we want to retrieve, which


limits the output. Here is an example of the results of this command. Fear not,
should your output differ - after all, we don’t share the same environment.

Listing A.2 Example go env output

$ go env -json GOBIN GOENV GOROOT GOPATH CGO_ENABLED


{
"GOBIN": "",
"GOENV": "/home/user/.config/go/env",
"GOPATH": "/home/user/go",
"GOROOT": "/usr/local/go",
"CGO_ENABLED": "1",
}

We won’t go through the long list of variables displayed by go env , as most


are beyond the scope of this book. For instance, the variables towards the end
of the list are related to CGo, the utility that allows integration of C code
within Go code.

The values returned by go env here are the default values, which are based
on your machine's architecture and your installation directory of Go. We
hardly ever need to modify any of these values, but, for your knowledge, they
can be overridden with regular environment variables.

Listing A.3 Overriding a Go env variable with an env variable

# On Linux:
$ CGO_ENABLED=0 go env -json CGO_ENABLED
{
"CGO_ENABLED": "0",
}

# On Windows:
C:\> set "CGO_ENABLED=0" & go env -json CGO_ENABLED

They can also be written in Go's configuration file (which is pointed to by the
GOENV variable) with the go env -w VARIABLE=VALUE command.

Listing A.4 Overriding a Go environment variable with go env

# On Linux:
$ go env -w GOBIN=/home/user/bin
$ go env GOBIN
/home/user/bin
# On Windows
C:\> go env -w GOBIN=%LOCALAPPDATA%

A.3.2 The GOBIN variable

The GOBIN variable contains the path of a directory in which Go will
place any tools you install with go install url@version. This is the
standard way of retrieving utilities in Go. More on this in section A.5.

$ go install golang.org/x/tools/cmd/godoc@latest

A.3.3 The GOPATH variable

The GOPATH variable contains a list of paths to directories in which Go will
resolve its dependencies. Earlier versions of Go used a decentralised
approach - should two projects require the same dependency, that
dependency would be downloaded twice and stored in the unregretted
vendor directory of each project. This is no longer the case: now, when a
dependency is needed, it is stored locally, and any project you have will use
the local version rather than re-download that dependency.

Make sure your workspace is contained in the GOPATH list of directories. If
you're working in ${HOME}/go, you'll be fine. Otherwise, you can use the
following command - Windows users should use a semicolon to add an extra
path:

Listing A.5 Add a directory to your go path

# On Linux
$ go env -w GOPATH=${GOPATH}:/path/to/workspace
# On Windows
C:\> go env -w "GOPATH=%GOPATH%;C:\path\to\workspace"

A.3.4 The GOROOT variable

The GOROOT variable points to the directory which contains the installation of
Go. We recommend not changing what is in the GOROOT tree, because
installing a new version of Go would mean discarding any of your changes
there. Similarly, it is not ideal to have any of the other Go environment
variables point to somewhere within the GOROOT.

As part of your installation, you made sure the path ${GOROOT}/bin was
included in your PATH environment variable - that's how we can run go. This
directory contains another executable - gofmt - which is in charge of
formatting code.

A.4 Hello!
The Hello World instructions are detailed on Go’s website, but here is a short
version. You will find, at the very beginning of the first project, in chapter 2,
explanations regarding each line of the typical hello world.

Create a hello folder in your ${GOPATH} with a file named hello.go and paste
the following:

Listing A.6 hello.go

package main

import "fmt"

func main() {
fmt.Println("Hello, World!")
}

To manage dependencies and versions in Go, we use modules. Run the


following command to create your first module:

$ go mod init

A go.mod file appeared. It contains the path to your module and your Go
version.

Then run your code in the same folder and wave at your screen: your
machine is trying to communicate!

Listing A.7 run a go file

$ go run hello.go
Hello, World!

Funny how quickly we personify our computer friends.

A.5 Installing new dependencies


When developing new functionalities, we like to build upon the work of
others. Go has two different tools to retrieve existing work, each with its
specific objective: go install and go get. They work in a very similar way,
but are used in different contexts.

Both commands accept the name of a repo, and will retrieve its contents at a
specific version. The main difference is that go get only retrieves the Go files
from that repo, whereas go install also compiles the retrieved package into an
executable. Which one you use will depend on what you need - do you want
the sources or the executable?

go install is rather recent, and some public repositories will still list go get
as the method to install their binaries. If you follow that path, you will be
faced with a message suggesting using go install - in these cases, use the
second option of go install listed below.

A.5.1 go install

If you need to retrieve a binary written in Go, use go install . It will fetch
the sources and compile them locally for your machine’s architecture. There
are three different ways of calling the go install tool.

The first option lets you retrieve a specific version of a repository. This is
very useful when writing automation tools, when you want a constant and
deterministic flow.

The second option is very similar, and retrieves the code at the latest version
of the repository, using its main (or master ) branch. This is the most common
way of using go install manually.

The last option will use the contents of your project’s go.mod file to find
which version to download and install. This will only work if you are running
the command from within a Go project.

Listing A.8 go install examples

# Install a specific version.


$ go install golang.org/x/tools/cmd/[email protected]

# Install the highest available version.


$ go install golang.org/x/tools/cmd/godoc@latest

# Install the version listed in your project's go.mod file.

$ go install golang.org/x/tools/cmd/godoc
missing go.sum entry for module providing package golang.org/x/tools/cmd/godoc; to add:
go mod download golang.org/x/tools

A.5.2 go get
If you need sources that your own code depends upon, go get will update
your module file (see below) and download the sources into
${GOPATH}/pkg/mod. You can then look into them in order to understand
what the code does. Similarly to go install, go get can be used with
different behaviours.

The first option you have is to run go get on a URL, without specifying a
version or anything. This will retrieve the contents of that dependency and add
it, along with its own dependencies, to your go.mod file - you are telling Go
you need that repo in your project.

The second option you have is to retrieve the code by explicitly giving the
name of a tag, branch, or commit. This is extremely useful when working on
two projects at the same time, or when working with a project that hasn't
been merged into main yet. This option will register that new package into
your go.mod file, at the desired version.

Listing A.9 go get examples

# Retrieve the experimental slices package using the version defined in the go.mod file.
$ go get golang.org/x/exp/slices
# Retrieve the experimental slices package, latest commit on branch master.
$ go get golang.org/x/exp/slices@master
# Retrieve the experimental slices package at a specific commit or tag.
$ go get golang.org/x/exp/slices@c99f07

A.6 Code editors


Go is supported by more and more code editors. As always in this situation,
the best tool is the one you know how to use. The official Go website, as we
write, lists three editors:

GoLand, by JetBrains. JetBrains has a long list of editors for various
languages, the most famous being IntelliJ for Java. You can install
GoLand as a standalone editor or add the Go plugin to any other editor
in the list.
Visual Studio Code, by Microsoft, has a Go extension.
vim-go is great if you already know vim.
A quick search around the web will easily give you instructions to add Go
support to your usual tool, if it is not already there, e.g. with GoSublime,
Atom with GoPlus, LiteIDE…

Go is now installed on your machine and you can start using it and follow the
book’s instructions. We learned about the Go environment variables and
important paths. Your terminal knows how to greet you in English using Go,
you can move to Chapter 2 and teach it to greet you in other human
languages.
Appendix B. Formatting cheat sheet
Go offers several verbs that are passed to printing functions to format Go
values. In this appendix, we present the most known verbs and special values
that can be passed to these functions. You can refer to these tables all along
the book. The result for each of the following entries was generated by
fmt.Printf("{{verb}}", value) .

Table B.1. Default

Verb Output for fmt.Printf("{{Verb}}", []int64{0, 1}) Description

%v [0 1] Default format

%#v []int64{0, 1} Go-syntax format

%T []int64 Type of the value

Table B.2. Integers

Verb   Output for fmt.Printf("{{Verb}}", 15)   Description

%d     15          Base 10
%+d    +15         Always show the sign
%4d    ␣␣15        Pad to 4 characters with spaces, right justified
%-4d   15␣␣        Pad to 4 characters with spaces, left justified
%04d   0015        Pad to 4 characters with prefixing zeros
%b     1111        Base 2 (binary)
%o     17          Base 8 (octal)
%x     f           Base 16, lowercase
%X     F           Base 16, uppercase
%#x    0xf         Base 16 with leading 0x

Table B.3. Floats

Verb     Output for fmt.Printf("{{Verb}}", 123.456)   Description

%e       1.234560e+02   Scientific notation
%f       123.456000     Decimal point, no exponent. The default precision is 6.
%.2f     123.46         Default width, precision 2 digits after the decimal point
%8.2f    ␣␣123.46       Width 8 chars, precision 2 digits after the decimal point. Default padding character is space
%08.2f   00123.46       Width 8 chars, precision 2 digits after the decimal point. Left-padding with specified character (here, 0)
%g       123.456        Exponent when needed, necessary digits only

Table B.4. Characters

Verb Output for fmt.Printf("{{Verb}}", 'A') Description

%c A Character

%q 'A' Quoted character

%U U+0041 Unicode
%#U U+0041 'A' Unicode with character

Table B.5. Strings or byte slices

Verb Result for "gophers" Description

%s gophers Plain string

%8s ␣␣ gophers Width 8, right justified

%-8s gophers ␣␣ Width 8, left justified

%q "gophers" Quoted string

%x 676f7068657273 Hex dump of byte value

%x 67 6f 70 68 65 72 73 Hex dump with spaces

Table B.6. Booleans

Verb   Output for fmt.Printf("{{Verb}}", true)   Description

%t     true   Equivalent to %v but only for booleans

Table B.7. Pointers

Verb   Output for fmt.Printf("{{Verb}}", new(int))   Description

%p     0xc0000b2000   Base 16 notation with leading 0x

Table B.8. Special values

Verb Description

\a U+0007 alert or bell

\b U+0008 backspace

\\ U+005c backslash

\t U+0009 horizontal tab

\n U+000A line feed or newline

\f U+000C form feed


\r U+000D carriage return

\v U+000b vertical tab

%% The % character: fmt.Printf("%05.2f%%", math.Pi) prints 03.14%

All Unicode values can be encoded with backslash escapes and can be used
in string literals.

There are four different formats:

\x followed by exactly two hexadecimal digits: \x64,
\ followed by exactly three octal digits: \144,
\u followed by exactly four hexadecimal digits: \u0064,
\U followed by exactly eight hexadecimal digits: \U00000064.

The escapes \u and \U represent Unicode code points. Here is an example of
a Unicode value embedded in a string:

fmt.Println("Thy bosom is endear\u00e8d with all hearts")


Appendix C. Zero values
C.1 What is a zero value
Sometimes while coding, you will need to use a variable without
assigning it a value. For example, a variable may need to be declared before a
condition so that it exists outside of it:

var counter int


if readline(&buf) {
counter += 1
}
fmt.Println(counter)

In this case, the variable counter is declared without an explicit initial value,
meaning it is given by default its zero value, which, for an integer, is 0. Note
that the initialisation to the zero value is done recursively for composite types
such as arrays and structures: each element or field will be set to the zero value
of its type.

C.2 The zero values of any types


Most zero values are intuitive, but there are a few that are worth keeping in
mind. Those that you should absolutely remember are:

Booleans have a zero value of false;
Slices and maps have a zero value equal to the nil entity.

You can find below a table of examples from the simplest to more complex
types with their zero-values. Feel free to come back to this table through the
book.

Table C.1. Zero values of any types

Variable declaration                          Observed zero-value

var r rune                                    r == 0

var f float32                                 f == 0.

var b bool                                    b == false

var i []int                                   i == nil (*)

var a [2]complex64                            a == [2]complex64{0+0i, 0+0i}

var m map[string]int                          m == nil (*)

type person struct {                          p has been allocated in memory, it can't be nil
    age int                                   (nil is also not of type person)
    name string                               p.age == 0
}                                             p.name == ""
var p person

var i *int                                    i == nil

type Doer interface {                         d == nil
    Do()
}
var d Doer

var c chan string                             c == nil

type translate func(string) string            t == nil
var t translate

(*) Maps and slices should be declared with the make() function. If not, they
take the zero value of nil, as described here. There are a few things to know
about slices and maps that can come in handy at any time.

C.3 Slices and maps specificities


Slices and maps have some specificities that should be noticed when
manipulating zero values and nil entries.

The len function can be called on nil slices or maps, and returns the value 0.
In the vast majority of cases, checking the length is better than checking whether the
structure is nil. Let's have a look at an example:

Listing C.1 Checking the length of a slice

func main() {
data := []string{}
fmt.Println(data == nil)
fmt.Println(len(data))
fmt.Println(data[0])
}
======
false
0
panic: runtime error: index out of range [0] with length 0

As you can see in this example, declaring an empty slice doesn't return a nil
slice. In order to be able to safely access any of its elements, we should always
check the length of a slice.

There is, however, one thing that we can do with uninitialised slices, and this
is appending entries to them. This won’t cause any panic error, and will
simply return a non-nil slice with the new elements, if there were any.

Listing C.2 Appending to a nil slice

func main() {
var data []string
fmt.Println(data == nil)
data = append(data, "hello")
fmt.Println(data)
}
======
true
[hello]

Maps follow the same logic: when declaring one without initialising it, the
map will be nil . The important information is that you can’t write data in
such a map.

Listing C.3 Trying to add elements in a nil map

func main() {
var m map[string]int
m["hello"] = 37
}
======
panic: assignment to entry in nil map

However, accessing items in a nil map will return the zero value of the item's
type (the item is obviously not present). This is useful information, because sometimes
you receive a map from a library. It's safe to check for keys in the map, but
it's even safer to check its length first.

Listing C.4 Trying to read elements from a nil map

func main() {
var m map[string]int
count, found := m["hello"]
fmt.Printf("found: %v; count: %d\n", found, count)
}
======
found: false; count: 0

C.4 Benefiting from zero-values


Suppose we want to count the number of different words in a text, and keep
track of their number of occurrences. One simple way of achieving this goal
is to use a map, where the keys will be the different words, and the values
will be their current count, as we iterate through the list of words.
Listing C.5 A structure to count different words in a text

wordCount := make(map[string]int)

When accessing an entry absent from this map - a word that we haven't seen
so far - the returned value at the index of the new word will be the zero value
of the integer type: 0. This is extremely convenient, as it means we can
consider words that haven't been seen so far as words that have been seen
zero times. Recording an occurrence of a word doesn't need any extra effort
whether or not the word had been registered before: we simply add 1 to the
counter.

Listing C.6 Counting different words in a text

import (
"fmt"
"strings"
)

func countWords(s string) {


wordCounter := make(map[string]int)
for _, word := range strings.Fields(s) {
wordCounter[word]++
}

// print results
for word, count := range wordCounter {
fmt.Printf("We recorded the word %q %d time(s).\n", word, count)
}
}

func main() {
countWords("to be or not to be")
}
======
We recorded the word "or" 1 time(s).
We recorded the word "not" 1 time(s).
We recorded the word "to" 2 time(s).
We recorded the word "be" 2 time(s).
Appendix D. Benchmarking
One of the great tools Go offers is a benchmarking command. Writing
benchmarks to compare the allocation of memory and the execution time is
extremely simple - it’s very similar to writing a test over a function.

We'll use the type B, defined in the testing package. You'll never guess
what B stands for…

The type B has one exposed field, an integer N, which counts the number of
iterations the benchmark has executed. When running benchmarks, the framework
sets this field to a value high enough to ensure we have a steady result - no
need to try and set it manually.

Test benchmarking functions follow a convention very similar to test


functions: their name must start with Benchmark .

D.1 StringBuilder (from 4.2.1)


We explained that using concatenation to build long strings is not a good idea
and you should use a Builder. Don’t take our word for it, measure it yourself!

We are building a string that represents the feedback type, which is a slice of
statuses.

Listing D.1 status_internal_test.go: Examples of benchmarks

// Benchmark the string concatenation with only one value in feedback


func BenchmarkStringConcat1(b *testing.B) {
fb := feedback{absentCharacter}
for n := 0; n < b.N; n++ { #A
_ = fb.StringConcat()
}
}

EXERCISE: Instead of having a feedback of one status, write benchmark


functions that will accept longer feedbacks. Since Gordle will mostly be used
with words of 5 characters, that’s probably the length we want to benchmark.
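
A possible sketch for this exercise, assuming (as in chapter 4) that the builder-based method is called String() and that a five-status feedback mirrors a five-letter word:

// Benchmark the naive concatenation with five statuses, Gordle's typical word length.
func BenchmarkStringConcat5(b *testing.B) {
    fb := feedback{absentCharacter, absentCharacter, absentCharacter, absentCharacter, absentCharacter}
    for n := 0; n < b.N; n++ {
        _ = fb.StringConcat()
    }
}

// Benchmark the strings.Builder implementation with the same input.
func BenchmarkStringBuilder5(b *testing.B) {
    fb := feedback{absentCharacter, absentCharacter, absentCharacter, absentCharacter, absentCharacter}
    for n := 0; n < b.N; n++ {
        _ = fb.String()
    }
}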

As mentioned earlier, the benchmark can be run using our friend the go test
tool, with specific options. In order to run benchmarks (just as we had for
tests) for all files in subdirectories, we pass the -bench=. option, and, if we
want to display details of memory operations, we can add -benchmem.
Running benchmarks will, however, also run the tests. If we want to avoid
that, and run only the benchmarks, we can add an extra parameter to the
command line: the -run flag, a regular expression that tells Go which test
functions to run. For now, we'll pass it ^$, which matches no test name at all,
so only the benchmarks will run.

$ go test ./... -run=^$ -bench=. -benchmem

Listing D.2 Result of the benchmarks

$ go test ./... -run=^$ -bench=. -benchmem

goos: darwin
goarch: arm64
pkg: github.com/ablqk/tiny-go-projects/chapter-04/2_feedback/gordle
BenchmarkStringConcat1-10     174882942      6.850 ns/op      0 B/op    0 allocs/op
BenchmarkStringConcat2-10      15633693      74.28 ns/op     24 B/op    2 allocs/op
BenchmarkStringConcat3-10       8609542      137.1 ns/op     56 B/op    4 allocs/op
BenchmarkStringConcat4-10       5873654      201.1 ns/op    104 B/op    6 allocs/op
BenchmarkStringConcat5-10       4455464      275.2 ns/op    160 B/op    8 allocs/op
BenchmarkStringBuilder1-10     71407850      16.69 ns/op      8 B/op    1 allocs/op
BenchmarkStringBuilder2-10     30721999      38.28 ns/op     24 B/op    2 allocs/op
BenchmarkStringBuilder3-10     27036134      45.64 ns/op     24 B/op    2 allocs/op
BenchmarkStringBuilder4-10     17278803      70.44 ns/op     56 B/op    3 allocs/op
BenchmarkStringBuilder5-10     16189770      73.27 ns/op     56 B/op    3 allocs/op
PASS
ok github.com/ablqk/tiny-go-projects/chapter-04/2_feedback/gordle 13.762s

The output of this command can be a bit scary at first. After all, we only
wrote a 5-line test! Let’s have a look. We can see several lines and several
columns. Each line corresponds to a benchmark function that the go tool
found in our code (matching the -bench=. pattern we passed). The columns
represent metrics observed by the test tool while the benchmark was
running.

The first column is the name of the function, with a suffix indicating the
value of GOMAXPROCS - by default, the number of processors on the machine.
The second column indicates the number of iterations that were executed
(the final b.N value, if you remember).
The third column indicates the time each operation took, in nanoseconds.
The fourth column indicates the number of bytes allocated per operation.
The final column indicates the number of memory allocations per
operation.

Some quick maths (second column multiplied by third column) shows that the
benchmark tool gave roughly the same total execution time to each
benchmark. The results themselves are pretty simple to read.
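
For instance, the first line ran 174,882,942 iterations at 6.850 ns each, that is
174,882,942 × 6.850 ns ≈ 1.20 s in total, while the last concatenation line ran
4,455,464 iterations at 275.2 ns each, about 1.23 s: the tool keeps increasing
b.N until every benchmark has run for roughly the same wall-clock time.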

String concatenation is three to four times slower than using the string builder
when we need to append five times. The a + b string concatenation
makes a number of memory allocations proportional to the number of strings
to concatenate (which makes sense, since strings are immutable and each
concatenation copies the intermediate result), and these allocations grow
larger every time. On the other hand, the string builder’s allocations are
scarcer and lighter. This benchmark confirms we definitely should be using
the string builder to generate feedback!
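
To make the difference concrete, here is a minimal sketch of our own - not the
book’s feedback implementation - contrasting the two approaches benchmarked
above; it relies only on the standard strings package.

import "strings"

// concat builds the result with +=: every iteration copies the whole
// intermediate string, so each step allocates a new, larger string.
func concat(words []string) string {
	s := ""
	for _, w := range words {
		s += w
	}
	return s
}

// build appends into strings.Builder's internal buffer, which only
// reallocates when it needs to grow.
func build(words []string) string {
	var sb strings.Builder
	for _, w := range words {
		sb.WriteString(w)
	}
	return sb.String()
}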

D.2 Summary
In order to compare the relative efficiency of two implementations, Go’s
test tool allows for a simple implementation of benchmark functions. A
BenchmarkNameOfFunc(b *testing.B) function is treated as a benchmark
and can be run with go test ./... -run=^$ -bench=NameOfFunc -benchmem.
The benchmarked code must be called inside a for n := 0; n < b.N; n++ loop.
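
As a recap, a minimal benchmark file might look like the sketch below; the
package name and the sumTo function are hypothetical placeholders standing in
for whatever code you want to measure.

package example

import "testing"

// sumTo is a stand-in for the code under measurement.
func sumTo(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i
	}
	return total
}

// BenchmarkSumTo measures sumTo; go test sets b.N for us.
func BenchmarkSumTo(b *testing.B) {
	for n := 0; n < b.N; n++ {
		_ = sumTo(1000)
	}
}

Placed in a file ending in _test.go, it can be run with
go test ./... -run=^$ -bench=SumTo -benchmem.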