From bf8c709fef2ebb557e073857c57e69bf54ae240b Mon Sep 17 00:00:00 2001
From: Aton4ST <64774266+aimtyaem@users.noreply.github.com>
Date: Wed, 19 Feb 2025 19:47:18 +0200
Subject: [PATCH 5/9] AWS groundstation data to report
The cloud supplies the report output that is transmitted to the web application.
---
cloud_carbon_report (1).md | 59 ++++++++++++++++++++++++++++++++++++++
1 file changed, 59 insertions(+)
create mode 100644 cloud_carbon_report (1).md
diff --git a/cloud_carbon_report (1).md b/cloud_carbon_report (1).md
new file mode 100644
index 0000000..8ed19de
--- /dev/null
+++ b/cloud_carbon_report (1).md
@@ -0,0 +1,59 @@
+# Carbon Footprint Report for CloudTech Solutions
+
+## Energy Cost Analysis
+| Energy Source | Cost per Unit |
+|---------------|-------------|
+| electricity | $85.00 |
+| natural_gas | $12.00 |
+| fuel_oil | $0.75 |
+
+## Cloud Infrastructure Profile
+### AWS Region Emissions Factors
+| Region | Emissions Factor (tCO2e/kWh) | Data Source |
+|--------|--------------------------|-------------|
+| us-east-1 | 0.000416 | EPA |
+| us-east-2 | 0.000440 | EPA |
+| us-west-1 | 0.000351 | EPA |
+| us-west-2 | 0.000351 | EPA |
+| us-gov-east-1 | 0.000416 | EPA |
+| us-gov-west-1 | 0.000351 | EPA |
+| af-south-1 | 0.000928 | carbonfootprint.com |
+| ap-east-1 | 0.000810 | carbonfootprint.com |
+| ap-south-1 | 0.000708 | carbonfootprint.com |
+| ap-northeast-3 | 0.000506 | carbonfootprint.com |
+| ap-northeast-2 | 0.000500 | carbonfootprint.com |
+| ap-southeast-1 | 0.000409 | EMA Singapore |
+| ap-southeast-2 | 0.000790 | carbonfootprint.com |
+| ap-northeast-1 | 0.000506 | carbonfootprint.com |
+| ca-central-1 | 0.000130 | carbonfootprint.com |
+| cn-north-1 | 0.000555 | carbonfootprint.com |
+| cn-northwest-1 | 0.000555 | carbonfootprint.com |
+| eu-central-1 | 0.000338 | EEA |
+| eu-west-1 | 0.000316 | EEA |
+| eu-west-2 | 0.000228 | EEA |
+| eu-south-1 | 0.000233 | EEA |
+| eu-west-3 | 0.000052 | EEA |
+| eu-north-1 | 0.000008 | EEA |
+| me-south-1 | 0.000732 | carbonfootprint.com |
+| sa-east-1 | 0.000074 | carbonfootprint.com |
+
+### Server Architecture Efficiency
+| Architecture | Power Consumption Range |
+|--------------|--------------------------|
+| Graviton | 0.47-1.69 W |
+| Ivy Bridge | 3.04-8.25 W |
+| Sandy Bridge | 2.17-8.58 W |
+| Haswell | 1.90-6.01 W |
+| Sky Lake | 0.64-4.19 W |
+| Cascade Lake | 0.64-3.97 W |
+| EPYC 2nd Gen | 0.47-1.69 W |
+| Graviton2 | 0.47-1.69 W |
+| Broadwell | 0.71-3.69 W |
+| EPYC 1st Gen | 0.82-2.55 W |
+| Coffee Lake | 1.14-5.42 W |
+
+## Optimization Recommendations
+1. **Region Optimization**: Consider shifting workloads to lower-emission regions like eu-north-1
+2. **Architecture Upgrade**: Migrate to Graviton-based instances for better energy efficiency
+3. **Renewable Energy**: Explore AWS Renewable Energy Programs for carbon offsets
+4. **Instance Right-Sizing**: Use compute-optimized architectures for energy-intensive workloads
\ No newline at end of file
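The two tables in the report above (regional emissions factors and per-architecture power ranges) are enough to sketch the usual estimate: energy in kWh multiplied by the regional factor. The following is a minimal illustration, not code from this repository; the function name, the per-vCPU interpretation of the wattage ranges, and the use of range midpoints are all assumptions.

```python
# Illustrative sketch only: estimate emissions from the report's tables.
# Assumption: factors are metric tons CO2e per kWh, wattage is per vCPU.

# Region emissions factors (tCO2e per kWh), copied from the AWS table above.
EMISSIONS_FACTOR = {
    "us-east-1": 0.000416,
    "eu-north-1": 0.000008,
}

# Average power draw per vCPU (W): midpoint of the architecture ranges above.
AVG_WATTS = {
    "Graviton2": (0.47 + 1.69) / 2,
    "Sky Lake": (0.64 + 4.19) / 2,
}

def estimate_emissions(vcpu_hours, architecture, region):
    """Rough tCO2e estimate: energy (kWh) times the regional factor."""
    kwh = AVG_WATTS[architecture] * vcpu_hours / 1000.0
    return kwh * EMISSIONS_FACTOR[region]

# Example: 10,000 vCPU-hours on Graviton2 in two regions.
us = estimate_emissions(10_000, "Graviton2", "us-east-1")
eu = estimate_emissions(10_000, "Graviton2", "eu-north-1")
print(f"us-east-1: {us} tCO2e, eu-north-1: {eu} tCO2e")
```

This also makes recommendation 1 concrete: the same workload in eu-north-1 comes out roughly 50x lower than in us-east-1, because only the regional factor changes.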
From 408424a7a2ac00c5fce36bf4a58298c2fd5bfb9e Mon Sep 17 00:00:00 2001
From: Aton4ST <64774266+aimtyaem@users.noreply.github.com>
Date: Sun, 23 Feb 2025 23:46:19 +0200
Subject: [PATCH 6/9] Update README.md
Datasets
---
README.md | 125 ++----------------------------------------------------
1 file changed, 4 insertions(+), 121 deletions(-)
diff --git a/README.md b/README.md
index 0ea1419..e7cd456 100644
--- a/README.md
+++ b/README.md
@@ -4,124 +4,7 @@
The AI Onboarding webApp aims to address the pressing issue of carbon footprint reduction through innovative technology. By leveraging autonomous small satellites (smallsats) for earth observation, this webApp provides users with personalized insights and recommendations to help them reduce their carbon footprint. The goal is to empower individuals and organizations to take actionable steps towards a more sustainable future.
-## Project Structure
-
-This project includes the following components:
-1. **Wireframe Design**
-2. **Prototype**
-3. **Mockup Design**
-
-## Wireframe Design
-
-### Homepage
-```
-+-----------------------------+
-| Homepage |
-|-----------------------------|
-| Headline |
-| Introduction |
-| [CTA Button] |
-+-----------------------------+
-```
-
-### Onboarding Flow
-```
-+-----------------------------+
-| Onboarding Flow |
-|-----------------------------|
-| Step 1: Introduction |
-| Step 2: Features Overview |
-| Step 3: Set Up Profile |
-| [Start Using the App] |
-+-----------------------------+
-```
-
-### Dashboard
-```
-+-----------------------------+
-| Dashboard |
-|-----------------------------|
-| Carbon Footprint Overview |
-| [Charts & Graphs] |
-| Recommendations |
-+-----------------------------+
-```
-
-### User Profile
-```
-+-----------------------------+
-| User Profile |
-|-----------------------------|
-| Profile Info |
-| Settings & Preferences |
-| Social Media Connections |
-+-----------------------------+
-```
-
-### Resource Center
-```
-+-----------------------------+
-| Resource Center |
-|-----------------------------|
-| Articles & Videos |
-| [Search & Filter] |
-+-----------------------------+
-```
-
-### Community
-```
-+-----------------------------+
-| Community |
-|-----------------------------|
-| Forums & Discussions |
-| User Groups & Challenges |
-+-----------------------------+
-```
-
-## Prototype
-
-The interactive prototype can be found [here](https://aton4st.blogspot.com). It includes detailed mockups and user flows to illustrate the user experience and interactions.
-
-## Mockup Design
-
-### Homepage Mockup
-.jpg)
-
-### Onboarding Flow Mockup
-
-
-### Dashboard Mockup
-
-
-### User Profile Mockup
-
-
-### Resource Center Mockup
-
-
-### Community Mockup
-
-
-## Team Members
-
-- **Project Manager**: Oversees project timelines, coordinates tasks, ensures communication, manages resources.
-- **Frontend Developer**: Designs user interface, implements interactive elements, ensures responsive design.
-- **Backend Developer**: Manages server-side logic, databases, API integration, ensures security.
-- **AI Specialist**: Develops machine learning models, trains AI systems, integrates AI with the web app.
-- **Data Scientist**: Collects and processes data, performs data analysis, ensures data accuracy.
-- **UX/UI Designer**: Designs user-friendly interfaces, creates visual designs, conducts user testing.
-- **Sustainability Expert**: Provides sustainability insights, suggests carbon reduction strategies, validates data.
-- **Marketing Specialist**: Promotes the web app, engages with users, gathers feedback, manages social media.
-
-## Getting Started
-
-To get started with the development of this project, follow the steps below:
-1. Clone the repository.
-2. Install necessary dependencies.
-3. Follow the wireframe and mockup designs to develop the frontend and backend components.
-4. Integrate AI models and data processing modules.
-5. Conduct user testing and gather feedback for improvements.
-
-We hope this project inspires and empowers users to contribute to a sustainable future by reducing their carbon footprints with the help of advanced technology.
-
-For more information, please contact aimt16@hotmail.com.
\ No newline at end of file
+## Datasets
+1. CSV files.
+2. Raster Images.
+3. Manual input.
From ceff690d5dac2d2a079b5bef2c629bf736f3e6d4 Mon Sep 17 00:00:00 2001
From: Aton4ST <64774266+aimtyaem@users.noreply.github.com>
Date: Mon, 24 Feb 2025 00:02:39 +0200
Subject: [PATCH 7/9] Update README.md
---
README.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index e7cd456..4d2608e 100644
--- a/README.md
+++ b/README.md
@@ -7,4 +7,6 @@ The AI Onboarding webApp aims to address the pressing issue of carbon footprint
## Datasets
1. CSV files.
2. Raster Images.
-3. Manual input.
+3. Manual input.
+4. Cloud carbon report.
+5. Electricity expenses incurred from grid power.
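For the "CSV files" and "Electricity expenses" dataset types listed above, ingestion might look like the following standard-library sketch. The column names (`timestamp`, `kwh`) are hypothetical and not taken from the repository.

```python
# Sketch: ingest a CSV energy dataset (hypothetical columns, not from the repo).
import csv
import io

SAMPLE = """timestamp,kwh
2025-02-01,120.5
2025-02-02,98.0
"""

def total_kwh(csv_text):
    """Sum the kwh column of a CSV / manual-input dataset."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["kwh"]) for row in reader)

print(total_kwh(SAMPLE))  # 218.5
```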
From 50f4df474d37d1001a558c41bd6f55b64915f50d Mon Sep 17 00:00:00 2001
From: Aton4ST <64774266+aimtyaem@users.noreply.github.com>
Date: Mon, 24 Feb 2025 00:04:14 +0200
Subject: [PATCH 8/9] Delete CFPwireframe.md
Diff. Branch
---
CFPwireframe.md | 63 -------------------------------------------------
1 file changed, 63 deletions(-)
delete mode 100644 CFPwireframe.md
diff --git a/CFPwireframe.md b/CFPwireframe.md
deleted file mode 100644
index 9be8453..0000000
--- a/CFPwireframe.md
+++ /dev/null
@@ -1,63 +0,0 @@
-### Homepage
-```
-+-----------------------------+
-| Homepage |
-|-----------------------------|
-| Headline |
-| Introduction |
-| [CTA Button] |
-+-----------------------------+
-```
-
-### Onboarding Flow
-```
-+-----------------------------+
-| Onboarding Flow |
-|-----------------------------|
-| Step 1: Introduction |
-| Step 2: Features Overview |
-| Step 3: Set Up Profile |
-| [Start Using the App] |
-+-----------------------------+
-```
-
-### Dashboard
-```
-+-----------------------------+
-| Dashboard |
-|-----------------------------|
-| Carbon Footprint Overview |
-| [Charts & Graphs] |
-| Recommendations |
-+-----------------------------+
-```
-
-### User Profile
-```
-+-----------------------------+
-| User Profile |
-|-----------------------------|
-| Profile Info |
-| Settings & Preferences |
-| Social Media Connections |
-+-----------------------------+
-```
-
-### Resource Center
-```
-+-----------------------------+
-| Resource Center |
-|-----------------------------|
-| Articles & Videos |
-| [Search & Filter] |
-+-----------------------------+
-```
-
-### Community
-```
-+-----------------------------+
-| Community |
-|-----------------------------|
-| Forums & Discussions |
-| User Groups & Challenges |
-+-----------------------------+
From 9c0d3a44a86a1f62bda44bd3e6798015f8bd4dc0 Mon Sep 17 00:00:00 2001
From: Ahmed Ibrahim Metawee' Youssef
Date: Fri, 10 Oct 2025 16:40:45 +0300
Subject: [PATCH 9/9] Edge Impulse
---
notebooks/python-api-bindings-GHG.ipynb | 1917 +++++++++++++++++++++++
1 file changed, 1917 insertions(+)
create mode 100644 notebooks/python-api-bindings-GHG.ipynb
diff --git a/notebooks/python-api-bindings-GHG.ipynb b/notebooks/python-api-bindings-GHG.ipynb
new file mode 100644
index 0000000..ed27c8f
--- /dev/null
+++ b/notebooks/python-api-bindings-GHG.ipynb
@@ -0,0 +1,1917 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KPtzOzgJ-Ak2"
+ },
+ "source": [
+ "# Edge Impulse Python API Bindings Example"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sKIz3w8K_dN1"
+ },
+ "source": [
+ "[](https://docs.edgeimpulse.com/docs/tutorials/api-examples/python-api-bindings-example)\n",
+ "[](https://colab.research.google.com/github/edgeimpulse/notebooks/blob/main/notebooks/python-api-bindings-example.ipynb)\n",
+ "[](https://github.com/edgeimpulse/notebooks/blob/main/notebooks/python-api-bindings-example.ipynb)\n",
+ "[](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/notebooks/python-api-bindings-example.ipynb)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gLWb8AoZ_ZyY"
+ },
+ "source": [
+ "The [Python SDK](https://docs.edgeimpulse.com/docs/tools/edge-impulse-python-sdk) is built on top of the [Edge Impulse Python API bindings](https://pypi.org/project/edgeimpulse-api/), which is known as the _edgeimpulse_api_ package. These are Python wrappers for all of the [web API calls](https://docs.edgeimpulse.com/reference/edge-impulse-api/edge-impulse-api) that you can use to interact with Edge Impulse projects programmatically (i.e. without needing to use the Studio graphical interface).\n",
+ "\n",
+ "The API reference guide for using the Python API bindings can be found [here](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api).\n",
+ "\n",
+ "This example will walk you through the process of using the Edge Impulse API bindings to upload data, define an impulse, process features, train a model, and deploy the impulse as a C++ library.\n",
+ "\n",
+ "After creating your project and copying the API key, feel free to leave the project open in a browser window so you can watch the changes as we make API calls. You might need to refresh the browser after each call to see the changes take effect.\n",
+ "\n",
+ "> **Important!** This project will add data and remove any current features and models in a project. We highly recommend creating a new project when running this notebook! Don't say we didn't warn you if you mess up an existing project."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "TFny1qVW99dN",
+ "outputId": "4b0687dc-b39e-4365-e5c7-3644f353e0a4",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Collecting edgeimpulse-api\n",
+ " Downloading edgeimpulse_api-1.75.2-py3-none-any.whl.metadata (1.5 kB)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (2.32.4)\n",
+ "Collecting aenum<4.0.0,>=3.1.11 (from edgeimpulse-api)\n",
+ " Downloading aenum-3.1.16-py3-none-any.whl.metadata (3.8 kB)\n",
+ "Requirement already satisfied: pydantic<3,>=1.10.17 in /usr/local/lib/python3.12/dist-packages (from edgeimpulse-api) (2.11.9)\n",
+ "Requirement already satisfied: python_dateutil<3.0.0,>=2.5.3 in /usr/local/lib/python3.12/dist-packages (from edgeimpulse-api) (2.9.0.post0)\n",
+ "Collecting urllib3<2.0.0,>=1.25.3 (from edgeimpulse-api)\n",
+ " Downloading urllib3-1.26.20-py2.py3-none-any.whl.metadata (50 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.1/50.1 kB\u001b[0m \u001b[31m1.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.12/dist-packages (from requests) (3.4.3)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.12/dist-packages (from requests) (3.10)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.12/dist-packages (from requests) (2025.8.3)\n",
+ "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.33.2 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (2.33.2)\n",
+ "Requirement already satisfied: typing-extensions>=4.12.2 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (4.15.0)\n",
+ "Requirement already satisfied: typing-inspection>=0.4.0 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (0.4.2)\n",
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.12/dist-packages (from python_dateutil<3.0.0,>=2.5.3->edgeimpulse-api) (1.17.0)\n",
+ "Downloading edgeimpulse_api-1.75.2-py3-none-any.whl (1.6 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m21.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading aenum-3.1.16-py3-none-any.whl (165 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m165.6/165.6 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading urllib3-1.26.20-py2.py3-none-any.whl (144 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m144.2/144.2 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hInstalling collected packages: aenum, urllib3, edgeimpulse-api\n",
+ " Attempting uninstall: urllib3\n",
+ " Found existing installation: urllib3 2.5.0\n",
+ " Uninstalling urllib3-2.5.0:\n",
+ " Successfully uninstalled urllib3-2.5.0\n",
+ "Successfully installed aenum-3.1.16 edgeimpulse-api-1.75.2 urllib3-1.26.20\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Install the Edge Impulse API bindings and the requests package\n",
+ "!python -m pip install edgeimpulse-api requests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "kV6EOSOuC9nV"
+ },
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import re\n",
+ "import os\n",
+ "import pprint\n",
+ "import time\n",
+ "\n",
+ "import requests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "IppiSCw4_0eH"
+ },
+ "outputs": [],
+ "source": [
+ "# Import the API objects we plan to use\n",
+ "from edgeimpulse_api import (\n",
+ " ApiClient,\n",
+ " BuildOnDeviceModelRequest,\n",
+ " Configuration,\n",
+ " DeploymentApi,\n",
+ " DSPApi,\n",
+ " DSPConfigRequest,\n",
+ " GenerateFeaturesRequest,\n",
+ " Impulse,\n",
+ " ImpulseApi,\n",
+ " JobsApi,\n",
+ " ProjectsApi,\n",
+ " SetKerasParameterRequest,\n",
+ " StartClassifyJobRequest,\n",
+ " UpdateProjectRequest,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tHum_KkPAfhG"
+ },
+ "source": [
+ "You will need to obtain an API key from an Edge Impulse project. Log into [edgeimpulse.com](https://edgeimpulse.com/) and create a new project. Open the project, navigate to **Dashboard** and click on the **Keys** tab to view your API keys. Double-click on the API key to highlight it, right-click, and select **Copy**.\n",
+ "\n",
+ "\n",
+ "\n",
+ "Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.\n",
+ "\n",
+ "Paste that API key string into the `API_KEY` value in the following cell:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "GpIaKwEJAhpI"
+ },
+ "outputs": [],
+ "source": [
+ "# Settings\n",
+ "API_KEY = \"YOUR_API_KEY_HERE\" # Change this to your Edge Impulse API key; never commit a real key to version control\n",
+ "API_HOST = \"https://studio.edgeimpulse.com/v1\"\n",
+ "DATASET_PATH = \"dataset/gestures\"\n",
+ "OUTPUT_PATH = \".\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "W0qE0bWCrNvP"
+ },
+ "source": [
+ "## Initialize API clients\n",
+ "\n",
+ "The Python API bindings use a series of submodules, each encapsulating one of the API subsections (e.g. Projects, DSP, Learn, etc.). To use these submodules, you need to instantiate a generic API module and use that to instantiate the individual API objects. We'll use these objects to make the API calls later.\n",
+ "\n",
+ "To configure a client, you generally create a configuration object (often from a dict) and then pass that object as an argument to the client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "id": "NB0g7vxErNQF"
+ },
+ "outputs": [],
+ "source": [
+ "# Create top-level API client\n",
+ "config = Configuration(\n",
+ " host=API_HOST,\n",
+ " api_key={\"ApiKeyAuthentication\": API_KEY}\n",
+ ")\n",
+ "client = ApiClient(config)\n",
+ "\n",
+ "# Instantiate sub-clients\n",
+ "deployment_api = DeploymentApi(client)\n",
+ "dsp_api = DSPApi(client)\n",
+ "impulse_api = ImpulseApi(client)\n",
+ "jobs_api = JobsApi(client)\n",
+ "projects_api = ProjectsApi(client)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lPOr6bSjqse4"
+ },
+ "source": [
+ "## Initialize project\n",
+ "\n",
+ "Before uploading data, we should make sure the project is in the regular impulse flow mode, rather than [BYOM mode](https://docs.edgeimpulse.com/docs/edge-impulse-studio/bring-your-own-model-byom). We'll also need the project ID for most of the other API calls in the future.\n",
+ "\n",
+ "Notice that the general pattern for calling API functions is to instantiate a configuration/request object and pass it to the API method that's part of the submodule. You can find which parameters a specific API call expects by looking at [the call's documentation page](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/update_project).\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Projects / List (active) projects](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/list_active_projects)\n",
+ " * [Projects / Update project](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/update_project)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "AFOytMLU_ulh",
+ "outputId": "d41e49a6-97a5-4977-d015-0da34f193a2c",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Project ID: 797297\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Get the project ID, which we'll need for future API calls\n",
+ "response = projects_api.list_projects()\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not obtain the project ID.\")\n",
+ "else:\n",
+ " project_id = response.projects[0].id\n",
+ "\n",
+ "# Print the project ID\n",
+ "print(f\"Project ID: {project_id}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "id": "cWggMwaIqrpS",
+ "outputId": "45af2019-2c8c-464c-ece2-22d0d8379c58",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Project is now in impulse workflow.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create request object with the required parameters\n",
+ "update_project_request = UpdateProjectRequest.from_dict({\n",
+ " \"inPretrainedModelFlow\": False,\n",
+ "})\n",
+ "\n",
+ "# Update the project and check the response for errors\n",
+ "response = projects_api.update_project(\n",
+ " project_id=project_id,\n",
+ " update_project_request=update_project_request,\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not update the project.\")\n",
+ "else:\n",
+ " print(\"Project is now in impulse workflow.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z_GzBa0YBzGo"
+ },
+ "source": [
+ "## Upload dataset\n",
+ "\n",
+ "We'll start by downloading the gesture dataset from https://docs.edgeimpulse.com/docs/pre-built-datasets/continuous-gestures. Note that the [ingestion API](https://docs.edgeimpulse.com/reference/data-ingestion/ingestion-api) is separate from the regular Edge Impulse API: the URL and interface are different. As a result, we must construct the request manually and cannot rely on the Python API bindings.\n",
+ "\n",
+ "The ingestion service uses the string before the first period in the filename to determine the label. For example, \"idle.1.cbor\" will be automatically assigned the label \"idle.\" If you wish to set a label manually, you must specify the `x-label` parameter in the headers. Note that you can only define a label this way for an entire upload at a time. For example, setting `\"x-label\": \"idle\"` in the headers would give all data uploaded with that call the label \"idle.\"\n",
+ "\n",
+ "API calls used with associated documentation:\n",
+ "\n",
+ " * [Ingestion service](https://docs.edgeimpulse.com/reference/data-ingestion/ingestion-api)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "id": "InjgAOyRAn6z"
+ },
+ "outputs": [],
+ "source": [
+ "# Download and unzip gesture dataset\n",
+ "!mkdir -p dataset/\n",
+ "!wget -P dataset -q https://cdn.edgeimpulse.com/datasets/gestures.zip\n",
+ "!unzip -q dataset/gestures.zip -d {DATASET_PATH}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "id": "OGMm_7ELHMFb"
+ },
+ "outputs": [],
+ "source": [
+ "def upload_files(api_key, path, subset):\n",
+ " \"\"\"\n",
+ " Upload files in the given path/subset (where subset is \"training\" or\n",
+ " \"testing\")\n",
+ " \"\"\"\n",
+ "\n",
+ " # Construct request\n",
+ " url = f\"https://ingestion.edgeimpulse.com/api/{subset}/files\"\n",
+ " headers = {\n",
+ " \"x-api-key\": api_key,\n",
+ " \"x-disallow-duplicates\": \"true\",\n",
+ " }\n",
+ "\n",
+ " # Get file handles and create dataset to upload\n",
+ " files = []\n",
+ " file_list = os.listdir(os.path.join(path, subset))\n",
+ " for file_name in file_list:\n",
+ " file_path = os.path.join(path, subset, file_name)\n",
+ " if os.path.isfile(file_path):\n",
+ " file_handle = open(file_path, \"rb\")\n",
+ " files.append((\"data\", (file_name, file_handle, \"multipart/form-data\")))\n",
+ "\n",
+ " # Upload the files\n",
+ " response = requests.post(\n",
+ " url=url,\n",
+ " headers=headers,\n",
+ " files=files,\n",
+ " )\n",
+ "\n",
+ " # Print any errors for files that did not upload\n",
+ " upload_responses = response.json()[\"files\"]\n",
+ " for resp in upload_responses:\n",
+ " if not resp[\"success\"]:\n",
+ " print(resp)\n",
+ "\n",
+ " # Close all the handles\n",
+ " for handle in files:\n",
+ " handle[1][1].close()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "id": "8witLfBgH-Ay",
+ "outputId": "af447da7-702b-4bac-fc75-f631cb7b925a",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Uploading training dataset...\n",
+ "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781475)'}\n",
+ "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781500)'}\n",
+ "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781508)'}\n",
+ "Uploading testing dataset...\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Upload the dataset to the project\n",
+ "print(\"Uploading training dataset...\")\n",
+ "upload_files(API_KEY, DATASET_PATH, \"training\")\n",
+ "print(\"Uploading testing dataset...\")\n",
+ "upload_files(API_KEY, DATASET_PATH, \"testing\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8isx_nKdOqSs"
+ },
+ "source": [
+ "## Create an impulse\n",
+ "\n",
+ "Now that we've uploaded our data, it's time to create an impulse. An \"impulse\" is a combination of processing (feature extraction) and learning blocks. The general flow of data is:\n",
+ "\n",
+ "> data -> input block -> processing block(s) -> learning block(s)\n",
+ "\n",
+ "Only the processing and learning blocks make up the \"impulse.\" However, we must still specify the input block, as it allows us to perform preprocessing, like windowing (for time series data) or cropping/scaling (for image data).\n",
+ "\n",
+ "Your project will have one input block, but it can contain multiple processing and learning blocks. Specific outputs from the processing block can be specified as inputs to the learning blocks. However, for simplicity, we'll just show one processing block and one learning block.\n",
+ "\n",
+ "> **Note:** Historically, processing blocks were called \"DSP blocks,\" as they focused on time series data. In Studio, the name has been changed to \"Processing block,\" as the blocks work with different types of data, but you'll see it referred to as \"DSP block\" in the API.\n",
+ "\n",
+ "It's important that you define the input block with the same parameters as your captured data, especially the sampling rate! Additionally, the processing block's axis names **must** match the axis names in the dataset.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Impulse / Get impulse blocks](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/get_impulse_blocks)\n",
+ " * [Impulse / Delete impulse](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/delete_impulse)\n",
+ " * [Impulse / Create impulse](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/create_impulse)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "id": "Djn91Lq-ZpR8"
+ },
+ "outputs": [],
+ "source": [
+ "# To start, let's fetch a list of all the available blocks\n",
+ "response = impulse_api.get_impulse_blocks(\n",
+ " project_id=project_id\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not get impulse blocks.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "id": "nTOB4175asrn",
+ "outputId": "df3ba75f-51ce-48ed-bb8e-ab8b86fa025b",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Input blocks\n",
+ "[\n",
+ " {\n",
+ " \"type\": \"time-series\",\n",
+ " \"title\": \"Time series data\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Operates on time series sensor data like vibration or audio data. Lets you slice up data into windows.\",\n",
+ " \"name\": \"Time series\",\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"image\",\n",
+ " \"title\": \"Images\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Processes discrete images for object detection or classification.\",\n",
+ " \"name\": \"Image\",\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"features\",\n",
+ " \"title\": \"Pre-processed features\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Processes pre-processed features, or non time-series data.\",\n",
+ " \"name\": \"Features\",\n",
+ " \"blockType\": \"official\"\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Print the available input blocks\n",
+ "print(\"Input blocks\")\n",
+ "print(json.dumps(json.loads(response.to_json())[\"inputBlocks\"], indent=2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "id": "7UIhLBJLa2U-",
+ "outputId": "b9bdc649-d720-4e56-ac07-3627b70abd35",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Processing blocks\n",
+ "[\n",
+ " {\n",
+ " \"type\": \"flatten\",\n",
+ " \"title\": \"Flatten\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Flatten an axis into a single value, useful for slow-moving averages like temperature data, in combination with other blocks.\",\n",
+ " \"name\": \"Flatten\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"image\",\n",
+ " \"title\": \"Image\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Preprocess and normalize image data, and optionally reduce the color depth.\",\n",
+ " \"name\": \"Image\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"Image\",\n",
+ " \"required\": true\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"mfcc\",\n",
+ " \"title\": \"Audio (MFCC)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Extracts features from audio signals using Mel Frequency Cepstral Coefficients, great for human voice.\",\n",
+ " \"name\": \"MFCC\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 4,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"Signal\",\n",
+ " \"description\": \"The input signal to create an MFCC spectrogram from\",\n",
+ " \"required\": true\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"mfe\",\n",
+ " \"title\": \"Audio (MFE)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Extracts a spectrogram from audio signals using Mel-filterbank energy features, great for both voice and non-voice audio.\",\n",
+ " \"name\": \"MFE\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 4,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"Signal\",\n",
+ " \"description\": \"The input signal to create an MFE spectrogram from\",\n",
+ " \"required\": true\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"spectral-analysis\",\n",
+ " \"title\": \"Spectral Analysis\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Great for analyzing repetitive motion, such as data from accelerometers. Extracts the frequency and power characteristics of a signal over time.\",\n",
+ " \"name\": \"Spectral features\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 4,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"spectrogram\",\n",
+ " \"title\": \"Spectrogram\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Extracts a spectrogram from audio or sensor data, great for non-voice audio or data with continuous frequencies.\",\n",
+ " \"name\": \"Spectrogram\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 4,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"Signal\",\n",
+ " \"description\": \"The input signal to create a spectrogram from\",\n",
+ " \"required\": true\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"syntiant\",\n",
+ " \"title\": \"Audio (Syntiant)\",\n",
+ " \"author\": \"Syntiant\",\n",
+ " \"description\": \"Syntiant only. Compute log Mel-filterbank energy features from an audio signal.\",\n",
+ " \"name\": \"Syntiant\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": true,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"Signal\",\n",
+ " \"description\": \"The input signal to create a spectrogram from\",\n",
+ " \"required\": true\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"syntiant-imu\",\n",
+ " \"title\": \"IMU (Syntiant)\",\n",
+ " \"author\": \"Syntiant\",\n",
+ " \"description\": \"Syntiant only. Great for analyzing repetitive motion, such as data from accelerometers. Extracts the frequency and power characteristics of a signal over time.\",\n",
+ " \"name\": \"Syntiant IMU\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"hr\",\n",
+ " \"title\": \"HR and HRV features\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Process PPG or ECG data into heart rate and heart rate variability features.\",\n",
+ " \"name\": \"HR/HRV\",\n",
+ " \"recommended\": true,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\",\n",
+ " \"namedAxes\": [\n",
+ " {\n",
+ " \"name\": \"PPG/ECG\",\n",
+ " \"description\": \"PPG signal to convert to heart rate\",\n",
+ " \"required\": true\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Accelerometer X\",\n",
+ " \"description\": \"One channel of accelerometer data\",\n",
+ " \"required\": false\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Accelerometer Y\",\n",
+ " \"description\": \"One channel of accelerometer data\",\n",
+ " \"required\": false\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Accelerometer Z\",\n",
+ " \"description\": \"One channel of accelerometer data\",\n",
+ " \"required\": false\n",
+ " }\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"raw\",\n",
+ " \"title\": \"Raw Data\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Use data without pre-processing. Useful if you want to use deep learning to learn features.\",\n",
+ " \"name\": \"Raw data\",\n",
+ " \"recommended\": false,\n",
+ " \"experimental\": false,\n",
+ " \"latestImplementationVersion\": 1,\n",
+ " \"blockType\": \"official\"\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Print the available processing blocks\n",
+ "print(\"Processing blocks\")\n",
+ "print(json.dumps(json.loads(response.to_json())[\"dspBlocks\"], indent=2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "id": "MYrjrUB7a7Et",
+ "outputId": "648a17ed-608b-41ea-a550-26f9ddbadf60",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Learning blocks\n",
+ "[\n",
+ " {\n",
+ " \"type\": \"keras\",\n",
+ " \"title\": \"Classification\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Learns patterns from data, and can apply these to new data. Great for categorizing movement or recognizing audio.\",\n",
+ " \"name\": \"Classifier\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-transfer-image\",\n",
+ " \"title\": \"Transfer Learning (Images)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Fine tune a pre-trained image classification model on your data. Good performance even with relatively small image datasets.\",\n",
+ " \"name\": \"Transfer learning\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-object-detection\",\n",
+ " \"title\": \"Object Detection (Images)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Fine tune a pre-trained object detection model on your data. Good performance even with relatively small image datasets.\",\n",
+ " \"name\": \"Object detection\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-regression\",\n",
+ " \"title\": \"Regression\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Learns patterns from data, and can apply these to new data. Great for predicting numeric continuous values.\",\n",
+ " \"name\": \"Regression\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-transfer-kws\",\n",
+ " \"title\": \"Transfer Learning (Keyword Spotting)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Fine tune a pre-trained keyword spotting model on your data. Good performance even with relatively small keyword datasets.\",\n",
+ " \"name\": \"Transfer learning (Keyword Spotting)\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"anomaly-gmm\",\n",
+ " \"title\": \"Anomaly Detection (GMM)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Find outliers in new data. A Gaussian mixture model (GMM) models the shape of data using a probability distribution. New data that is unlikely according to this model can be considered anomalous.\",\n",
+ " \"name\": \"Anomaly detection (GMM)\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"anomaly\",\n",
+ " \"title\": \"Anomaly Detection (K-means)\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Find outliers in new data. Good for recognizing unknown states, and to complement classifiers. Works best with low dimensionality features like the output of the spectral features block.\",\n",
+ " \"name\": \"Anomaly detection\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-visual-anomaly\",\n",
+ " \"title\": \"Visual Anomaly Detection - FOMO-AD\",\n",
+ " \"author\": \"Edge Impulse\",\n",
+ " \"description\": \"Detect visual anomalies. Extracts visual features using a pre-trained backbone, and applies a scoring function to evaluate how anomalous a sample is by comparing the extracted features to the learned model. Does not require anomalous data.\",\n",
+ " \"name\": \"Visual Anomaly Detection\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\"\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-akida\",\n",
+ " \"title\": \"Classification - BrainChip Akida\\u2122\",\n",
+ " \"author\": \"BrainChip\",\n",
+ " \"description\": \"Learns patterns from data, and can apply these to new data. Great for categorizing movement or recognizing audio. Only works with BrainChip Akida devices\",\n",
+ " \"name\": \"Classifier\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\",\n",
+ " \"supportedTargets\": [\n",
+ " \"brainchip-akd1000\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-akida-transfer-image\",\n",
+ " \"title\": \"Transfer Learning (Images) - BrainChip Akida\\u2122\",\n",
+ " \"author\": \"BrainChip\",\n",
+ " \"description\": \"Fine tune a pre-trained image classification model on your data. Good performance even with relatively small image datasets. Only works with BrainChip Akida devices\",\n",
+ " \"name\": \"Transfer learning\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\",\n",
+ " \"supportedTargets\": [\n",
+ " \"brainchip-akd1000\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"type\": \"keras-akida-object-detection\",\n",
+ " \"title\": \"Object Detection (Images) - BrainChip Akida\\u2122\",\n",
+ " \"author\": \"BrainChip\",\n",
+ " \"description\": \"Fine tune a pre-trained object detection model on your data. Good performance even with relatively small image datasets. Only works with BrainChip Akida devices\",\n",
+ " \"name\": \"Object detection\",\n",
+ " \"recommended\": false,\n",
+ " \"blockType\": \"official\",\n",
+ " \"supportedTargets\": [\n",
+ " \"brainchip-akd1000\"\n",
+ " ]\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Print the available learning blocks\n",
+ "print(\"Learning blocks\")\n",
+ "print(json.dumps(json.loads(response.to_json())[\"learnBlocks\"], indent=2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "id": "5j-g9mkrLB9k"
+ },
+ "outputs": [],
+ "source": [
+ "# Give our impulse blocks IDs, which we'll use later\n",
+ "processing_id = 2\n",
+ "learning_id = 3\n",
+ "\n",
+ "# Impulses (and their blocks) are defined as a collection of key/value pairs\n",
+ "impulse = Impulse.from_dict({\n",
+ " \"inputBlocks\": [\n",
+ " {\n",
+ " \"id\": 1,\n",
+ " \"type\": \"time-series\",\n",
+ " \"name\": \"Time series\",\n",
+ " \"title\": \"Time series data\",\n",
+ " \"windowSizeMs\": 1000,\n",
+ " \"windowIncreaseMs\": 500,\n",
+ " \"frequencyHz\": 62.5,\n",
+ " \"padZeros\": True,\n",
+ " }\n",
+ " ],\n",
+ " \"dspBlocks\": [\n",
+ " {\n",
+ " \"id\": processing_id,\n",
+ " \"type\": \"spectral-analysis\",\n",
+ " \"name\": \"Spectral Analysis\",\n",
+ " \"implementationVersion\": 4,\n",
+ " \"title\": \"processing\",\n",
+ " \"axes\": [\"accX\", \"accY\", \"accZ\"],\n",
+ " \"input\": 1,\n",
+ " }\n",
+ " ],\n",
+ " \"learnBlocks\": [\n",
+ " {\n",
+ " \"id\": learning_id,\n",
+ " \"type\": \"keras\",\n",
+ " \"name\": \"Classifier\",\n",
+ " \"title\": \"Classification\",\n",
+ " \"dsp\": [processing_id],\n",
+ " }\n",
+ " ],\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "id": "NxgDfPVFRxAO"
+ },
+ "outputs": [],
+ "source": [
+ "# Delete the current impulse in the project\n",
+ "response = impulse_api.delete_impulse(\n",
+ " project_id=project_id\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not delete current impulse.\")\n",
+ "\n",
+ "# Add blocks to impulse\n",
+ "response = impulse_api.create_impulse(\n",
+ " project_id=project_id,\n",
+ " impulse=impulse\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not create impulse.\")"
+ ]
+ },
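+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `success` check above is repeated after nearly every API call in this notebook. As a sketch (this helper is not part of the Edge Impulse SDK), you could factor it out:\n",
+ "\n",
+ "```python\n",
+ "def check_response(response, error_msg):\n",
+ "    \"\"\"Raise if an Edge Impulse API response does not report success.\"\"\"\n",
+ "    if not getattr(response, \"success\", False):\n",
+ "        raise RuntimeError(error_msg)\n",
+ "```\n",
+ "\n",
+ "Each multi-line `if not hasattr(...)` block would then collapse to a single `check_response(response, \"...\")` call."
+ ]
+ },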
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1vuJumLp58U1"
+ },
+ "source": [
+ "## Configure processing block\n",
+ "\n",
+ "Before generating features, we need to configure the processing block. We'll start by printing all the available parameters for the `spectral-analysis` block, which we set when we created the impulse above.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [DSP / Get config](https://docs.edgeimpulse.com/reference/edge-impulse-api/dsp/get_config)\n",
+ " * [DSP / Set config](https://docs.edgeimpulse.com/reference/edge-impulse-api/dsp/set_config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {
+ "id": "Ht2LegOF1rYb",
+ "outputId": "6a3f7df5-05cf-4d72-cdca-c7676a426c99",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "[\n",
+ " {\n",
+ " \"parameter\": \"scale-axes\",\n",
+ " \"description\": \"Multiplies axes by this number\",\n",
+ " \"currentValue\": \"1\",\n",
+ " \"defaultValue\": \"1\",\n",
+ " \"type\": \"float\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"input-decimation-ratio\",\n",
+ " \"description\": \"Decimate signal to improve effeciency\",\n",
+ " \"currentValue\": \"1\",\n",
+ " \"defaultValue\": \"1\",\n",
+ " \"type\": \"select\",\n",
+ " \"options\": [\n",
+ " \"1\",\n",
+ " \"3\",\n",
+ " \"10\",\n",
+ " \"30\",\n",
+ " \"100\",\n",
+ " \"1000\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"filter-type\",\n",
+ " \"description\": \"Type of filter to apply to the raw data. (Example: low is low pass)\",\n",
+ " \"currentValue\": \"none\",\n",
+ " \"defaultValue\": \"none\",\n",
+ " \"type\": \"select\",\n",
+ " \"options\": [\n",
+ " \"low\",\n",
+ " \"high\",\n",
+ " \"none\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"filter-cutoff\",\n",
+ " \"description\": \"Cut-off frequency in hertz\",\n",
+ " \"currentValue\": \"3\",\n",
+ " \"defaultValue\": \"3\",\n",
+ " \"type\": \"float\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"filter-order\",\n",
+ " \"description\": \"Number of poles to use in filter. More improves filtering at expense of latency. Use zero to only mask FFT bins and skip filtering.\",\n",
+ " \"currentValue\": \"6\",\n",
+ " \"defaultValue\": \"6\",\n",
+ " \"type\": \"int\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"analysis-type\",\n",
+ " \"description\": \"Type of spectral analysis to apply\",\n",
+ " \"currentValue\": \"FFT\",\n",
+ " \"defaultValue\": \"FFT\",\n",
+ " \"type\": \"select\",\n",
+ " \"options\": [\n",
+ " \"FFT\",\n",
+ " \"Wavelet\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"fft-length\",\n",
+ " \"description\": \"Number of FFT points\",\n",
+ " \"currentValue\": \"16\",\n",
+ " \"defaultValue\": \"16\",\n",
+ " \"type\": \"int\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"spectral-peaks-count\",\n",
+ " \"description\": \"Number of spectral power peaks\",\n",
+ " \"currentValue\": \"3\",\n",
+ " \"defaultValue\": \"3\",\n",
+ " \"type\": \"int\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"spectral-peaks-threshold\",\n",
+ " \"description\": \"Minimum (normalized) threshold for a peak, this eliminates peaks that are very close\",\n",
+ " \"currentValue\": \"0.1\",\n",
+ " \"defaultValue\": \"0.1\",\n",
+ " \"type\": \"float\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"spectral-power-edges\",\n",
+ " \"description\": \"Splits the spectral density in various buckets\",\n",
+ " \"currentValue\": \"0.1, 0.5, 1.0, 2.0, 5.0\",\n",
+ " \"defaultValue\": \"0.1, 0.5, 1.0, 2.0, 5.0\",\n",
+ " \"type\": \"string\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"do-log\",\n",
+ " \"description\": \"Apply log base 10 to spectrum\",\n",
+ " \"currentValue\": \"true\",\n",
+ " \"defaultValue\": \"true\",\n",
+ " \"type\": \"boolean\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"do-fft-overlap\",\n",
+ " \"description\": \"When more than one FFT is needed to cover a window, then setting true will reuse the last half of the previous FFT frame. Similar to frame stride.\",\n",
+ " \"currentValue\": \"true\",\n",
+ " \"defaultValue\": \"true\",\n",
+ " \"type\": \"boolean\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"wavelet-level\",\n",
+ " \"description\": \"Decomposition level (must be >= 0)\",\n",
+ " \"currentValue\": \"1\",\n",
+ " \"defaultValue\": \"1\",\n",
+ " \"type\": \"int\"\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"wavelet\",\n",
+ " \"description\": \"Wavelet to use\",\n",
+ " \"currentValue\": \"db4\",\n",
+ " \"defaultValue\": \"db4\",\n",
+ " \"type\": \"select\",\n",
+ " \"options\": [\n",
+ " \"bior1.3\",\n",
+ " \"bior1.5\",\n",
+ " \"bior2.2\",\n",
+ " \"bior2.4\",\n",
+ " \"bior2.6\",\n",
+ " \"bior2.8\",\n",
+ " \"bior3.1\",\n",
+ " \"bior3.3\",\n",
+ " \"bior3.5\",\n",
+ " \"bior3.7\",\n",
+ " \"bior3.9\",\n",
+ " \"bior4.4\",\n",
+ " \"bior5.5\",\n",
+ " \"bior6.8\",\n",
+ " \"coif1\",\n",
+ " \"coif2\",\n",
+ " \"coif3\",\n",
+ " \"db2\",\n",
+ " \"db3\",\n",
+ " \"db4\",\n",
+ " \"db5\",\n",
+ " \"db6\",\n",
+ " \"db7\",\n",
+ " \"db8\",\n",
+ " \"db9\",\n",
+ " \"db10\",\n",
+ " \"haar\",\n",
+ " \"rbio1.3\",\n",
+ " \"rbio1.5\",\n",
+ " \"rbio2.2\",\n",
+ " \"rbio2.4\",\n",
+ " \"rbio2.6\",\n",
+ " \"rbio2.8\",\n",
+ " \"rbio3.1\",\n",
+ " \"rbio3.3\",\n",
+ " \"rbio3.5\",\n",
+ " \"rbio3.7\",\n",
+ " \"rbio3.9\",\n",
+ " \"rbio4.4\",\n",
+ " \"rbio5.5\",\n",
+ " \"rbio6.8\",\n",
+ " \"sym2\",\n",
+ " \"sym3\",\n",
+ " \"sym4\",\n",
+ " \"sym5\",\n",
+ " \"sym6\",\n",
+ " \"sym7\",\n",
+ " \"sym8\",\n",
+ " \"sym9\",\n",
+ " \"sym10\"\n",
+ " ]\n",
+ " },\n",
+ " {\n",
+ " \"parameter\": \"extra-low-freq\",\n",
+ " \"description\": \"Decimate signal to improve low frequency resolution\",\n",
+ " \"currentValue\": \"false\",\n",
+ " \"defaultValue\": \"false\",\n",
+ " \"type\": \"boolean\"\n",
+ " }\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Get processing block config\n",
+ "response = dsp_api.get_dsp_config(\n",
+ " project_id=project_id,\n",
+ " dsp_id=processing_id\n",
+ ")\n",
+ "\n",
+ "# Construct user-readable parameters\n",
+ "settings = []\n",
+ "for group in response.config:\n",
+ " for item in group.items:\n",
+ " element = {}\n",
+ " element[\"parameter\"] = item.param\n",
+ " element[\"description\"] = item.help\n",
+ " element[\"currentValue\"] = item.value\n",
+ " element[\"defaultValue\"] = item.default_value\n",
+ " element[\"type\"] = item.type\n",
+ " if hasattr(item, \"select_options\") and \\\n",
+ " getattr(item, \"select_options\") is not None:\n",
+ " element[\"options\"] = [i.value for i in item.select_options]\n",
+ " settings.append(element)\n",
+ "\n",
+ "# Print the settings\n",
+ "print(json.dumps(settings, indent=2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "id": "TPEuV3ku3vuN",
+ "outputId": "1e2e6561-b552-4f3c-ff97-b5ea3afcada0",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Processing block has been configured.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define processing block configuration\n",
+ "config_request = DSPConfigRequest.from_dict({\n",
+ " \"config\": {\n",
+ " \"scale-axes\": 1.0,\n",
+ " \"input-decimation-ratio\": 1,\n",
+ " \"filter-type\": \"none\",\n",
+ " \"analysis-type\": \"FFT\",\n",
+ " \"fft-length\": 16,\n",
+ " \"do-log\": True,\n",
+ " \"do-fft-overlap\": True,\n",
+ " \"extra-low-freq\": False,\n",
+ " }\n",
+ "})\n",
+ "\n",
+ "# Set processing block configuration\n",
+ "response = dsp_api.set_dsp_config(\n",
+ " project_id=project_id,\n",
+ " dsp_id=processing_id,\n",
+ " dsp_config_request=config_request\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ "    raise RuntimeError(\"Could not set processing block configuration.\")\n",
+ "else:\n",
+ " print(\"Processing block has been configured.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dJxMwnVhRrYG"
+ },
+ "source": [
+ "## Run processing block to generate features\n",
+ "\n",
+ "Now that the impulse is defined, we can use our processing block(s) to extract features from our data. We'll skip feature importance and the feature explorer to speed up this step.\n",
+ "\n",
+ "Generating features kicks off a job in Studio. A \"job\" involves instantiating a Docker container and running a custom script in that container to perform some action. In our case, that means reading in data, extracting features from that data, and saving those features as NumPy (.npy) files in our project.\n",
+ "\n",
+ "Because jobs can take a while, the API call will return immediately. If the call was successful, the response will contain a job number. We can then monitor that job and wait for it to finish before continuing.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Jobs / Generate features](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/generate_features)\n",
+ " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "id": "gdLwkXUS_QMR"
+ },
+ "outputs": [],
+ "source": [
+ "def poll_job(jobs_api, project_id, job_id):\n",
+ " \"\"\"Wait for job to complete\"\"\"\n",
+ "\n",
+ " # Wait for job to complete\n",
+ " while True:\n",
+ "\n",
+ " # Check on job status\n",
+ " response = jobs_api.get_job_status(\n",
+ " project_id=project_id,\n",
+ " job_id=job_id\n",
+ " )\n",
+ " if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " print(\"ERROR: Could not get job status\")\n",
+ " return False\n",
+ " else:\n",
+ " if hasattr(response, \"job\") and hasattr(response.job, \"finished\"):\n",
+ " if response.job.finished:\n",
+ " print(f\"Job completed at {response.job.finished}\")\n",
+ " return response.job.finished_successful\n",
+ " else:\n",
+ " print(\"ERROR: Response did not contain a 'job' field.\")\n",
+ " return False\n",
+ "\n",
+ " # Print that we're still running and wait\n",
+ " print(f\"Waiting for job {job_id} to finish...\")\n",
+ " time.sleep(2.0)"
+ ]
+ },
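+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that `poll_job()` will loop forever if a job never finishes. A variant with a timeout might look like the following sketch, which assumes the same response fields (`success`, `job.finished`, `job.finished_successful`) used above:\n",
+ "\n",
+ "```python\n",
+ "import time\n",
+ "\n",
+ "def poll_job_with_timeout(jobs_api, project_id, job_id,\n",
+ "                          timeout_s=600.0, interval_s=2.0):\n",
+ "    \"\"\"Like poll_job(), but give up after timeout_s seconds.\"\"\"\n",
+ "    deadline = time.monotonic() + timeout_s\n",
+ "    while time.monotonic() < deadline:\n",
+ "        response = jobs_api.get_job_status(project_id=project_id, job_id=job_id)\n",
+ "        if not getattr(response, \"success\", False):\n",
+ "            return False  # Could not query job status\n",
+ "        job = getattr(response, \"job\", None)\n",
+ "        if job is not None and job.finished:\n",
+ "            return bool(job.finished_successful)\n",
+ "        time.sleep(interval_s)\n",
+ "    return False  # Timed out\n",
+ "```"
+ ]
+ },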
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {
+ "id": "dxddUwKWWcj7",
+ "outputId": "e7058157-25e3-4821-ad0b-38cca5016235",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Waiting for job 38711472 to finish...\n",
+ "Job completed at 2025-10-10T13:31:27.149Z\n",
+ "Features have been generated.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define generate features request\n",
+ "generate_features_request = GenerateFeaturesRequest.from_dict({\n",
+ " \"dspId\": processing_id,\n",
+ " \"calculate_feature_importance\": False,\n",
+ " \"skip_feature_explorer\": True,\n",
+ "})\n",
+ "\n",
+ "# Generate features\n",
+ "response = jobs_api.generate_features_job(\n",
+ " project_id=project_id,\n",
+ " generate_features_request=generate_features_request,\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not start feature generation job.\")\n",
+ "\n",
+ "# Extract job ID\n",
+ "job_id = response.id\n",
+ "\n",
+ "# Wait for job to complete\n",
+ "success = poll_job(jobs_api, project_id, job_id)\n",
+ "if success:\n",
+ " print(\"Features have been generated.\")\n",
+ "else:\n",
+ " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {
+ "id": "0wk6uWvwAVia",
+ "outputId": "fdf764e6-0b7e-4c7e-946c-1c25ce576c7a",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Go here to download the generated features in NumPy format:\n",
+ "https://studio.edgeimpulse.com/v1/api/797297/dsp-data/2/x/training\n",
+ "https://studio.edgeimpulse.com/v1/api/797297/dsp-data/2/y/training\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Optional: download NumPy features (x: training data, y: training labels)\n",
+ "print(\"Go here to download the generated features in NumPy format:\")\n",
+ "print(f\"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/x/training\")\n",
+ "print(f\"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/y/training\")"
+ ]
+ },
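+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The download links above follow a fixed pattern. If you want to build them programmatically, a small helper (hypothetical, not part of the SDK) could look like this:\n",
+ "\n",
+ "```python\n",
+ "def dsp_data_url(project_id, dsp_id, axis, category=\"training\"):\n",
+ "    \"\"\"Build the Studio URL for downloading features ('x') or labels ('y').\"\"\"\n",
+ "    if axis not in (\"x\", \"y\"):\n",
+ "        raise ValueError(\"axis must be 'x' or 'y'\")\n",
+ "    return (f\"https://studio.edgeimpulse.com/v1/api/{project_id}\"\n",
+ "            f\"/dsp-data/{dsp_id}/{axis}/{category}\")\n",
+ "```\n",
+ "\n",
+ "Keep in mind that actually downloading the files requires authentication (e.g. your project API key), just like the other API calls in this notebook."
+ ]
+ },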
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8q3_LLwwEoEA"
+ },
+ "source": [
+ "## Use learning block to train model\n",
+ "\n",
+ "Now that we have generated features, we can run the learning block to train a model on those features. Note that Edge Impulse offers a number of learning blocks, each with its own configuration options. We'll be using the \"keras\" block, which uses TensorFlow and Keras under the hood.\n",
+ "\n",
+ "You can use the [get_keras](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api/api/learn_api#get_keras) and [set_keras](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api/api/learn_api#set_keras) functions to configure the more granular settings. We'll keep the block's defaults and set only the number of training cycles (epochs), the learning rate, and the train/test split.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Jobs / Train model (Keras)](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/train_model_-keras)\n",
+ " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)\n",
+ " * [Jobs / Get logs](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_logs)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {
+ "id": "_PtkJ0ikBf9l",
+ "outputId": "49e7342a-9c2f-47d0-cf6a-36eb8bc2a427",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Waiting for job 38711522 to finish...\n",
+ "Job completed at 2025-10-10T13:32:55.022Z\n",
+ "Model has been trained.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define training request\n",
+ "keras_parameter_request = SetKerasParameterRequest.from_dict({\n",
+ " \"mode\": \"visual\",\n",
+ " \"training_cycles\": 10,\n",
+ " \"learning_rate\": 0.001,\n",
+ " \"train_test_split\": 0.8,\n",
+ " \"skip_embeddings_and_memory\": True,\n",
+ "})\n",
+ "\n",
+ "# Train model\n",
+ "response = jobs_api.train_keras_job(\n",
+ " project_id=project_id,\n",
+ " learn_id=learning_id,\n",
+ " set_keras_parameter_request=keras_parameter_request,\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not start training job.\")\n",
+ "\n",
+ "# Extract job ID\n",
+ "job_id = response.id\n",
+ "\n",
+ "# Wait for job to complete\n",
+ "success = poll_job(jobs_api, project_id, job_id)\n",
+ "if success:\n",
+ " print(\"Model has been trained.\")\n",
+ "else:\n",
+ " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8LAglLwn6Jma"
+ },
+ "source": [
+ "Now that the model has been trained, we can go back to the job logs to find the accuracy metrics for both the float32 and int8 quantization levels. We'll need to parse the logs for these values. Because the logs list the most recent events first, we'll work backwards through them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {
+ "id": "y3fb0yfm6ceG"
+ },
+ "outputs": [],
+ "source": [
+ "def get_metrics(response, quantization=None):\n",
+ " \"\"\"\n",
+ " Parse the response to find the accuracy/training metrics for a given\n",
+ " quantization level. If quantization is None, return the first set of metrics\n",
+ " found.\n",
+ " \"\"\"\n",
+ " metrics = None\n",
+ " delimiter_str = \"calculate_classification_metrics\"\n",
+ "\n",
+ " # Skip finding quantization metrics if not given\n",
+ " if quantization:\n",
+ " quantization_found = False\n",
+ " else:\n",
+ " quantization_found = True\n",
+ "\n",
+ " # Parse logs\n",
+ " for log in reversed(response.to_dict()[\"stdout\"]):\n",
+ " data_field = log[\"data\"]\n",
+ " if quantization_found:\n",
+ " substrings = data_field.split(\"\\n\")\n",
+ " for substring in substrings:\n",
+ " substring = substring.strip()\n",
+ " if substring.startswith(delimiter_str):\n",
+ " metrics = json.loads(substring[len(delimiter_str):])\n",
+ " break\n",
+ " else:\n",
+ " if data_field.startswith(f\"Calculating {quantization} accuracy\"):\n",
+ " quantization_found = True\n",
+ "\n",
+ " return metrics"
+ ]
+ },
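+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the parsing logic concrete: `get_metrics()` scans for log lines that start with `calculate_classification_metrics` followed by a JSON payload. A minimal illustration of that step (the payload keys here are hypothetical):\n",
+ "\n",
+ "```python\n",
+ "import json\n",
+ "\n",
+ "delimiter_str = \"calculate_classification_metrics\"\n",
+ "\n",
+ "# Hypothetical log line of the kind get_metrics() looks for\n",
+ "line = 'calculate_classification_metrics {\"accuracy\": 0.91, \"loss\": 0.24}'\n",
+ "\n",
+ "if line.startswith(delimiter_str):\n",
+ "    metrics = json.loads(line[len(delimiter_str):])\n",
+ "    print(metrics[\"accuracy\"])\n",
+ "```"
+ ]
+ },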
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "id": "AB47VpTXxwnL",
+ "outputId": "6189a558-ce7c-4e60-9972-c90a972b1ee3",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "ERROR: Could not get training metrics.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Get the job logs for the previous job\n",
+ "response = jobs_api.get_jobs_logs(\n",
+ " project_id=project_id,\n",
+ " job_id=job_id\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not get job log.\")\n",
+ "\n",
+ "# Print training metrics (quantization is \"float32\" or \"int8\")\n",
+ "quantization = \"float32\"\n",
+ "metrics = get_metrics(response, quantization)\n",
+ "if metrics:\n",
+ " print(f\"Training metrics for {quantization} quantization:\")\n",
+ " pprint.pprint(metrics)\n",
+ "else:\n",
+ " print(\"ERROR: Could not get training metrics.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dIuT-Mhp-71J"
+ },
+ "source": [
+ "## Test the impulse\n",
+ "\n",
+ "As with any good machine learning project, we should test the accuracy of the model using our holdout (\"testing\") set. We'll call the `classify` API function to make that happen and then parse the job logs to get the results.\n",
+ "\n",
+ "In most cases, using `int8` quantization will result in a faster, smaller model, but you will slightly lose some accuracy.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Jobs / Classify](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/classify)\n",
+ " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)\n",
+ " * [Jobs / Get logs](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_logs)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {
+ "id": "HdEksW2M-7Ob",
+ "outputId": "763c70a8-c0f6-4e0c-8f62-85fd7ce83a40",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Waiting for job 38711606 to finish...\n",
+ "Job completed at 2025-10-10T13:34:39.367Z\n",
+ "Inference performed on test set.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Set the model quantization level (\"float32\", \"int8\", or \"akida\")\n",
+ "quantization = \"int8\"\n",
+ "classify_request = StartClassifyJobRequest.from_dict({\n",
+ " \"model_variants\": quantization\n",
+ "})\n",
+ "\n",
+ "# Start model testing job\n",
+ "response = jobs_api.start_classify_job(\n",
+ " project_id=project_id,\n",
+ " start_classify_job_request=classify_request\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not start classify job.\")\n",
+ "\n",
+ "# Extract job ID\n",
+ "job_id = response.id\n",
+ "\n",
+ "# Wait for job to complete\n",
+ "success = poll_job(jobs_api, project_id, job_id)\n",
+ "if success:\n",
+ " print(\"Inference performed on test set.\")\n",
+ "else:\n",
+ " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {
+ "id": "RYTJl-7GCC65",
+ "outputId": "0137c9c1-d6d3-440d-caa8-5d000eee0ad6",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "ERROR: Could not get test metrics.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Get the job logs for the previous job\n",
+ "response = jobs_api.get_jobs_logs(\n",
+ " project_id=project_id,\n",
+ " job_id=job_id\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not get job log.\")\n",
+ "\n",
+ "# Parse and print the test metrics\n",
+ "metrics = get_metrics(response)\n",
+ "if metrics:\n",
+ " print(f\"Test metrics for {quantization} quantization:\")\n",
+ " pprint.pprint(metrics)\n",
+ "else:\n",
+ " print(\"ERROR: Could not get test metrics.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MsEAb8V6MO2B"
+ },
+ "source": [
+ "## Deploy the impulse\n",
+ "\n",
+ "Now that you've trained the model, let's build it as a C++ library and download it. We'll start by printing out the available target devices. Note that this list changes depending on how you've configured your impulse. For example, if you use a Syntiant-specific learning block, then you'll see Syntiant boards listed. We'll use the \"zip\" target, which gives us a generic C++ library that we can use for nearly any hardware.\n",
+ "\n",
+ "The `engine` must be one of:\n",
+ "\n",
+ "```\n",
+ "tflite\n",
+ "tflite-eon\n",
+ "tflite-eon-ram-optimized\n",
+ "tensorrt\n",
+ "tensaiflow\n",
+ "drp-ai\n",
+ "tidl\n",
+ "akida\n",
+ "syntiant\n",
+ "memryx\n",
+ "neox\n",
+ "```\n",
+ "\n",
+ "We'll use `tflite`, as it's the most widely supported.\n",
+ "\n",
+ "`modelType` is the quantization level. Your options are:\n",
+ "\n",
+ "```\n",
+ "float32\n",
+ "int8\n",
+ "```\n",
+ "\n",
+ "In most cases, using `int8` quantization will result in a faster, smaller model, but you will slightly lose some accuracy.\n",
+ "\n",
+ "API calls (links to associated documentation):\n",
+ "\n",
+ " * [Deployment / Deployment targets (data sources)](https://docs.edgeimpulse.com/reference/edge-impulse-api/deployment/deployment_targets_-data_sources)\n",
+ " * [Jobs / Build on-device model](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/build_on-device_model)\n",
+ " * [Deployment / Download](https://docs.edgeimpulse.com/reference/edge-impulse-api/deployment/download)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {
+ "id": "9kePPtX7OsbM",
+ "outputId": "897d2de3-843b-4a30-9736-3108e0dab425",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "zip\n",
+ "zip-linux\n",
+ "android-cpp\n",
+ "arduino\n",
+ "cubemx\n",
+ "wasm\n",
+ "wasm-browser-simd\n",
+ "wasm-node-simd\n",
+ "tensorrt\n",
+ "ethos-alif-ensemble-e7-hp\n",
+ "ethos-alif-ensemble-e7-he\n",
+ "ethos-nxp-imx93\n",
+ "ethos-alif-ensemble-e7-he-cmsis-pack\n",
+ "ethos-alif-ensemble-e7-hp-cmsis-pack\n",
+ "ethos-himax-wiseeye2\n",
+ "ethos-u85\n",
+ "ethos-u85-cmsis-pack\n",
+ "synaptics-tensaiflow-lib\n",
+ "meta-tf\n",
+ "memryx-dfp\n",
+ "tidl-lib-am62a\n",
+ "tidl-lib-am68a\n",
+ "slcc\n",
+ "disco-l475vg\n",
+ "ambiq-apollo5\n",
+ "arduino-nano-33-ble-sense\n",
+ "arduino-nicla-vision\n",
+ "runner-linux-aarch64-advantech-icam540\n",
+ "espressif-esp32\n",
+ "raspberry-pi-rp2040\n",
+ "raspberry-pi-pico2\n",
+ "raspberry-pi-pico2-w\n",
+ "silabs-thunderboard2\n",
+ "silabs-xg24\n",
+ "himax-we-i\n",
+ "infineon-cy8ckit-062s2\n",
+ "infineon-cy8ckit-062-ble\n",
+ "nordic-nrf52840-dk\n",
+ "nordic-nrf5340-dk\n",
+ "nordic-nrf9160-dk\n",
+ "nordic-thingy53\n",
+ "nordic-thingy53-nrf7002eb\n",
+ "nordic-thingy91\n",
+ "nordic-nrf7002-dk\n",
+ "nordic-nrf9161-dk\n",
+ "nordic-nrf9151-dk\n",
+ "nordic-nrf54l15-dk\n",
+ "sony-spresense\n",
+ "sony-spresense-commonsense\n",
+ "ti-launchxl\n",
+ "renesas-ck-ra6m5\n",
+ "brickml\n",
+ "brickml-module\n",
+ "alif-ensemble-e7\n",
+ "alif-ensemble-e7-he\n",
+ "alif-ensemble-e7-hp-sram\n",
+ "alif-ensemble-e7-devkit\n",
+ "alif-ensemble-e7-he-devkit\n",
+ "alif-ensemble-e7-hp-sram-devkit\n",
+ "seeed-grove-vision-ai\n",
+ "runner-linux-aarch64\n",
+ "runner-linux-armv7\n",
+ "runner-linux-x86_64\n",
+ "runner-linux-aarch64-akd1000\n",
+ "runner-linux-x86_64-akd1000\n",
+ "runner-linux-aarch64-qnn\n",
+ "runner-linux-aarch64-gpu\n",
+ "qualcomm-gstreamer-ml-pipeline-eim\n",
+ "runner-mac-x86_64\n",
+ "runner-mac-arm64\n",
+ "runner-linux-aarch64-tda4vm\n",
+ "runner-linux-aarch64-am62a\n",
+ "particle\n",
+ "iar\n",
+ "runner-linux-aarch64-am68a\n",
+ "particle-p2\n",
+ "cmsis-package\n",
+ "runner-linux-aarch64-jetson-nano\n",
+ "runner-linux-aarch64-rzg2l\n",
+ "runner-linux-aarch64-jetson-orin\n",
+ "runner-linux-aarch64-jetson-orin-6-0\n",
+ "st-aton-lib\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Get the available devices\n",
+ "response = deployment_api.list_deployment_targets_for_project_data_sources(\n",
+ " project_id=project_id\n",
+ ")\n",
+ "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n",
+ " raise RuntimeError(\"Could not get device list.\")\n",
+ "\n",
+ "# Print the available devices\n",
+ "targets = [x.to_dict()[\"format\"] for x in response.targets]\n",
+ "for target in targets:\n",
+ " print(target)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {
+ "id": "qInW3vE6OaN6",
+ "outputId": "0147eb66-cf77-4997-f9bc-c29487cb50ca",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Waiting for job 38711667 to finish...\n",
+ "Job completed at 2025-10-10T13:36:41.499Z\n",
+ "Impulse built.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Choose the target hardware (from the list above), engine, and quantization level\n",
+ "target_hardware = \"zip\"\n",
+ "engine = \"tflite\"\n",
+ "quantization = \"int8\"\n",
+ "\n",
+ "# Construct request\n",
+ "device_model_request = BuildOnDeviceModelRequest.from_dict({\n",
+ " \"engine\": engine,\n",
+ " \"modelType\": quantization\n",
+ "})\n",
+ "\n",
+ "# Start build job\n",
+ "response = jobs_api.build_on_device_model_job(\n",
+ " project_id=project_id,\n",
+ " type=target_hardware,\n",
+ " build_on_device_model_request=device_model_request,\n",
+ ")\n",
+ "if not getattr(response, \"success\", False):\n",
+ " raise RuntimeError(\"Could not start deployment build job.\")\n",
+ "\n",
+ "# Extract job ID\n",
+ "job_id = response.id\n",
+ "\n",
+ "# Wait for job to complete\n",
+ "success = poll_job(jobs_api, project_id, job_id)\n",
+ "if success:\n",
+ " print(\"Impulse built.\")\n",
+ "else:\n",
+ " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {
+ "id": "QLU8jDNFpv9T"
+ },
+ "outputs": [],
+ "source": [
+ "# Get the download link information\n",
+ "response = deployment_api.download_build(\n",
+ " project_id=project_id,\n",
+ " type=target_hardware,\n",
+ " model_type=quantization,\n",
+ " engine=engine,\n",
+ " _preload_content=False,\n",
+ ")\n",
+ "if response.status != 200:\n",
+ " raise RuntimeError(\"Could not get download information.\")\n",
+ "\n",
+ "# Find the file name in the headers\n",
+ "file_name = re.findall(r\"filename\\*?=(.+)\", response.headers[\"Content-Disposition\"])[0].replace(\"utf-8''\", \"\")\n",
+ "file_path = os.path.join(OUTPUT_PATH, file_name)\n",
+ "\n",
+ "# Write the contents to a file\n",
+ "with open(file_path, \"wb\") as f:\n",
+ " f.write(response.data)"
+ ]
+ },
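+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an optional convenience, the cell below is a sketch (not part of the Edge Impulse API) that unzips the downloaded library using Python's standard `zipfile` module. It assumes `file_path` and `OUTPUT_PATH` are still defined from the cells above; the `cpp-library` subdirectory name is arbitrary."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import zipfile\n",
+ "\n",
+ "# Extract the downloaded C++ library into a subdirectory of OUTPUT_PATH\n",
+ "extract_dir = os.path.join(OUTPUT_PATH, \"cpp-library\")\n",
+ "with zipfile.ZipFile(file_path, \"r\") as z:\n",
+ " z.extractall(extract_dir)\n",
+ "print(f\"Extracted to {extract_dir}\")"
+ ]
+ },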
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "klQoJH9yvO2C"
+ },
+ "source": [
+ "You should now have a .zip file in your OUTPUT_PATH directory. Move it somewhere convenient on your computer and unzip it. You can then follow [this guide](https://docs.edgeimpulse.com/docs/run-inference/cpp-library/deploy-your-model-as-a-c-library) to link and compile the library as part of an application."
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file