Thursday, 9 November 2017

Jk Papir Moving Average


Outsourcing gives businesses the freedom to hand off non-core but essential parts of their administration to companies that specialize in that area. 1. Outsourcing frees up time and resources, letting you focus on your company's core business. 2. Outsourcing saves you money in payroll costs. The expenses of an employed bookkeeper include salary, paid time off, payroll taxes, unemployment tax, workers' compensation insurance and benefits. On top of that you have to provide workspace, office furniture, office supplies, software and computers. Here's why you should consider outsourcing: on average, business owners spend five or more hours per week managing bookkeeping staff. By outsourcing your accounting functions, you get the services of a professional at a fraction of the cost. 3. Outsourcing your bookkeeping is a much more efficient way to organize your finances for the tax man. The Canada Revenue Agency is far more likely to accept the statements of a recognized bookkeeping service than an in-house company assessment. A professional bookkeeping firm can organize your records in a way the agency will understand. In short, bookkeeping professionals speak the language of the CRA. Most business owners do not, and they don't have time to learn it either. Saving money on taxes and saving time on potential audits is one of the biggest ways to save money as a business owner. A large majority of small businesses that fail do so under the weight of a tax burden on top of other expenses. Outsourced bookkeeping is a true cost reduction for small businesses.

GHVA presents: Moving your business in the right direction with virtual assistance. You are invited to an information-packed networking session on how virtual assistants can help relieve some of the stress and workload you face every day, cost-effectively. Janet Barclay, Organized Assistant; Laurie Meyer, Successful Office Solutions; Salma Burney, Virtual Girl Friday; Jacquie Manore, Workload Solutions Services Inc. Keynote speaker: Dave Howlett, founder and CEO of RealHumanBeing. org, presenting a portion of his presentation How To Connect (as a real human being). His seminars have left thousands of people inspired and determined to do the right thing for themselves, their companies and their children. He will give a 15-minute portion of his famous How To Connect presentation. Cost: 20.00 at the door, or pre-register and save 5.00. The price includes parking and catering by Pepperwood.

Good bookkeeping records mean having a good filing system. Without one, you don't have the other. Keep your bookkeeping up to date. On the sales side, if you don't issue an invoice or receipt, you don't get paid. Purchases should be entered monthly or quarterly to match your GST reporting period. Don't leave it until year-end just because that is your GST reporting period. There are excellent reasons for keeping your books up to date in my article Bookkeeping... Why Bother. When you pay a bill, record the date and the method of payment. Note whether it was paid by cheque or by credit card, and which card it was paid with. If it's a partial payment, record the amount and date of each payment. Then the information is in order for entering into your books. It's a simple thing, but that information can be useful to have 6 or 12 months down the road. Always get a receipt: cash purchases are hard to claim otherwise, and yes, Tim Hortons will give you a receipt if you ask.
If receipts are so faded or crumpled that they are illegible, guess what, they don't go into the books. Credit card statements are not always proof enough. An item bought at Wal-Mart could be anything, and the fact that you bought it with your business card does not by itself establish a deduction. Make detailed deposit slips and keep a copy. Last I checked, banks were still handing out free deposit books; or buy a simple notebook. Keeping a detailed record of every deposit helps match customer payments to the deposits on your bank statement. Use a calendar to remind you of due dates if you track any of the following taxes: PST, GST, payroll, WCB, quarterly income tax. Making payments on time will keep you out of tax arrears with the Canada Revenue Agency. See more about this in my article How I Got So Deep Into Tax Arrears.

Smart business people know that time is money, so they plan ahead. Organized records will make life much easier for the bookkeeper, whether that person is yourself or someone you pay. Should you ever have dealings with Revenue Canada, the business person with organized records will have a much easier time than the person who does not. Under section 230 of the Income Tax Act, every person carrying on business in Canada, and everyone who is required to pay or collect taxes, must keep books and records at their place of business or residence in Canada, in such a format as to enable the assessment and payment of taxes. Most people in business are aware that there is a proper way to keep books. For those who are not, it is important to realize that Revenue Canada has the power to require you to keep proper books. Good bookkeeping records mean having a good filing system. Without one, you don't have the other. Set up a filing system that you can follow, and use it. This is probably the first and most important step toward keeping good records. Simple filing systems are easy to set up and maintain.

GST quarterly filers: your GST return for April/May/June 2008 is due July 31, 2008. How do I know if I'm a quarterly filer? Get out your GST form, called "Goods and Services Tax/Harmonized Sales Tax (GST/HST) Return for Registrants". The most important pieces of information you need are the three boxes in the top right corner of page 1. The first box shows the due date of your payment, the second box shows your account number, and the third box shows your reporting period. Or you may be an annual filer; the reporting period box will tell you the date range of your remittance period.

How much do I need to pay? Organize your sales receipts to calculate the GST collected on sales. As of January 1, 2008, the GST rate is 5%. Collect and organize your business receipts to calculate the GST paid on purchases. Subtract GST on purchases from GST on sales and, in general, remit the difference to the Receiver General. (I'm assuming sales were greater than purchases.) If GST on purchases is greater than GST on sales, you may get a refund, but it all depends; there are always exceptions to the rule. These days there are many ways to make your payment. You can: mail a cheque; visit your local bank; use online banking; use GST Netfile (cra-arc. gc. camenu-e. html); or GST Telefile (cra-arc. gc. camenu-e. html). Send your payment in on time. The Receiver General is very unforgiving of lateness and will impose penalties and interest charges compounded daily. Click this link to the Canada Revenue Agency website for everything you ever wanted to know about GST: cra-arc. gc. cataxbusinesstopicsgstmenu-e. html
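For what it's worth, the arithmetic described above is simple enough to jot down in a few lines of code. The sketch below is purely illustrative: it assumes a flat 5% GST rate and sales and purchase totals you have already tallied from your receipts, and the function name and numbers are made up.

```python
# Hypothetical illustration of the GST arithmetic described above,
# assuming a flat 5% GST rate and receipt totals already tallied.

GST_RATE = 0.05

def net_gst_owing(gst_collected_on_sales, gst_paid_on_purchases):
    """Amount to remit (positive) or refund position (negative)."""
    return gst_collected_on_sales - gst_paid_on_purchases

# Example: $40,000 of sales and $15,000 of purchases in the quarter.
gst_on_sales = 40000 * GST_RATE      # 2000.00 collected
gst_on_purchases = 15000 * GST_RATE  # 750.00 paid out
print(net_gst_owing(gst_on_sales, gst_on_purchases))  # 1250.00 to remit
```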
Raise your hand if you've started working on your 2008 bookkeeping. Excellent. And the rest of you, what are you waiting for? Why wait until April 30 to see the results of a year's work? By starting now you can produce a Profit & Loss statement that will show you whether you've made or lost money and how you spent it. That report is a wonderful piece of information that can help you more now than later. Do you use the services of a bookkeeper, or do you do it yourself? We wear many hats while trying to run our businesses, and maybe we wear too many. If you struggle with the bookkeeping, and I know it is not a pleasant task, then perhaps you should consider getting some help. Most professional bookkeepers will provide outsourced bookkeeping, training in the use of the software, or help you figure out which expense category to use.

Excerpt from a home-based business article: Don't overlook management and bookkeeping. Lack of management skills is one of the biggest causes of business failure. Take courses, seek expert advice or hire help, but learn basic management skills before you start. canadabusiness. caservletContentServerpagenameCBSCFEdisplayampcGuideFactSheetampcid1081945277281en

Of course, you need some kind of system for recording everything. That could be an accounting program, a spreadsheet or a paper-based system. In the comments, please let me know what kind of system you use for your bookkeeping; I'd really like to know. In a future article I'll post my findings along with information about the various systems.

The Canadian Bookkeepers Association (CBA) is a national, not-for-profit organization committed to the advancement of professional bookkeepers. Membership in the CBA gives bookkeepers the resources to succeed in an ever-changing environment. Our association builds expertise through knowledge and is growing rapidly, representing a comprehensive approach to financial management for businesses of all sizes. Our membership grows every day and represents bookkeepers in the majority of Canada's provinces and territories. Our MISSION includes: To promote, support, provide for and encourage Canadian bookkeepers. To promote and increase awareness of bookkeeping in Canada as a professional discipline. To support national, regional and local networking among Canadian bookkeepers. To provide information on leading procedures, education and technologies that improve the industry as well as the Canadian bookkeeping professional. To support and encourage responsible and accurate bookkeeping across Canada. We are committed to growth that benefits our members and bookkeeping in Canada as a professional discipline. Our goals include advances in distance education, certification of bookkeepers and regional chapters. We appreciate suggestions that improve this website and the association. We listen to and appreciate your input. We are working toward a designation for bookkeepers in Canada; the designation will be "Certified Professional Bookkeeper". The Canadian Bookkeepers Association was formerly known as the Canadian Bookkeepers Alliance. The CBA began accepting members in early 2003. On February 9, 2004, the Canadian Bookkeepers Association was incorporated as a not-for-profit association. Membership growth has far exceeded what was originally expected. We are thrilled with the growth of the association. We have grown with every milestone into the national non-profit organization we are today, with members in nearly every province and territory.
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation. The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.

This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases.
And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context.

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below.

We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and the surrounding discussion in the last chapter)
\begin{eqnarray} a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right), \tag{23}\end{eqnarray}
where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$. The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function.
That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect
\begin{eqnarray} f\left( \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] \right) = \left[ \begin{array}{c} f(2) \\ f(3) \end{array} \right] = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right], \tag{24}\end{eqnarray}
that is, the vectorized $f$ just squares every element of the vector.

With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form
\begin{eqnarray} a^l = \sigma(w^l a^{l-1} + b^l). \tag{25}\end{eqnarray}
This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". That global view is often easier and more succinct (and involves fewer indices) than the neuron-by-neuron view we've taken to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network.

When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$. We'll use the weighted input $z^l$ a lot later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.
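To make Equation (25) concrete, here is a minimal NumPy sketch of the matrix-based feedforward pass. The layer sizes, the random weights and biases, and the feedforward helper are all invented for illustration; only the rule $a^l = \sigma(w^l a^{l-1} + b^l)$ and the weighted input $z^l$ come from the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a, weights, biases):
    """Apply Equation (25) layer by layer: a' = sigma(w a + b)."""
    for w, b in zip(weights, biases):
        z = np.dot(w, a) + b   # the weighted input z^l
        a = sigmoid(z)         # the activation a^l
    return a

# A made-up 3-layer network: 4 inputs, 5 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
sizes = [4, 5, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]

x = rng.standard_normal((4, 1))              # a single input, as a column vector
print(feedforward(x, weights, biases).shape)  # (2, 1)
```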
The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6)). In the notation of the last section, the quadratic cost has the form
\begin{eqnarray} C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2, \tag{26}\end{eqnarray}
where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input. Ok, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied?

The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \|y - a^L\|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book. The reason we need this assumption is that what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.

The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as
\begin{eqnarray} C = \frac{1}{2} \|y - a^L\|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2, \tag{27}\end{eqnarray}
and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e. it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.

The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,
\begin{eqnarray} \left[ \begin{array}{c} 1 \\ 2 \end{array} \right] \odot \left[ \begin{array}{c} 3 \\ 4 \end{array} \right] = \left[ \begin{array}{c} 1 \cdot 3 \\ 2 \cdot 4 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 8 \end{array} \right]. \tag{28}\end{eqnarray}
This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.

Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation.
It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$. Now, this demon is a good demon, and is trying to help you improve the cost, i.e. it is trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only the case for small changes $\Delta z^l_j$, of course. We'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by
\begin{eqnarray} \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29}\end{eqnarray}
As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. You might wonder why the demon is changing the weighted input $z^l_j$. Surely it would be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. E.g. if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.

Plan of attack: backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations. Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding, those equations will come to seem comfortable and perhaps even beautiful and natural.
An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1}\end{eqnarray}
This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$. Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as
\begin{eqnarray} \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a}\end{eqnarray}
Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes
\begin{eqnarray} \delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30}\end{eqnarray}
As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
\begin{eqnarray} \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2}\end{eqnarray}
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot\, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$. By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.
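To see (BP1) and (BP2) with concrete numbers, here is a small NumPy sketch of the quadratic-cost case, Equation (30), followed by one application of (BP2). The tiny vectors and the weight matrix are made up purely for illustration; note that in NumPy the Hadamard product $\odot$ is just the elementwise operator `*`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

# Made-up weighted inputs for a 2-neuron output layer L and a 3-neuron
# layer L-1, plus the desired output y and the weights into layer L.
z_L = np.array([[0.5], [-1.2]])
a_L = sigmoid(z_L)
y = np.array([[1.0], [0.0]])
z_prev = np.array([[0.3], [0.1], [-0.7]])                 # weighted inputs of layer L-1
w_L = np.random.default_rng(1).standard_normal((2, 3))    # weights into layer L

# (BP1) / Equation (30): output error for the quadratic cost.
delta_L = (a_L - y) * sigmoid_prime(z_L)

# (BP2): move the error back one layer.
delta_prev = np.dot(w_L.T, delta_L) * sigmoid_prime(z_prev)
print(delta_L.ravel(), delta_prev.ravel())
```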
An equation for the rate of change of the cost with respect to any bias in the network: In particular
\begin{eqnarray} \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3}\end{eqnarray}
That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as
\begin{eqnarray} \frac{\partial C}{\partial b} = \delta, \tag{31}\end{eqnarray}
where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular
\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4}\end{eqnarray}
This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as
\begin{eqnarray} \frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32}\end{eqnarray}
where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as a single input activation feeding a single output error. A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.

There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons.

We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.) Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around.
The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as
\begin{eqnarray} \delta^L = \Sigma'(z^L) \nabla_a C, \tag{33}\end{eqnarray}
where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as
\begin{eqnarray} \delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34}\end{eqnarray}
(3) By combining observations (1) and (2) show that
\begin{eqnarray} \delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C. \tag{35}\end{eqnarray}
For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.

We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on. Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36}\end{eqnarray}
Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,
\begin{eqnarray} \delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37}\end{eqnarray}
where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38}\end{eqnarray}
Recalling that $a^L_j = \sigma(z^L_j)$, the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39}\end{eqnarray}
which is just (BP1), in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,
\begin{eqnarray} \delta^l_j & = & \frac{\partial C}{\partial z^l_j} \tag{40}\\ & = & \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} \tag{41}\\ & = & \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{42}\end{eqnarray}
where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term on the last line, note that
\begin{eqnarray} z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43}\end{eqnarray}
Differentiating, we obtain
\begin{eqnarray} \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}\end{eqnarray}
Substituting back into (42) we obtain
\begin{eqnarray} \delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45}\end{eqnarray}
This is just (BP2) written in component form.

The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise. That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. That's all there really is to backpropagation - the rest is details.

The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:

1. Input x: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$ and $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions.

Exercises: Backpropagation with a single modified neuron. Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case? Backpropagation with linear neurons. Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$.
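The five steps above translate almost line for line into NumPy. The sketch below is a condensed illustration of the algorithm for a single training example with the quadratic cost; it is not the book's network.py listing (which is discussed in the next section), and the argument names are my own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop(x, y, weights, biases):
    """Return (nabla_b, nabla_w), the gradient of the quadratic cost C_x
    for a single training example, following steps 1-5 above."""
    # Steps 1-2: feedforward, storing all z's and activations layer by layer.
    activation, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Step 3: output error, Equation (30): delta^L = (a^L - y) * sigma'(z^L).
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b = [delta]                                  # (BP3)
    nabla_w = [np.dot(delta, activations[-2].T)]       # (BP4)
    # Step 4: backpropagate the error with (BP2), from layer L-1 down to 2.
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b.insert(0, delta)
        nabla_w.insert(0, np.dot(delta, activations[-l - 1].T))
    # Step 5: output the gradients, ordered from the first layer to the last.
    return nabla_b, nabla_w
```

Scaling the returned gradients by the learning rate (and averaging over a mini-batch) gives the gradient descent update discussed next.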
In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch:

1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps. Feedforward: for each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$. Output error $\delta^{x,L}$: compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$. Backpropagate the error: for each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity.

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g. l[-3] is the third-last entry in a list l. The code for backprop is given in the book, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.

Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch.
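Here is one possible sketch of the fully matrix-based idea described in the problem above, written in the same style as the earlier snippets; it is an illustration under my own conventions, not the book's modification of network.py. Each column of X and Y holds one training example, and the bias vectors broadcast across the columns.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop_matrix(X, Y, weights, biases):
    """Gradients summed over a whole mini-batch at once.
    X and Y hold one training example per column."""
    activation, activations, zs = X, [X], []
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b          # b broadcasts over the m columns
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    delta = (activations[-1] - Y) * sigmoid_prime(zs[-1])   # (30), all columns at once
    nabla_b = [delta.sum(axis=1, keepdims=True)]
    nabla_w = [np.dot(delta, activations[-2].T)]            # sums delta_x a_x^T over x
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b.insert(0, delta.sum(axis=1, keepdims=True))
        nabla_w.insert(0, np.dot(delta, activations[-l - 1].T))
    # Divide by the mini-batch size (and scale by the learning rate)
    # when applying the gradient descent update.
    return nabla_b, nabla_w
```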
(On my laptop, for example, the speedup from the matrix-based approach is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach. You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation
\begin{eqnarray} \frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}, \tag{46}\end{eqnarray}
where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases.

This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient! Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network.

What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster.
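Slow as it is, Equation (46) is still handy as a sanity check on a backpropagation implementation: nudge one weight at a time and difference the costs. Here is a minimal, self-contained sketch with a made-up tiny network and the quadratic cost for a single example; all of the names and numbers are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(weights, biases, x, y):
    """Quadratic cost C = 0.5 * ||y - a^L||^2 for one training example."""
    a = x
    for w, b in zip(weights, biases):
        a = sigmoid(np.dot(w, a) + b)
    return 0.5 * np.sum((y - a) ** 2)

rng = np.random.default_rng(2)
sizes = [3, 4, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]
x = rng.standard_normal((3, 1))
y = np.array([[1.0], [0.0]])

# Equation (46): nudge a single weight by epsilon and difference the costs.
eps = 1e-5
l, j, k = 0, 1, 2                      # an arbitrary weight in the first weight matrix
base = cost(weights, biases, x, y)
weights[l][j, k] += eps
estimate = (cost(weights, biases, x, y) - base) / eps
weights[l][j, k] -= eps                # restore the original weight
print(estimate)  # approx. dC/dw for that one weight; one forward pass per weight
```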
This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e. networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.

To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation
\begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47}\end{eqnarray}
This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$.

Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by
\begin{eqnarray} \Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48}\end{eqnarray}
The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e. the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:
\begin{eqnarray} \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49}\end{eqnarray}
Substituting in the expression from Equation (48), we get:
\begin{eqnarray} \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50}\end{eqnarray}
Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output.
If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is
\begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51}\end{eqnarray}
that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network. Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.
\begin{eqnarray} \Delta C \approx \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52}\end{eqnarray}
where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that
\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53}\end{eqnarray}
Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path.

What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost.

Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing. What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation.
Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier: short, but somewhat obscure, because all the signposts to its construction have been removed. There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter. I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it. If you're interested in commercial use, please contact me. Last update: Thu Jan 19 06:09:48 2017
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation.

The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.

This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail.

With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context.

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below. We use a similar notation for the network's biases and activations.
Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. The following diagram shows examples of these notations in use.

With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and surrounding discussion in the last chapter)
\begin{eqnarray} a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right), \tag{23}\end{eqnarray}
where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$.

The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect
\begin{eqnarray} f\left(\left[ \begin{array}{c} 2 \\ 3 \end{array} \right] \right) = \left[ \begin{array}{c} f(2) \\ f(3) \end{array} \right] = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right], \tag{24}\end{eqnarray}
that is, the vectorized $f$ just squares every element of the vector.

With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form
\begin{eqnarray} a^{l} = \sigma(w^l a^{l-1} + b^l). \tag{25}\end{eqnarray}
This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". That global view is often easier and more succinct (and involves fewer indices!) than the neuron-by-neuron view we've taken to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network.

When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$.
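To make Equation (25) concrete, here is a minimal NumPy sketch of the vectorized feedforward rule; the names feedforward, weights and biases are illustrative choices, not code taken from the book.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic function, applied to a whole vector at once.
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a, weights, biases):
    """Sketch of Equation (25): a' = sigma(w a + b), applied layer by layer.
    `weights` is a list of matrices (entry [j, k] playing the role of w^l_{jk})
    and `biases` is a list of column vectors, one per layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(np.dot(w, a) + b)   # weight matrix, plus bias, then vectorized sigma
    return a

# Toy usage: a 3-4-2 network with random parameters.
rng = np.random.default_rng(0)
sizes = [3, 4, 2]
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]
print(feedforward(rng.standard_normal((3, 1)), weights, biases))
```

Note that each weight matrix has one row per neuron in its layer, so "apply the weight matrix to the activations" is literally a single matrix-vector product.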
We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.

The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6)). In the notation of the last section, the quadratic cost has the form
\begin{eqnarray} C = \frac{1}{2n} \sum_x \|y(x)-a^L(x)\|^2, \tag{26}\end{eqnarray}
where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input.

Okay, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \|y-a^L\|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book.

The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.

The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as
\begin{eqnarray} C = \frac{1}{2} \|y-a^L\|^2 = \frac{1}{2} \sum_j (y_j-a^L_j)^2, \tag{27}\end{eqnarray}
and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e. it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.

The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors.
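As a small illustration of the per-example quadratic cost (27) and its gradient with respect to the output activations, here is a hedged NumPy sketch; the function names and the toy values are my own.

```python
import numpy as np

def quadratic_cost(a_L, y):
    # Per-example quadratic cost C_x = (1/2) * ||y - a^L||^2, as in Equation (27).
    return 0.5 * np.sum((y - a_L) ** 2)

def quadratic_cost_grad(a_L, y):
    # Gradient with respect to the output activations: nabla_a C = (a^L - y).
    return a_L - y

a_L = np.array([[0.8], [0.2]])   # network output for one example
y = np.array([[1.0], [0.0]])     # desired output
print(quadratic_cost(a_L, y), quadratic_cost_grad(a_L, y).ravel())
```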
Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,
\begin{eqnarray} \left[\begin{array}{c} 1 \\ 2 \end{array}\right] \odot \left[\begin{array}{c} 3 \\ 4\end{array} \right] = \left[ \begin{array}{c} 1 \cdot 3 \\ 2 \cdot 4 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 8 \end{array} \right]. \tag{28}\end{eqnarray}
This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.

Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j+\Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$.

Now, this demon is a good demon, and is trying to help you improve the cost, i.e. they're trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only the case for small changes $\Delta z^l_j$, of course. We'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by
\begin{eqnarray} \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29}\end{eqnarray}
As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. (In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. For example, if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.)

Plan of attack: Backpropagation is based around four fundamental equations.
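In NumPy, for instance, the Hadamard product is simply the elementwise * operator, so Equation (28) can be checked in a couple of lines:

```python
import numpy as np

s = np.array([1, 2])
t = np.array([3, 4])

# The Hadamard (elementwise) product from Equation (28): (s ⊙ t)_j = s_j * t_j.
print(s * t)                 # -> [3 8]
print(np.multiply(s, t))     # equivalent spelling
```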
Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations.

Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1}\end{eqnarray}
This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$.

Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j-a^L_j)^2$, and so $\partial C / \partial a^L_j = (a_j^L-y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as
\begin{eqnarray} \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a}\end{eqnarray}
Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L-y)$, and so the fully matrix-based form of (BP1) becomes
\begin{eqnarray} \delta^L = (a^L-y) \odot \sigma'(z^L). \tag{30}\end{eqnarray}
As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
\begin{eqnarray} \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2}\end{eqnarray}
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot \, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$.

By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.

An equation for the rate of change of the cost with respect to any bias in the network: In particular:
\begin{eqnarray} \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3}\end{eqnarray}
That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as
\begin{eqnarray} \frac{\partial C}{\partial b} = \delta, \tag{31}\end{eqnarray}
where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:
\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4}\end{eqnarray}
This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as
\begin{eqnarray} \frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32}\end{eqnarray}
where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as:

A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.

There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$.
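To see the four equations in action, here is a minimal sketch that applies (BP1)-(BP4) to a single training example in a toy two-layer network, assuming the quadratic cost; the variable names are illustrative and the parameters are random rather than trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)) for the logistic function.
    return sigmoid(z) * (1 - sigmoid(z))

# Toy 2-3-2 network with fixed random parameters (illustrative only).
rng = np.random.default_rng(1)
sizes = [2, 3, 2]
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]
x = np.array([[0.5], [-1.0]])
y = np.array([[1.0], [0.0]])

# Forward pass, storing the z's and activations the four equations need.
activations, zs = [x], []
for w, b in zip(weights, biases):
    zs.append(np.dot(w, activations[-1]) + b)
    activations.append(sigmoid(zs[-1]))

# BP1: output error for the quadratic cost, delta^L = (a^L - y) ⊙ sigma'(z^L).
delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
# BP3 and BP4 at the output layer.
grad_b = [delta]
grad_w = [np.dot(delta, activations[-2].T)]

# BP2: move the error back one layer, then reuse BP3 and BP4 there.
delta = np.dot(weights[-1].T, delta) * sigmoid_prime(zs[-2])
grad_b.insert(0, delta)
grad_w.insert(0, np.dot(delta, activations[-3].T))
```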
And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons.

We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.)

Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as
\begin{eqnarray} \delta^L = \Sigma'(z^L) \nabla_a C, \tag{33}\end{eqnarray}
where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as
\begin{eqnarray} \delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34}\end{eqnarray}
(3) By combining observations (1) and (2) show that
\begin{eqnarray} \delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^{L})^T \Sigma'(z^L) \nabla_a C. \tag{35}\end{eqnarray}
For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2).
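As a quick numerical sanity check of claim (1) in this exercise, the diagonal-matrix form (33) and the Hadamard form (BP1a) give the same vector; the sketch below assumes a sigmoid activation and uses made-up values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

rng = np.random.default_rng(2)
z_L = rng.standard_normal(4)        # weighted inputs at the output layer
nabla_a_C = rng.standard_normal(4)  # some gradient w.r.t. the output activations

# (BP1a): the Hadamard form.
delta_hadamard = nabla_a_C * sigmoid_prime(z_L)
# Equation (33): the same thing via a diagonal matrix Sigma'(z^L).
delta_matrix = np.diag(sigmoid_prime(z_L)) @ nabla_a_C

print(np.allclose(delta_hadamard, delta_matrix))  # True
```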
The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.

We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on.

Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36}\end{eqnarray}
Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,
\begin{eqnarray} \delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37}\end{eqnarray}
where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38}\end{eqnarray}
Recalling that $a^L_j = \sigma(z^L_j)$ the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39}\end{eqnarray}
which is just (BP1) in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,
\begin{eqnarray} \delta^l_j & = & \frac{\partial C}{\partial z^l_j} \tag{40}\\ & = & \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} \tag{41}\\ & = & \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{42}\end{eqnarray}
where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term on the last line, note that
\begin{eqnarray} z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43}\end{eqnarray}
Differentiating, we obtain
\begin{eqnarray} \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}\end{eqnarray}
Substituting back into (42) we obtain
\begin{eqnarray} \delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45}\end{eqnarray}
This is just (BP2) written in component form.

The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise.

That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. That's all there really is to backpropagation - the rest is details.

The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:

1. Input $x$: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$ and $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions.

Exercise: Backpropagation with a single modified neuron. Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case?

Exercise: Backpropagation with linear neurons. Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch (a code sketch follows below):

1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps: Feedforward: for each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$. Output error $\delta^{x,L}$: compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$. Backpropagate the error: for each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity.

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g., l[-3] is the third-last entry in a list l.
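The sketch below condenses the two procedures above into standalone functions. It mirrors the structure of the book's backprop and update_mini_batch methods, but the function signatures and the assumption of a quadratic cost are my own simplifications, not the book's exact code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

def backprop(x, y, weights, biases):
    """Per-example gradients (nabla_b, nabla_w) via the algorithm above."""
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    # Feedforward, remembering every weighted input z and activation.
    activation, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Output error (BP1) for the quadratic cost, then BP3 and BP4.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b[-1] = delta
    nabla_w[-1] = np.dot(delta, activations[-2].T)
    # Backpropagate with BP2, filling in BP3 and BP4 layer by layer.
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l - 1].T)
    return nabla_b, nabla_w

def update_mini_batch(mini_batch, eta, weights, biases):
    """One gradient descent step: sum the per-example gradients and average."""
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    for x, y in mini_batch:
        dnb, dnw = backprop(x, y, weights, biases)
        nabla_b = [nb + d for nb, d in zip(nabla_b, dnb)]
        nabla_w = [nw + d for nw, d in zip(nabla_w, dnw)]
    weights = [w - (eta / len(mini_batch)) * nw for w, nw in zip(weights, nabla_w)]
    biases = [b - (eta / len(mini_batch)) * nb for b, nb in zip(biases, nabla_b)]
    return weights, biases
```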
The code for backprop is below, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.

Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach.

You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation
\begin{eqnarray} \frac{\partial C}{\partial w_{j}} \approx \frac{C(w+\epsilon e_j)-C(w)}{\epsilon}, \tag{46}\end{eqnarray}
where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases.

This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient! Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w+\epsilon e_j)$ in order to compute $\partial C / \partial w_j$.
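For comparison, here is roughly what the naive approach of Equation (46) looks like in code - a sketch with a made-up toy cost function, just to show that it needs one extra cost evaluation per weight:

```python
import numpy as np

def finite_difference_grad(C, w, eps=1e-6):
    """Naive gradient estimate from Equation (46): one extra cost
    evaluation per weight, which is what makes this approach so slow."""
    grad = np.zeros_like(w)
    base = C(w)
    for j in range(w.size):
        w_plus = w.copy()
        w_plus.flat[j] += eps
        grad.flat[j] = (C(w_plus) - base) / eps
    return grad

# Toy cost: C(w) = ||w||^2 / 2, whose true gradient is just w.
C = lambda w: 0.5 * np.sum(w ** 2)
w = np.array([1.0, -2.0, 3.0])
print(finite_difference_grad(C, w))  # approximately [1, -2, 3]
```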
That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network.

What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster.

This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e. networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.

To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation
\begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47}\end{eqnarray}
This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$.
If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$.

Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by
\begin{eqnarray} \Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48}\end{eqnarray}
The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e. the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:
\begin{eqnarray} \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49}\end{eqnarray}
Substituting in the expression from Equation (48), we get:
\begin{eqnarray} \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50}\end{eqnarray}
Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is
\begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51}\end{eqnarray}
that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C/\partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network. Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.
\begin{eqnarray} \Delta C \approx \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52}\end{eqnarray}
where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that
\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53}\end{eqnarray}
Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path:

What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications.
This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost.

Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing.

What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier - short, but somewhat obscure, because all the signposts to its construction have been removed.

There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter. I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it. If you're interested in commercial use, please contact me. Last update: Thu Jan 19 06:09:48 2017
