Neural network approaches to fractal image compression and decompression

K.T. Sun*, S.J. Lee, P.Y. Wu

Institute of Computer Science and Information Education, National Tainan Teachers College, Tainan 700, Taiwan
Department of Mathematics and Sciences Education, National Tainan Teachers College, Tainan 700, Taiwan

Accepted 7 November 2000
Abstract
In image compression technologies, fractal image compression/decompression has the advantages of a high compression ratio and a low loss ratio. However, it requires a great deal of computation, which limits its applications, and so far, no parallel processing technique has been designed and implemented. In this study, we use neural networks to perform the large number of computations in fractal image compression and decompression in parallel. The simulation results show that the quality of images generated by neural networks is similar to that produced using traditional methods, which verifies that neural network technology is useful and efficient when applied to fractal image compression and decompression. © 2001 Elsevier Science B.V. All rights reserved.
Keywords: Fractal image compression/decompression; Neural networks; Parallel processing
1. Introduction
Recently, graphical representation in computers has been widely applied in many applications because such representations are meaningful to human beings. However, this approach requires large storage and long transmission time.
The technique of image compression/decompression is useful and important for reducing storage space and transmission time. In general, compression technologies can be divided into two types, lossy compression and lossless compression, according to whether the decompressed image is the same as the original one or not. If the
* Corresponding author.
E-mail addresses: ktsun@ipx.ntntc.edu.tw (K.T. Sun), pywu@ipx.ntntc.edu.tw (P.Y. Wu).
0925-2312/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0925-2312(00)00349-0
K.T. Sun et al. / Neurocomputing 41 (2001) 91-107
proper loss ratio is allowable, lossy compression methods can achieve higher compression ratios [11].
Three technologies are usually used in lossy compression: vector quantization (VQ), discrete cosine transformation (DCT) and fractal image compression. The VQ method partitions an image into numerous sub-images and finds some representatives from them as a codebook [6,23]. The DCT method converts the gray levels of an image into other coordinates (e.g., frequency), and then quantizes and stores them [11,24]. Fractal image compression exploits the self-similarity characteristics in an image: during decompression, an image converges to an acceptable status [3,9].
Unlike VQ, fractal image compression does not require a codebook for the decompression procedure [6]. Fractal image compression is also attractive because of its high compression ratio and low loss ratio [24]. Some results have been obtained using this technique: the Hutchinson metric has been proposed to prove the condition of convergence [1,8,21], and Mandelbrot has generated images based on fractal theory [21]. By developing a collage theorem and the iterated function system (IFS), Barnsley produced a high compression ratio fractal code, and this motivated much related research [1-3,7,10,19]. However, fractal code cannot be generated automatically using IFS [4,14,15,19,24,33]. Jacquin proposed a partitioned iterated function system (PIFS) to improve IFS so that the fractal code can be determined automatically [14,15]. However, a great deal of computation is still required.
Neural network technology is new and useful, and has been successfully used in many fields [5,12,13,16-18,20,22,25,27-32]. Stark was the first to propose applying neural networks to IFS [5,27,28]. His method, based on the Hopfield neural network, solves the linear programming problem and obtains the Hutchinson metric quickly [27,31]. However, his neural network approach only works with the IFS decompression procedure.
In this study, we applied neural network technology in PIFS so that the fractal code could be generated automatically. In our method, a neuron is used to represent a pixel in an image, and the weights and thresholds are used as the fractal code. In this way, proper weights and thresholds can be obtained in the compression (training) procedure, and the original image can be reconstructed in the decompression (retrieving) procedure. In Section 2, we will introduce PIFS theory and the idea of compression/decompression using neural network technologies. In Section 3, image compression using two different neural network models will be introduced. Then, the decompression method will be explained in Section 4. Section 5 will present some simulation results. Finally, a brief conclusion will be given in Section 6.
2. Review of research on the partitioned iterated function system

2.1. Basic concepts of fractal image compression
The basic idea of fractal image compression is to use the characteristics of self-similarity in an image. In Fig. 1(a), the triangle can be divided into three sub-images, as
Fig. 1. Three transformation functions show self-similarity in an image. (a) The original image. (b) Partition into 3 similar sub-images after one iteration. (c) Partition into 9 similar sub-images after two iterations.
Fig. 2. The decompression procedure for a fractal image. (a) The initial image. (b) After one iteration. (c) After two iterations. (d) After fifteen iterations.
shown in Fig. 1(b). All of these sub-images are the same as the original image except that the size has been reduced by 75%, and they can be partitioned into still smaller parts, as shown in Fig. 1(c). The smaller parts are also similar to the sub-images. These relationships continue to hold between sub-images as the partition operations are performed repeatedly. Then, we only need to determine the transformation functions that map the original image to the sub-images. For example, in Fig. 1, three transformation functions are used to reduce an image into three sub-images, each one quarter the size of the original image. One sub-image is then put on the upper side, one on the lower right-hand side, and one on the lower left-hand side of the original image. Therefore, the original image (the triangle) can be decompressed using these transformation functions.
When the transformation functions have been obtained, any image can be used as the initial image for the decompression procedure, and the original image can be generated after many decompression iterations. For example, we can use the `fern' image as the initial image, and the triangle (original image) can be generated after 15 iterations by applying the transformation functions (shown in Eq. (1) and Fig. 2).
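To make this iteration concrete, the decompression loop can be sketched with the three affine maps of Eq. (1). This is a minimal sketch; the random "chaos game" sampling and all numeric values are illustrative, not the paper's implementation:

```python
# Chaos-game iteration of the three IFS maps of Eq. (1): each map
# halves the image and shifts it, so repeated application drives any
# starting point onto the triangle attractor of Fig. 1.
import random

MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),                # lower-left copy
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),          # lower-right copy
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),   # upper copy
]

def attractor_points(n_iter=20000, seed=0):
    """Return points approximating the IFS attractor in the unit square."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    pts = []
    for i in range(n_iter):
        x, y = rng.choice(MAPS)(x, y)
        if i > 20:                  # discard the initial transient
            pts.append((x, y))
    return pts
```

Plotting the returned points reproduces the triangle of Fig. 2(d); because every map is contractive, the choice of initial image does not matter.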
Three mapping transformations for Fig. 1 are shown in Eq. (1), and this procedure is called the iterated function system (IFS):
$$w_i:\ \begin{bmatrix}x'\\ y'\end{bmatrix}=\begin{bmatrix}0.5&0\\ 0&0.5\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}e_i\\ f_i\end{bmatrix},\quad (e_1,f_1)=(0,0),\ (e_2,f_2)=(0.5,0),\ (e_3,f_3)=(0.25,0.5). \tag{1}$$
Here, $(x, y)$ is the coordinate of the original image, and $(x', y')$ is the coordinate of the transformed image. As a result, only the three transformation functions $\{w_1, w_2, w_3\}$ (also called the fractal code) are stored instead of the image data.

2.2. Partitioned iterated function system (PIFS)
For Fig. 3(a), it is almost impossible to find the fractal code of IFS. However, we can find some similarities between blocks of sub-images. There are two pairs of blocks which are similar to each other, as shown in Fig. 3(b). One pair is part of a hat and part of a shoulder, and the other pair is a smaller part and a larger part of the face. When we consider the gray level of an image, an additional dimension is added. The transformation function then becomes that in Eq. (2):

$$w_i\begin{bmatrix}x\\ y\\ z\end{bmatrix}=\begin{bmatrix}a_i&b_i&0\\ c_i&d_i&0\\ 0&0&s_i\end{bmatrix}\begin{bmatrix}x\\ y\\ z\end{bmatrix}+\begin{bmatrix}e_i\\ f_i\\ o_i\end{bmatrix}, \tag{2}$$

where $z$ and $z'$ are the gray levels, $a_i$, $b_i$, $c_i$ and $d_i$ are the coordinate coefficients of this transformation, $(e_i, f_i)$ is the offset of the transformation, and $s_i$ and $o_i$ represent contrast and brightness, respectively.
Fig. 3. There are some similarities between the sub-images in the image Lena. (a) The original image of Lena. (b) Two pairs of blocks similar in shape.
Fig. 4. The concept of PIFS. (a) Overlapping and larger sub-images. (b) Non-overlapping and smaller sub-images.
Fig. 4 shows the concept of PIFS. Two identical images are partitioned and compared. Each non-overlapping sub-image in Fig. 4(b) needs a transformation function $w_i$ to transform a larger and similar sub-image from Fig. 4(a) to Fig. 4(b).
If we choose a size for the sub-images in Fig. 4(a) that is 4 times that of the sub-images in Fig. 4(b) (twice the height and twice the width), then Eq. (2) becomes Eq. (3).
These transformation functions are the fractal codes that are used to represent the compressed image and will be used in the decompression process. Based on the concept of quadtree partitioning [9,26], the steps in the PIFS method [14,15] are as follows:
(1) Set a threshold value e for the error and a minimum size r for the ranges. (This error is defined as the average of the absolute difference of gray levels of pixels between the range and the corresponding domain. In the domain, the gray levels of every four pixels are averaged and compared with the gray level of the corresponding single pixel in the range. The lower the value of e, the higher the similarity.)
(2) Divide the whole image into 4 non-overlapping sub-images (ranges), each one quarter the size of the original image.
(3) For each range i, a domain j (4 times the size of i) with the least error (≤ e) is found from all domains. Then, a transformation function $w_i$ is determined for the range i. By solving the differential equation for the transformation function, the contrast $s_i$ and the brightness $o_i$ can be determined so as to provide a minimum error for the transformation function $w_i$.
$$w_i\begin{bmatrix}x\\ y\\ z\end{bmatrix}=\begin{bmatrix}0.5&0&0\\ 0&0.5&0\\ 0&0&s_i\end{bmatrix}\begin{bmatrix}x\\ y\\ z\end{bmatrix}+\begin{bmatrix}e_i\\ f_i\\ o_i\end{bmatrix}. \tag{3}$$
(4) If the range i cannot provide a similar domain j (i.e., the error between i and j is greater than the threshold value e), then the range i is divided into 4 equal-sized sub-images, and the size of each one is greater than or equal to r. Go to step (3) to find the transformation function for each divided sub-image (range).
(5) If the size of range i is equal to r, and if no similar domain can be found, then the range i is not divided further, and a domain j with the least error is selected. In this case, the transformation function $w_i$ is also determined even if the error between i and j is greater than the threshold value e.
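Steps (1)-(5) can be sketched as follows. This is a minimal illustration, assuming a square grayscale image stored as a list of lists of numbers; a least-squares fit of the contrast s and brightness o stands in for "solving the differential equation", and the helper names are our own:

```python
# A minimal sketch of PIFS steps (1)-(5) with quadtree partitioning.

def block(img, x, y, size):
    """Extract a size x size sub-image with top-left corner (x, y)."""
    return [row[x:x + size] for row in img[y:y + size]]

def shrink(dom):
    """Average every 2x2 group of domain pixels (domain -> range scale)."""
    n = len(dom) // 2
    return [[(dom[2*i][2*j] + dom[2*i][2*j+1] +
              dom[2*i+1][2*j] + dom[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

def fit(rng, dom):
    """Least-squares contrast s, brightness o, and mean absolute error."""
    d = [p for row in shrink(dom) for p in row]
    g = [p for row in rng for p in row]
    n = len(d)
    sd, sg = sum(d), sum(g)
    sdd = sum(x * x for x in d)
    sdg = sum(x * y for x, y in zip(d, g))
    den = n * sdd - sd * sd
    s = (n * sdg - sd * sg) / den if den else 0.0
    o = (sg - s * sd) / n
    err = sum(abs(s * x + o - y) for x, y in zip(d, g)) / n
    return s, o, err

def encode_range(img, e, r, size, x, y, code):
    rng = block(img, x, y, size)
    best = None
    for dy in range(0, len(img) - 2 * size + 1, size):
        for dx in range(0, len(img[0]) - 2 * size + 1, size):
            s, o, err = fit(rng, block(img, dx, dy, 2 * size))
            if best is None or err < best[0]:
                best = (err, dx, dy, s, o)
    if best[0] <= e or size <= r:            # steps (3) and (5)
        code.append((x, y, size) + best[1:])
    else:                                     # step (4): quadtree split
        h = size // 2
        for oy in (0, h):
            for ox in (0, h):
                encode_range(img, e, r, h, x + ox, y + oy, code)

def encode(img, e=2.0, r=4, range_size=32):
    """Step (2): partition the image into ranges, then encode each one."""
    code = []
    for y in range(0, len(img), range_size):
        for x in range(0, len(img[0]), range_size):
            encode_range(img, e, r, range_size, x, y, code)
    return code
```

Each emitted tuple (x, y, size, dx, dy, s, o) plays the role of one transformation function $w_i$.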
Using the concept of quadtree partitioning, PIFS can effectively find the transformation functions for image compression. However, it is a sequential approach that solves the differential equation for each transformation function. In this paper, we propose a neural network approach that can generate a compressed image of similar quality and is very well suited to parallel processing.
3. Applying neural networks to fractal image compression
We propose the use of two different neural network models to implement fractal image compression and decompression. The architectures of these two models are similar except for the transformation functions, as illustrated in Fig. 5. Each pixel of an image is processed by a neuron, and the gray level of the pixel is represented by the state of the neuron. An image is duplicated, creating two images; each one is divided into many sub-images, called domains and ranges. Each pixel in a domain corresponds to an input neuron, and each pixel in a range corresponds to an output neuron. Four input neurons are connected to each output neuron; thus, each output neuron $j$ is connected to four input neurons $i$, $i+1$, $i+2$ and $i+3$. The output value $z'_j$ of
Fig. 5. The architecture of the proposed neural network for implementing fractal image compression.
neuron $j$ is determined by the values $z_i$, $z_{i+1}$, $z_{i+2}$, $z_{i+3}$, the corresponding weights $w_{j,i}$, $w_{j,i+1}$, $w_{j,i+2}$, $w_{j,i+3}$ and the threshold $\theta_j$.
Two different activation functions, the linear model and the nonlinear model, of the neurons are defined in Eqs. (4) and (5), respectively:
$$z'_j=O_j\Big(\sum_{k=i}^{i+3}w_{jk}\,z_k-\theta_j\Big)\quad\text{linear model}, \tag{4}$$

$$z'_j=O_j\Big(\frac{1}{B}\sum_{k=i}^{i+3}w_{jk}\,z_k-\theta_j\Big)\quad\text{nonlinear model}, \tag{5}$$
where $B$ is the maximum value of the gray levels (e.g., $B$ is assigned to 255 in this paper).
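The two neuron rules of Eqs. (4) and (5) can be sketched as follows. The activation functions O_lin and O_non below are simplified placeholders for the $O_j$ defined later (Eqs. (6) and (10)-(14)):

```python
# The two neuron rules of Eqs. (4) and (5), with B = 255. O_lin and
# O_non are simplified stand-ins for the paper's activation functions.
import math

B = 255.0

def O_lin(x):
    """Clamp-style activation: pass values in the valid gray-level range."""
    return x if 0.0 <= x <= B else 0.0

def O_non(x):
    """Sigmoid squashed back to gray levels (placeholder for DeNor)."""
    return B / (1.0 + math.exp(-x))

def output_linear(z, w, theta):
    """Eq. (4): z'_j = O_j(sum_k w_jk * z_k - theta_j)."""
    return O_lin(sum(wi * zi for wi, zi in zip(w, z)) - theta)

def output_nonlinear(z, w, theta):
    """Eq. (5): z'_j = O_j((1/B) * sum_k w_jk * z_k - theta_j)."""
    return O_non(sum(wi * zi for wi, zi in zip(w, z)) / B - theta)
```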
The learning procedure in the neural network approach is based on quadtree partitioning [9,26], which is also used in PIFS. The detailed steps are as follows:
(1) Set a threshold value e for the error and a minimum size r for the ranges.
(2) Divide the image into many non-overlapping sub-images, with 32×32 set as the initial size of the ranges.
(3) For each range i, find a domain j (4 times the size of i) where the error between i and j is less than or equal to the threshold value e. Then, determine a transformation function for the range i. Update the weights $w_{jk}$, ∀ pixels ∈ range i and ∀ pixels ∈ domain j, of the neural network using the delta learning rule to tune the contrast $s_i$ and brightness $o_i$ in the transformation function so that the error is reduced.
(4) If the range i cannot provide a similar domain j (i.e., the error between i and j is greater than the threshold value e), then divide the range i into 4 equal-sized sub-images (ranges), where the size of each one is greater than or equal to r. Go to step (3) to find the transformation function for each divided range.
(5) If the size of range i is equal to r, and if no similar domain can be found, then the range i is not divided further, and a domain j with the least error is selected. In this case, the transformation function $w_i$ is also determined, even if the error between i and j is greater than the threshold value e.
Using the delta learning rule of neural networks, a set of transformation functions can be obtained for each range, and they provide a high PSNR value for image compression.
3.1. The linear model
Comparing Eqs. (3) and (4), the value $z_k$ in Eq. (4) can be viewed as the gray level $z$ of a pixel in Eq. (3). The weight $w_{jk}$ and the threshold $\theta_j$ in Eq. (4) can also be viewed as one quarter of the contrast $s_i$ and the negative value of the brightness $o_i$ in Eq. (3), respectively. Then, the linear neural network approach (Eq. (4)) can be used to
Fig. 6. The activation function of a neuron in the linear model.
perform the computation of PIFS (Eq. (3)), and the activation function $O_j$ is defined in the following equation:

$$O_j(x)=\begin{cases}x&\text{when }0\le x\le 255,\\ 0&\text{otherwise}.\end{cases} \tag{6}$$
Fig. 6 shows a graphic representation of Eq. (6).
According to the output values of the neurons and the original gray levels of the pixels, we can compute the difference $\delta_j$ between them for each neuron $j$ using the following equation:
$$\delta_j=z_j-z'_j, \tag{7}$$

where $z'_j$ is the output value of the activation function and $z_j$ is the original gray level of pixel $j$. Then, the weight update $\Delta w_{jk}$ between the output neuron $j$ and the four corresponding input neurons $k$, $k=i,\ldots,i+3$, can be derived by Eq. (8):

$$\Delta w_{jk}=\eta\times\delta_j/z_k,\quad k=i,\ldots,i+3, \tag{8}$$

where $\eta$ is a learning-rate parameter, which can be used to speed up the convergence rate and find a better solution. The update of the threshold $\theta_j$ is then defined as

$$\Delta\theta_j=-\eta\times\delta_j. \tag{9}$$
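One training pass of Eqs. (7)-(9) can be sketched as follows; the learning rate and all sample values are illustrative:

```python
# One update of the linear model, Eqs. (7)-(9): the output error delta
# is apportioned over the four incoming weights (divided by each input
# z_k) and the threshold. All numeric values are illustrative.

def linear_update(z_in, w, theta, z_target, eta=0.1):
    out = sum(wi * zi for wi, zi in zip(w, z_in)) - theta   # pre-activation output
    delta = z_target - out                                  # Eq. (7)
    w = [wi + eta * delta / zk for wi, zk in zip(w, z_in)]  # Eq. (8)
    theta = theta - eta * delta                             # Eq. (9)
    return w, theta
```

Repeating the update shrinks the error geometrically: with four inputs and eta = 0.1, each pass moves the output half of the remaining distance toward the target gray level.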
The learning procedure is repeated until the output values of the proposed neural network are acceptable.

3.2. The nonlinear model
In the nonlinear model, the activation function $O_j$ is defined as in Eq. (10), which is a composition function:

$$O_j(x)=\mathrm{DeNor}(\mathrm{Sigmoid}(x)), \tag{10}$$

where

$$\mathrm{Sigmoid}(x)=\frac{1}{1+e^{-x}}, \tag{11}$$

$$\mathrm{DeNor}(x)=K\times(x-\beta). \tag{12}$$
Fig. 7. The activation function of a neuron in the nonlinear model.
The values of $K$ and $\beta$ in Eq. (12) are constants and are defined as

$$K=\frac{B}{\mathrm{Sigmoid}(Upper)-\mathrm{Sigmoid}(Lower)}, \tag{13}$$

$$\beta=\mathrm{Sigmoid}(Lower). \tag{14}$$
We define an input range of width $2R$ in order to prevent the output of Eq. (11) from being trapped in saturation states. Then, the difference between the maximum value and the minimum value of $x$ is $2R$ (i.e., $Upper-Lower=2R$). Therefore, the output range of the sigmoid function is equal to $[\mathrm{Sigmoid}(Lower),\mathrm{Sigmoid}(Upper)]$. Fig. 7 shows these relationships.
For each neuron $j$, we define the difference between the output of the proposed neural network and the original gray level of the pixel in the following equation:

$$\delta_j=\frac{z'_j(B-z'_j)(z_j-z'_j)}{B^3}. \tag{15}$$

In Eq. (15), the values of $z'_j$, $B$ and $z_j$ are all in the range $[0, 255]$; the division by $B^3$ keeps $\delta_j$ in the range $[-1, 1]$. Similarly, the weight update $\Delta w_{jk}$ can be derived by Eq. (16):

$$\Delta w_{jk}=\frac{\eta\times\delta_j\times z_k}{B},\quad k=i,\ldots,i+3. \tag{16}$$
The steps for learning the weights are repeated until the output values of the neurons are acceptable.
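A sketch of the nonlinear neuron (Eqs. (10)-(14)) and one weight update (Eqs. (15)-(16)) follows; the choice of R and eta here is illustrative, not the paper's tuned values:

```python
# The nonlinear neuron of Eqs. (10)-(14) and one weight update using
# Eqs. (15)-(16). R and eta are illustrative choices.
import math

B = 255.0
R = 0.2                                    # input range is [-R, R]
UPPER, LOWER = R, -R

def sigmoid(x):                            # Eq. (11)
    return 1.0 / (1.0 + math.exp(-x))

K = B / (sigmoid(UPPER) - sigmoid(LOWER))  # Eq. (13)
BETA = sigmoid(LOWER)                      # Eq. (14)

def O(x):
    """Eq. (10): sigmoid followed by de-normalization to gray levels."""
    return K * (sigmoid(x) - BETA)         # Eq. (12)

def forward(z_in, w, theta):
    """Eq. (5): nonlinear neuron output."""
    return O(sum(wi * zi for wi, zi in zip(w, z_in)) / B - theta)

def nonlinear_update(z_in, w, theta, z_target, eta=0.25):
    z_out = forward(z_in, w, theta)
    delta = z_out * (B - z_out) * (z_target - z_out) / B ** 3   # Eq. (15)
    w = [wi + eta * delta * zk / B for wi, zk in zip(w, z_in)]  # Eq. (16)
    return w, theta, z_out
```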
4. Applying the neural network to fractal image decompression
The architecture of our neural network for performing image decompression is shown in Fig. 8; it is similar to Fig. 5 except that the outputs of the neurons in the output layer feed back to the corresponding neurons in the input layer. The trained weights and thresholds (i.e., the fractal code of PIFS) were determined during fractal image compression.
Fig. 8. The architecture of the neural network for fractal image decompression. The output of each neuron in the output layer feeds back to the corresponding neuron with the same position index in the input layer at the next iteration.
The output state $z'^{\,t}_j$ for image decompression is defined in Eq. (17):

$$z'^{\,t}_j=\begin{cases}O_j\Big(\displaystyle\sum_{k=i}^{i+3}w_{jk}\,z^t_k-\theta_j\Big)&O_j\ \text{defined in the linear model},\\[8pt] O_j\Big(\dfrac{1}{B}\displaystyle\sum_{k=i}^{i+3}w_{jk}\,z^t_k-\theta_j\Big)&O_j\ \text{defined in the nonlinear model}.\end{cases} \tag{17}$$

At the next time $t+1$, the state $z^{t+1}_j$ of neuron $j$ in the input layer is obtained from the output value $z'^{\,t}_j$ of neuron $j$ in the output layer. This is defined in Eq. (18):

$$z^{t+1}_j=z'^{\,t}_j. \tag{18}$$
Then, the states of the neurons are changed repeatedly until the system reaches a stable state.
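The feedback iteration of Eqs. (17) and (18) can be sketched as follows, using the linear model for brevity; the fractal-code tuple layout (per output pixel: four input indices, four weights, and a threshold) is an assumption made for this illustration:

```python
# Decompression as repeated application of Eq. (17) with the feedback
# of Eq. (18): output gray levels become the next iteration's inputs.

def decompress(code, n_pixels, iterations=15, B=255.0):
    """code: list of (j, (k0, k1, k2, k3), (w0, w1, w2, w3), theta)."""
    z = [B / 2.0] * n_pixels                      # arbitrary initial image
    for _ in range(iterations):
        z_next = list(z)
        for j, ks, ws, theta in code:
            s = sum(w * z[k] for w, k in zip(ws, ks)) - theta  # Eq. (17)
            z_next[j] = min(max(s, 0.0), B)       # clamp as in Eq. (6)
        z = z_next                                # Eq. (18): feedback
    return z
```

Because each transformation is contractive in the gray-level dimension, the state converges to the same fixed point regardless of the initial image, mirroring the fern-to-triangle example of Fig. 2.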
5. Performance evaluations
Some images were compressed and decompressed using our neural network approach. The threshold value e for the error between two sub-images was set to 2. To find the similarity characteristics in the sub-images, the sizes of the sub-images range from 64×64 down to 8×8 in the domain and from 32×32 down to 4×4 in the range (i.e., r = 4×4). The maximum complexity of the neural network was (64×64) (input layer) × (32×32) (output layer). The value of PSNR was calculated and used to evaluate the system performance. PSNR is defined as
$$\mathrm{PSNR}=20\log_{10}\frac{B}{\mathrm{rms}}, \tag{19}$$
where $B$ is the maximum value of the gray level (set to 255 in this paper), and rms is the root mean square of the difference between the original image and the decompressed image; rms is defined as
$$\mathrm{rms}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(z_i-z'_i)^2}, \tag{20}$$
where $z_i$ is the gray level of pixel $i$ in the original image, $z'_i$ is the gray level of pixel $i$ in the decompressed image and $N$ is the total number of pixels in the image. The larger the PSNR, the better the quality of the image.

5.1. The linear model
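The PSNR metric of Eqs. (19) and (20), used throughout this section, translates directly into code (flat sequences of gray levels assumed):

```python
# PSNR between the original and decompressed images, Eqs. (19)-(20).
import math

def psnr(original, decoded, B=255.0):
    n = len(original)
    rms = math.sqrt(sum((a - b) ** 2
                        for a, b in zip(original, decoded)) / n)  # Eq. (20)
    return 20.0 * math.log10(B / rms)                             # Eq. (19)
```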
The attributes of the Lena image are shown in Table 1. Different learning rates were selected to evaluate the quality of the compressed image using the linear model. The experimental results are shown in Fig. 9 and Table 2.
According to the results shown in Fig. 9 and Table 2, we obtained better quality and smaller compressed images when the learning rate of the neural network was 0.1.

5.2. The nonlinear model
The values of the input range and learning rate affect the quality of images during image decompression in the nonlinear model. Four different input ranges (0.1, 0.2, 0.5,
Table 1
The attributes of the Lena image

Fig. 9. The relationship between the learning rate and PSNR in the linear model.
Table 2
The decompressed images, sizes, and values of PSNR under different learning rates for the linear model. (1) The decompressed image. (2) The value of the learning rate. (3) The size of the image after encoding (compressing) (unit: byte). (4) The value of PSNR.
0.9) and various learning rates were selected for the simulations, and the results are shown in Fig. 10.
Fig. 10 shows that better image quality was obtained by selecting learning rates between 0.2 and 0.3, and that the PSNRs of the images were less sensitive to the learning rate when the input ranges were smaller (i.e., 0.1 or 0.2).
5.3. Comparison of the linear model and nonlinear model
Basically, the linear model is similar to the IFS (or PIFS) mapping (shown in Eq. (3)). For each transformation function, the linear model finds the contrast $s_i$ and
Fig. 10. The relationship between the learning rate and PSNR under different input ranges for the nonlinear model.
Table 3
The computation time (in seconds) for image compression and decompression of the Lena image (Table 1) using the linear model, the nonlinear model and the traditional method (executed on a Pentium II-166 PC)

                Linear method    Nonlinear method    Traditional method
Compression     579              3542                36
Decompression   1.81             1.92                1.26
brightness $o_i$ by updating the weights using the linear gradient descent method, which is less flexible and robust than the nonlinear gradient descent method. In addition, the linear model provides lower PSNR values relative to the nonlinear model. Figs. 9 and 10 show that the nonlinear model generated higher PSNR values (i.e., better image quality) due to its flexibility and robustness. However, the linear model is simpler than the nonlinear model and required less computation time (as shown in Table 3). Table 3 shows that the time needed for image compression by our method was much greater than that needed by the traditional method, but that the time needed for image decompression by the different methods is similar. However, the traditional method finds the minimum error by applying the differential equation to the transformation function for the whole sub-image, whereas the neural network approach updates the weights to find a good transformation function. Therefore, the traditional method performs its computations in a sequential mode, but the neural network approach does so in parallel. As a result, for a 32×32 sub-image in a given range, the computation can be sped up 32×32 times using the neural network approach if there are enough computing elements in the parallel processing system. In this way, the compression time can be greatly reduced using our method. Six different images were compressed and decompressed using the two proposed approaches and the traditional PIFS method [15]. The results are shown in Table 4. The sizes of the compressed images using the proposed nonlinear model approach those obtained using the traditional PIFS method. This verifies that our methods are useful for image compression and decompression.
Table 4
Comparison of three fractal image compression methods
6. Conclusion
In image compression and decompression, fractal theory can achieve a high compression ratio and a low loss ratio. However, it is limited by the tremendous number of computations required to determine the fractal code needed to perform image decompression. In this paper, we have proposed neural network approaches that apply PIFS to image compression and decompression. Experimental results show that our neural network approaches can obtain high-quality decompressed images, and that the compression ratio is as good as that obtained by the traditional PIFS method. In addition, the proposed neural network approaches can be operated in parallel. As a result, image compression and decompression can be performed quickly on a parallel computing system. Our methods can be very useful for image compression and decompression using parallel processing techniques.
Acknowledgements
This research was supported by the National Science Council of Taiwan, ROC, under grant NSC 88-2213-E-024-001.
References
[1] M.F. Barnsley, Fractals Everywhere, Academic Press, New York, 1992.
[2] M.F. Barnsley, A. Jacquin, L. Reuter, A.D. Sloan, Harnessing chaos for image synthesis, Comput. Graphics 22 (4) (1988) 131-141.
[3] M.F. Barnsley, A.D. Sloan, A better way to compress images, Byte 13 (1) (1988) 215-223.
[4] S.F. Chen, Fractal-based image analysis, Master Thesis, Institute of Information Science, National Tsing Hua University, Taiwan, 1991.
[5] A.J. Crilly, R.A. Earnshaw, H. Jones, Fractals and Chaos, Springer, London, 1991.
[6] G.M. Davis, A wavelet-based analysis of fractal image compression, IEEE Trans. Image Process. 7 (3) (1998) 141-154.
[7] S. Demko, L. Hodges, B. Naylor, Construction of fractal objects with iterated function systems, Comput. Graphics 19 (3) (1985) 271-278.
[8] K.J. Falconer, The Geometry of Fractal Sets, Cambridge University Press, Cambridge, UK, 1985.
[9] Y. Fisher, Fractal Image Compression, Springer, New York, 1994.
[10] Z.H. Fu, Chaos and fractals with application on image compression, Master Thesis, Institute of Electrical Engineering, National Taiwan University, Taiwan, 1992.
[11] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1993.
[12] J.J. Hopfield, D.W. Tank, Neural computation of decisions in optimization problems, Biol. Cybernet. 52 (1985) 141-152.
[13] Y.H. Huang, Neural network for image compression and seismic signal processing, Master Thesis, Institute of Information Science, National Chiao Tung University, Taiwan, 1992.
[14] A.E. Jacquin, Image coding based on a fractal theory of iterated contractive image transformations, IEEE Trans. Image Process. 1 (1) (1992) 18-30.
[15] A.E. Jacquin, Fractal image coding: a review, Proc. IEEE 81 (10) (1993) 1451-1465.
[16] C.P. Lai, A computer system for locating sequences of CT liver boundary using neural networks and fractal geometry, Master Thesis, Institute of Electrical Engineering, National Cheng Kung University, Taiwan, 1994.
[17] S.J. Lee, P.Y. Wu, K.T. Sun, A study on fractal image compression using neural network, Proceedings of the 1997 National Computer Symposium (NCS'97), Tunghai University, Taiwan, 1997, pp. B-151-156.
[18] S.J. Lee, P.Y. Wu, K.T. Sun, Fractal image compression using neural network, Proceedings of the 1998 IEEE International Joint Conference on Neural Networks (IJCNN'98), Anchorage, Alaska, 1998, pp. 613-618.
[19] Q.Y. Lin, Fractal and its application to image compression, Master Thesis, Institute of Electrical Engineering, National Taiwan University, Taiwan, 1993.
[20] R.P. Lippmann, An introduction to computing with neural nets, IEEE Acoust. Speech Signal Process. Mag. 4 (1987) 4-23.
[21] B. Mandelbrot, The Fractal Geometry of Nature, Freeman, San Francisco, CA, 1982.
[22] M. Mougeot, R. Azencott, B. Angeniol, Image compression with backpropagation using different cost functions, Neural Networks 4 (4) (1991) 467-476.
[23] F.G.B. De Natale, G.S. Desoli, D.D. Giusto, G. Vernazza, Polynomial approximation and vector quantization: a region-based integration, IEEE Trans. Commun. 43 (2-4) (1998) 198-206.
[24] M. Nelson, The Data Compression Book, 2nd Edition, M&T Books, New York, 1996.
[25] R. Rojas, Neural Networks: A Systematic Introduction, Springer, New York, NY, 1996.
[26] E. Shusterman, M. Feder, Image compression via improved quadtree decomposition algorithms, IEEE Trans. Image Process. 3 (6) (1994) 207-215.
[27] J. Stark, A neural network to compute the Hutchinson metric in fractal image processing, IEEE Trans. Neural Networks 2 (1) (1991) 156-158.
[28] J. Stark, Iterated function systems as neural networks, Neural Networks 4 (5) (1991) 679-690.
[29] K.T. Sun, H.C. Fu, A neural network implementation for the traffic control problem on crossbar switch networks, Int. J. Neural Systems 3 (2) (1992) 209-218.
[30] K.T. Sun, H.C. Fu, A hybrid neural network model for solving optimization problems, IEEE Trans. Comput. 42 (2) (1993) 218-227.
[31] D.W. Tank, J.J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit, IEEE Trans. Circuits Systems 33 (5) (1986) 533-541.
[32] L. Zhang, B. Zhang, G. Chen, Generating and coding of fractal graphs by neural network and mathematical morphology methods, IEEE Trans. Neural Networks 7 (2) (1996) 400-407.
[33] G. Zorpette, Fractals: not just another pretty picture, IEEE Spectrum 25 (10) (1988) 29-31.
K.T. Sun received the B.S. degree in information science from Tunghai University in 1985 and the M.S. and Ph.D. degrees in computer science and information engineering from National Chiao-Tung University in 1987 and 1992, respectively. From 1992 to 1996, he was a Research Associate at the Chung Shan Institute of Science and Technology. Since 1996, he has been involved in computer science and information education at National Tainan Teachers College, Taiwan, ROC, where he is currently an Associate Professor and the Director of the Department of Computer Science and Information Education. His current research interests are neural networks, genetic algorithms, fuzzy set theory, computer-assisted instruction/learning design, and educational measurement.
Dr. Sun won the Dragon Thesis Award (Ph.D.) granted by the Acer Co. in 1992.
S.J. Lee received the M.S. degree in computer science and information education from National Tainan Teachers College, Tainan, Taiwan, ROC, in 1998. Since 1998, he has been a primary school teacher in Taipei. His current research interests are neural networks, fractal image compression, and education research.
P.Y. Wu received the B.S. degree in mathematics from National Kaohsiung Normal University in 1974, the M.S. degree in mathematics from National Tsing Hua University in 1976, the M.S. degree in electrical engineering from National Cheng Kung University in 1986 and the Ph.D. degree in computer science and information engineering from National Taiwan University in 1994. Since 1996, he has been involved in mathematics education at National Tainan Teachers College, Taiwan, ROC, where he is currently an Associate Professor in the Department of Mathematics Education. His current research interests are fractal imaging, parallel processing, artificial intelligence (neural networks, genetic algorithms, fuzzy set theory, etc.) and computer-assisted instruction/learning design.