every ten seconds and then inserted into the database in a single centralized operation. This ensures the integrity of the data, reduces the additional burden on the database, and avoids continuously occupying network bandwidth, so that the data acquisition and synchronization systems work with high efficiency. The process is shown in Figure 6.
 
 
Figure 6: Storage optimization diagram 
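For illustration, the following minimal T-SQL sketch shows the idea of such a ten-second centralized insert: rows accumulated in memory are written to the database in one batched statement rather than row by row. The table and column names (dbo.AcquisitionData, SampleTime, TagName, TagValue) are assumptions made for this sketch, not the actual acquisition schema.

-- Sketch: rows buffered for ten seconds are inserted in one batch,
-- inside a single transaction, instead of one round trip per row.
-- Table and column names are illustrative assumptions.
BEGIN TRANSACTION;
INSERT INTO dbo.AcquisitionData (SampleTime, TagName, TagValue)
VALUES ('2024-01-01 08:00:00', N'Temperature', 36.5),
       ('2024-01-01 08:00:01', N'Pressure',     1.02),
       ('2024-01-01 08:00:02', N'FlowRate',     5.80);
COMMIT TRANSACTION;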
6  SYNCHRONOUS MAINTENANCE
With the help of SQL Server data synchronization technology, together with the strategies and measures we have adopted to ensure synchronization stability as a whole, we can guarantee the consistency of enterprise data and cloud platform data as well as the robustness of the whole system. However, as acquisition parameters are added or removed, we inevitably need to add or remove synchronized tables or to change a table's structure. In the past, this situation could only be handled by withdrawing the corresponding subscription and publication, that is, re-establishing the synchronization after the change was completed, which results in a data difference between the publisher and the subscriber. The missing data can be filled in after the synchronization is rebuilt; this process depends on the Visual Studio database comparison tool and an SSIS solution. The attendant drawback is that the larger the amount of data, the more time the fill takes and the more hardware and bandwidth it occupies, but it at least avoids the loss of data. In this process we only fill in the data missed during the publish-subscribe interruption, so we do not use a snapshot to reinitialize the subscription, which would delete the data that has already been synchronized to the cloud platform; this is what we want to avoid.
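The gap-fill step itself is performed with the comparison tool and SSIS; as a rough hand-written equivalent of the same idea, the following sketch assumes a hypothetical linked server named CLOUD pointing at the subscriber database and the same illustrative table as above, and copies only the rows missing on the subscriber.

-- Sketch of the gap-fill idea. The paper uses the Visual Studio
-- database comparison tool and SSIS; this equivalent assumes a
-- hypothetical linked server CLOUD and illustrative table names.
INSERT INTO CLOUD.CloudDB.dbo.AcquisitionData (SampleTime, TagName, TagValue)
SELECT p.SampleTime, p.TagName, p.TagValue
FROM dbo.AcquisitionData AS p
WHERE NOT EXISTS (SELECT 1
                  FROM CLOUD.CloudDB.dbo.AcquisitionData AS s
                  WHERE s.SampleTime = p.SampleTime
                    AND s.TagName    = p.TagName);  -- fill only the missing rows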
Of course, the above approach is a viable way to keep data synchronization running smoothly; however, we often hope to add, delete, or change the structure of a table without deleting the existing subscription. After deeper exploration and learning, we found that this can be achieved by changing the table structure directly on the publishing side when a synchronized table needs to be modified. Transactional replication usually begins with a snapshot of the published database objects and data; after the initial snapshot has been created, subsequent changes to the published data or schema are passed to the subscriber, usually in near real time.
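As a concrete illustration, a schema change can be issued directly against the published table; under the assumption that the publication was created with DDL replication enabled (@replicate_ddl = 1, the default for transactional publications on SQL Server 2005 and later), the change flows to subscribers without touching the subscription. Table and column names are again illustrative.

-- Sketch: altering a published table directly on the publisher.
-- With @replicate_ddl = 1 on the publication, the ALTER statement
-- is propagated to subscribers in near real time.
ALTER TABLE dbo.AcquisitionData
    ADD BatchNo int NULL;  -- new column reaches the subscriber automatically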
Data changes are applied to subscribers in the order in which they originate on the publisher and within the same transaction boundaries, so transactional consistency is guaranteed within the publication. Adding or deleting a table in a node of the synchronization process requires executing SQL statements around the procedures sp_addarticle and sp_addmergearticle; it can also be configured in the properties of the synchronization node. The "Articles" option initially displays only the table information already contained in the node; once the "Show only checked articles in the list" option is cleared, all tables with a primary key in the database to which the node belongs are displayed, and checking the new table to be published completes its release. In this way a table (or another object) can be added to a published node without stopping data synchronization; by the same token, a published table can also be removed.
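For reference, the following sketch adds a new table to an existing transactional publication with sp_addarticle (sp_addmergearticle is the merge replication counterpart) and removes it again with sp_droparticle; the publication and table names are assumptions made for this example.

-- Sketch: publish a new table without stopping synchronization.
EXEC sp_addarticle
    @publication   = N'EnterprisePub',        -- existing publication (assumed name)
    @article       = N'NewAcquisitionTable',
    @source_owner  = N'dbo',
    @source_object = N'NewAcquisitionTable',
    @type          = N'logbased';             -- standard transactional article

-- By the same token, a published table can be removed:
EXEC sp_droparticle
    @publication = N'EnterprisePub',
    @article     = N'NewAcquisitionTable';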
7  RESUME FROM BREAKPOINT 
Through the series of measures above, most of the obstacles that may be encountered in the data synchronization process have been cleared up. However, as field data is continuously inserted into the enterprise database, the corresponding data files and log files of the database grow larger and larger. We can take measures such as file partitioning to improve the database access rate, but we cannot ignore the hidden danger posed by the resulting drop in data insertion efficiency and success rate. This paper therefore proposes creating a cache database on the data acquisition computer to store the data that fail to be inserted into the enterprise database, and then inserting these temporary data into the enterprise database when it is under low pressure. The table used to cache the data, table_SendBuffer, mainly comprises two columns: one is a time column, and the other holds the SQL statement of the failed insertion, and make the
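A minimal sketch of this cache table and of the replay step follows; the column names (RecordTime, SqlText) are assumptions made here, since the text only specifies a time column and a column holding the failed statement.

-- Sketch of the cache table described above.
CREATE TABLE dbo.table_SendBuffer (
    RecordTime datetime      NOT NULL,   -- when the insertion failed
    SqlText    nvarchar(max) NOT NULL    -- the failed INSERT statement
);

-- Hypothetical replay step, run when the enterprise database is under
-- low pressure: re-execute each buffered statement in time order, then
-- clear the buffer (per-statement error handling omitted for brevity).
DECLARE @sql nvarchar(max);
DECLARE buf CURSOR LOCAL FAST_FORWARD FOR
    SELECT SqlText FROM dbo.table_SendBuffer ORDER BY RecordTime;
OPEN buf;
FETCH NEXT FROM buf INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;             -- retry the original insert
    FETCH NEXT FROM buf INTO @sql;
END;
CLOSE buf;
DEALLOCATE buf;
TRUNCATE TABLE dbo.table_SendBuffer;     -- buffer drained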