tag:blogger.com,1999:blog-22930410360100046222024-03-19T04:50:09.245+02:00Cloud ComputingTechnical stuff mainly about computing and not only Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.comBlogger51125tag:blogger.com,1999:blog-2293041036010004622.post-58731222479018776202017-08-03T12:37:00.000+03:002017-08-14T02:17:35.028+03:00Use mongoose or native Nodejs MongoDB driver - that's the question<i>To mongoose or not mongoose</i> - <a href="https://en.wikipedia.org/wiki/To_be,_or_not_to_be" target="_blank">that's the question</a> that comes up time and time again among Node/MEAN stack coders.<br />
Usually I don't take sides in this kind of argument, because the discussion can become really heated and quite inflammatory, leading to ad hominem attacks; but sometimes I am asked and have to provide my professional opinion. So when I was recently asked this Shakespearean question yet again, I answered it with a short Shakespearean answer: "thou shalt avoid <a href="http://mongoosejs.com/" target="_blank">mongoose</a>".<br />
Surprisingly, the discussion didn't go any further; maybe I was convincing, or perhaps my opinion was silently rejected.<br />
Anyway, I feel I must give a more detailed (though still not complete) answer; I owe this to the <a href="https://www.mongodb.com/" target="_blank">mongoDB</a> ecosystem and to the mongoose maintainers, who have done a fantastic job writing and maintaining the mongoose code base.<br />
The complete answer is that it all depends on a number of factors:<br />
<br />
<ul>
<li><b>learning curve:</b><br />mongoose can save you from the steep learning curve of the native driver; it will also protect you from critical design mistakes such as opening/closing a mongoDB connection on each and every request served instead of reusing a connection from a connection pool - a common bad practice (see the first sketch after this list). </li>
<li><b>use case:</b><br />things are different if you are developing a small proof-of-concept app, where taking the shortcuts offered by mongoose can be beneficial; but if you are working on a large-scale production system, mongoose can be an obstacle you have to fight all the time in order to improve efficiency.</li>
<li><b>point in time:</b><br />there were times when mongoose data validation was really helpful and a real plus, but not so much anymore, as mongoDB now <a href="https://docs.mongodb.com/manual/core/document-validation/" target="_blank">supports validation natively</a> (see the second sketch after this list). </li>
<li><b>support/features:</b><br />As mongoose is not directly supported by mongoDB, you can expect it to be a step behind with regard to new features, bug fixes, etc.</li>
<li><b>ODM:</b><br />If you feel you absolutely need the ODM layer provided by mongoose, then go for it.</li>
</ul>
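<br />To make the connection-pooling point above concrete, here is a minimal sketch with the native Node.js driver - a sketch only, assuming a 2.x driver and a local mongod; the db/collection names are made up for illustration. The idea: connect once at startup and let every request handler reuse the pooled connection.<br />
<pre>
// minimal sketch - native Node.js driver (2.x), illustrative names
var MongoClient = require('mongodb').MongoClient;

var db = null; // one shared handle, created once at startup

// Connect once; the driver keeps an internal connection pool
// (pool size set via the standard maxPoolSize connection-string option).
MongoClient.connect('mongodb://localhost:27017/myapp?maxPoolSize=10',
  function (err, database) {
    if (err) throw err;
    db = database;
  });

// Request handlers reuse the pooled connection - never reconnect per request.
function handleRequest(req, res) {
  db.collection('users').findOne({ email: req.query.email },
    function (err, doc) {
      res.end(JSON.stringify(doc));
    });
}
</pre>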
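And for the validation point: a sketch of mongoDB's native document validation in the mongo shell (available since 3.2; the collection and rules here are illustrative), which covers many of the cases mongoose schemas used to be needed for.<br />
<pre>
// native validation sketch - mongo shell, MongoDB 3.2+
db.createCollection("users", {
  validator: {
    $and: [
      { email: { $exists: true } },   // required field
      { name:  { $type: "string" } }  // type check
    ]
  },
  validationAction: "error"  // reject documents that fail validation
})

// this insert is rejected by the server itself - no ODM involved
db.users.insert({ name: 42 })
</pre>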
<div>
<b>TL;DR</b>: mongoose is good at making sensible design decisions for you that are good enough most of the time, and certainly much better than what a newbie coder could possibly make; but if you want to go the extra mile in efficiency/features/support etc., you had better use the native driver.</div>
<b>PS</b>: Let me quote:<br />
<br />
<ul>
<li>the guys who wrapped a thin layer around the official mongo js driver:<br /><a href="https://www.npmjs.com/package/document-ts" target="_blank">"Mongoose and many other ODMs are ridden with bugs (no offense) when you push them beyond the basics"</a></li>
<li><a href="https://twitter.com/matteocollina" target="_blank">@matteocollina</a> who managed to <a href="https://twitter.com/matteocollina/status/894488535595614208" target="_blank">summarize</a> this argument in < 140 chars.</li>
</ul>
<br /><br />
<br />Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-88329488762105228482016-03-13T20:20:00.000+02:002016-03-17T11:30:33.370+02:00MongoDB v 3.2 + silently breaks backward compatibility affecting behavior of capped collections<div style="text-align: justify;">
Capped collections are collections with some very special characteristics; although they are used internally by MongoDB for replication (the oplog), they are also offered as a feature for general use, and their limitations and advantages are well documented.<br />
Because of those advantages developers use capped collections widely for many purposes, and this usage is not a hack or the exploitation of an undocumented feature, since capped collections are a well documented and well known feature of mongoDB.<br />I have seen capped collections used as:<br />
<br />
<ol>
<li>FIFO buffers</li>
<li>Trigger-like mechanisms<br />(as a matter of fact a <a href="https://www.meteor.com/" target="_blank">well known framework is based on this feature</a>) </li>
<li>PubSub architectures</li>
<li>Message queues<br /><i>Yes, you can implement a message queue with mongoDB; although it will not be as fast as <a href="http://zeromq.org/" target="_blank">ZeroMQ</a> or some other tools, it has some attractive benefits such as persistence, built-in redundancy, reduced stack complexity, etc. (<a href="http://miloncdn.appspot.com/docs/mongoUtils/mongoUtils.pubsub.html" target="_blank">My implementation of a PubSub/message queue</a>) </i></li>
</ol>
Apart from the fact that you can't delete a document in a capped collection, the other major limitation used to be that a document couldn't grow in size as a result of an update. This is well understood by developers and can be addressed by various techniques, such as:<br />
- pre-filling field(s) with dummy values<br />
- un-setting a field during an update operation to make space for new field(s), etc.<br />
<br />
Now, all of a sudden, from version 3.2.0-rc0 onwards MongoDB has decided to introduce one more limitation: the document <b>can't shrink in size</b> as a result of an update.<br />
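A quick mongo shell sketch of what changed (a sketch only; collection and field names are illustrative):<br />
<pre>
// mongo shell sketch - illustrative names
db.createCollection("events", { capped: true, size: 4096 })

// the classic trick: pre-fill a pad field so the document can be
// reshaped later without growing
db.events.insert({ ts: new Date(), msg: "hello", pad: "XXXXXXXXXX" })

// pre-3.2 this worked: shrink the document by dropping the pad,
// making room for new fields in later updates
db.events.update({ msg: "hello" }, { $unset: { pad: "" } })
// on 3.2+ the same update fails, with an error along the lines of
// "Cannot change the size of a document in a capped collection"
</pre>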
This <b>breaks the backward compatibility</b> of a well publicized and widely used feature, and although I understand the technical reasons behind this decision, as described <a href="https://jira.mongodb.org/browse/SERVER-20529" target="_blank">here</a>, I still can't accept that such a decision can be taken so lightheartedly, without consulting the developers/ecosystem. If you follow the discussion in the above Jira ticket, you get the impression that the only thing they really cared about was how this could possibly affect their internal use of the feature, and since they found no side effect they went ahead with it.<br />
To add insult to injury, the breaking change is not yet published in the <a href="https://docs.mongodb.org/manual/core/capped-collections/" target="_blank">manual</a> as of today, nor can I find any reference to it in the <a href="https://docs.mongodb.org/manual/release-notes/3.2-changelog/" target="_blank">change logs</a>.<br />
So it is left to developers to find this out the hard way, when their code breaks after an update.<br />
I filed a <a href="https://jira.mongodb.org/browse/DOCS-7407" target="_blank">ticket</a>, then realized a <a href="https://jira.mongodb.org/browse/DOCS-7373" target="_blank">related ticket</a> already existed, but no action has been taken yet.<br />
Of course those tickets deal with the documentation only, since the code-breaking change is there and we have to live with it.<br />
Too bad many developers didn't realize this change was planned; that's why a compatibility-breaking change should get as much publicity as possible, so that developers don't get caught off guard, since it is impossible for a developer to follow each and every ticket in Jira.<br />
<br />
This is a sad story that I hope will not be repeated in the future, since some of the things that made <a href="https://www.mongodb.com/" target="_blank">mongoDB</a> so successful, IMO, are:<br />
a) developers are not caught by surprise<br />
b) the high quality of the manuals<br />
<br />
<br />
<br />
<br /></div>
Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-48379393641896696752015-12-09T02:55:00.001+02:002015-12-09T03:03:54.663+02:00MongoDB 3.2: Now Powered by PostgreSQL ?<span style="text-align: justify;">Having said that <a href="http://gaengine.blogspot.gr/2015/09/mongodb-is-listening-to.html" target="_blank">MongoDB is listening to developers/ecosystem</a> some months ago, today I read an article, </span><a href="https://www.linkedin.com/pulse/mongodb-32-now-powered-postgresql-john-de-goes" target="_blank">MongoDB 3.2: Now Powered by PostgreSQL</a><span style="text-align: justify;"> by <a href="https://twitter.com/jdegoes" target="_blank">@jdegoes</a>, that contradicts my own experience.</span><br />
<div style="text-align: justify;">
I will <b>not</b> categorize this article as one more <a href="https://twitter.com/nickmilon/status/650090776802660353" target="_blank">mongophobic</a> article that makes headlines from time to time here and there like: </div>
<div style="text-align: justify;">
</div>
<ul>
<li><a href="https://twitter.com/nickmilon/status/650090776802660353" target="_blank">MongoDB has 'architectural' problems and has done nothing to improve</a> </li>
<li><a href="http://thehackernews.com/2015/07/MongoDB-Database-hacking-tool.html" target="_blank"> 600TB MongoDB Database 'accidentally' exposed on the Internet</a></li>
</ul>
Although I understand the writer has a personal interest, as he has invested in mongoDB analytics, I have to admit that he makes a case that looks technically very sound to me: "flattening out the data and using a different database to execute the SQL" is not the way to go for MongoDB BI solutions.<br />
<div>
I also agree with many of his arguments when he describes what is going wrong with mongoDB's ecosystem.<br />
<div>
<br /></div>
</div>
<div>
Still I am not as pessimistic as the writer and I hope that:</div>
<div>
<ul>
<li>a) This connector to BI tools is only a temporary quick-fix solution, and there are better tools coming in the pipeline.</li>
<li>b) MongoDB will listen to his arguments and come up with a revised policy regarding its partners and the ecosystem at large.<br />The story of how <a href="https://www.mongodb.com/blog/post/revisiting-usdlookup" target="_blank">$lookup</a> ended up being part of the community edition makes me believe that mongoDB can do that.</li>
</ul>
</div>
Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com1tag:blogger.com,1999:blog-2293041036010004622.post-42433275856002173802015-09-01T02:53:00.001+03:002015-09-01T03:00:05.958+03:00MongoDB is listening to developers/ecosystem<div style="text-align: justify;">
Today MongoDB 3.1.7 was released, and the shell includes the new CRUD API.<br />
I am happy to see this feature implemented and getting into production so fast, and really glad that I am the one who triggered the introduction of this new API.<br />
It all started 5 months ago, when PyMongo 3.0 was introduced and <a href="https://twitter.com/jessejiryudavis">A. Jesse Jiryu Davis</a>, the co-author of <a href="http://api.mongodb.org/python/current/api/pymongo/index.html">pymongo</a>, wrote about the new CRUD API on his blog, where I posted a <a href="http://emptysqua.re/blog/announcing-pymongo-3/#comment-1955330570">comment</a> complaining that it was a step backward unless it was implemented in the shell API as well.<br />
Jesse embraced the idea and opened a ticket in mongoDB's <a href="https://jira.mongodb.org/browse/SERVER-17953">Jira</a>, and things started rolling.
Today I was reading his <a href="http://emptysqua.re/blog/mongo-shell-crud-api/">blog</a> again and was excited to realise the suggestion was in production.
I am also thankful to Jesse for his kind words and attribution and feel obliged to repeat those here:
<br />
<blockquote>
<i>The official announcement focuses on bug fixes, but I'm much more excited about a new feature: the mongo shell includes the new CRUD API! In addition to the old insert, update, and remove, the shell now supports insertMany, replaceOne, and a variety of other new methods.
Why do I care about this, and why should you?
MongoDB's next-generation drivers, released this spring, include the new API for CRUD operations, but the shell did not initially follow suit. My reader Nick Milon commented that this is a step in the wrong direction: drivers are now less consistent with the shell. He pointed out, "<b>developers switch more often between a driver and shell than drivers in different programming languages.</b>" So I proposed the feature, Christian Kvalheim coded it, and Kay Kim is updating the user's manual.
It's <b>satisfying</b> when a <b>stranger's suggestion is so obviously right that we hurry to implement it</b>.<br />
......
<br />
I'm so glad we took the time to implement the new CRUD API in the shell. It was a big effort building, testing, and documenting it—<b>the diff for the initial patch alone is frightening</b>—but it's well worth it to give the next generation of developers a consistent experience when they first learn MongoDB. Thanks again to <b>Nick Milon</b> for giving us the nudge.</i>
</blockquote>
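For those who have not tried it yet, a quick illustrative taste of the new shell API (a sketch with made-up names; requires a 3.1.7+ shell):<br />
<pre>
// old shell style: insert / update / remove
// new CRUD API: consistent with the next-generation drivers
db.cars.insertOne({ plate: "ABC-123", color: "red" })
db.cars.insertMany([ { plate: "DEF-456" }, { plate: "GHI-789" } ])
db.cars.replaceOne({ plate: "ABC-123" }, { plate: "ABC-123", color: "blue" })
db.cars.updateMany({ color: "blue" }, { $set: { washed: false } })
db.cars.deleteOne({ plate: "GHI-789" })
</pre>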
On another occasion, a few days ago, I requested a feature from pymongo's team. This one was trivial and easy to implement: naming the threads that pymongo creates, for debugging purposes. Of course I could have done it myself and submitted a pull request on GitHub, but it involved naming conventions I was not sure about,
so I preferred to post a feature request in <a href="https://jira.mongodb.org/browse/PYTHON-975">jira</a>; Bernie Hackett responded "it seems a good idea" and, to my surprise, I saw it implemented in the next release a few days later. <br />
What this story tells us developers is that we can ask for features/fixes and expect to see them implemented in reasonable time, even when they require a major effort, provided that:
<br />
<ul>
<li> our requests are reasonable and technically sound.</li>
<li> we document those properly.</li>
<li> we use proper channels to communicate.</li>
<li> the company/organization has a culture/history of listening to developers/the ecosystem - and these 2 examples prove that <a href="https://www.mongodb.org/">mongoDB</a> is one of those.</li>
</ul>
</div>
Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com1tag:blogger.com,1999:blog-2293041036010004622.post-7693718251196006152011-05-18T03:35:00.011+03:002011-05-21T01:44:27.282+03:00New App Engine Pricing policy, the good the bad and the ugly.<div style="text-align: justify;">Not much good news from the cloud recently: Amazon’s AWS had a long downtime 3 weeks ago, and Google announced a very controversial <a href="http://www.google.com/enterprise/appengine/appengine_pricing.html">new pricing model</a> for App Engine during Google I/O 2011. I will concentrate on the latter, since a lot of developers keep asking what it really means for them.<br />
To start with, the announcement was premature and caught developers by surprise. Up to now many details are sketchy and a lot of things remain to be defined (remember - the devil hides in the details). I understand it was made in a hurry in order to catch up with I/O 2011, but this is not a good enough excuse. Google could have just announced the basics and waited for an official announcement when they were really ready to present a well defined pricing policy, preferably after some more consultation with developers and other platform stakeholders. Yes, there was a survey last February, but no results were published, and if I judge from <a href="https://groups.google.com/forum/#!topic/google-appengine/eBdE0hyVPhE/discussion">users' comments</a> at the time, it seems their concerns are not resolved by the new pricing policy. <br />
<b><br />
</b><br />
<b>Good things first :</b><br />
Sure there are some good things announced so let me name a few:<br />
<br />
<ul><li>App Engine is leaving Preview status so it becomes a mainstream product thus offsetting some of the worries that Google would possibly discontinue the platform.</li>
<li>It will come soon with a <a href="http://code.google.com/appengine/sla.html">99.95% uptime service level agreement</a> for paying customers, which means it is mature enough for enterprise level applications.</li>
<li>“Go” language is added to the stack along with Python and Java.</li>
<li>Back end (always on) servers are available now.</li>
<li>High Replication Datastore prices got a haircut, probably as an incentive for developers to move their applications from Master-Slave Datastore.</li>
<li>Blobstore is available now to free applications, as are back ends and other APIs. This is great since it allows newcomers to experiment with all available tools and APIs before they commit to the platform.</li>
<li>New interesting features are added to the roadmap, as well as promises for some badly needed tools (sockets, etc.).</li>
</ul><br />
<b>Bad things :</b><br />
<br />
<ul><li>Free application usage is much more restricted, with new limits applicable to the Datastore API (max 50k operations per day), email recipients per day drastically reduced, and XMPP and Channel API quotas trimmed down drastically.</li>
<li>On-demand Frontend Instances (max 24 Instance Hours). On paper this looks good compared to the 6.50 CPU hours of the current quota, until we take into consideration that the new unit is instances, with a minimum charge of 15 minutes per instance per use, as was disclosed in the forums. This, combined with the new datastore quotas, makes it absolutely prohibitive for any free application (especially Python ones, which lack multi-threading capability) to serve reliably 24 hours a day, even for a minimal amount of traffic, and defies the promised 5,000,000 free requests/month (well... that was the promise made 3+ years ago when App Engine came to life).</li>
<li>Datastore departs from the actual-CPU-cycles-used model and joins a not yet defined model that charges per various datastore operations. Although I understand the motives here (more transparency, they claim, and they are right: CPU usage is hard to understand for enterprise accountants), I do not see how they can make it measurable and transparent given the many different types of datastore operations (reads, writes, key-only fetches, deletes, etc.). For example, how will they charge for a normal fetch of 1000 entities vs. an enumerated fetch where a coder trades memory for execution speed? This and many more questions remain unanswered by the released Pricing and Features preview table.</li>
<li>Pricing based on CPU usage is over; new billing will be on a per-live-instance-per-hour basis, with a 15 minute minimum (reasons given: again, more transparency, and the inability to charge for memory used by an instance while serving). Well, this is the issue that created a lot of backlash among the developer community, and with very good reason. How in the world we are moving from millisecond pricing granularity to 15 minutes is beyond my imagination. It is against the long-standing App Engine motto “pay as you go”. Now you have to pay, going or not going: if you want to keep an instance on standby in case it is needed, or if 2 requests happen to come at the same time in a Python application, your application can serve those consuming just 100ms of CPU time, yet you have to pay for 15 minutes of instance time while the instance probably sits there idling for the remaining 899,900 ms. This is not green computing.</li>
<li>Google’s answer is that the new scheduler will take care of those things to some extent, which I really doubt; but even if this comes true, why should we be charged according to the efficiency of a scheduler we do not control?</li>
<li>Reserved instances with reduced pricing (an idea borrowed from AWS?) are a new toy; we have to see how it works out, but it makes application utilization planning and billing much more complicated.</li>
</ul><b>Ugly things :</b><br />
<br />
<ol><li>The way the new scheme was introduced created a lot of confusion among App Engine advocates, while helping its enemies spread a lot of FUD around. All in all it was close to a PR disaster. Some App Engine engineers writing in the forums and talking at I/O 2011 helped calm down the crowds for the time being, but I am not sure about the end result at the end of the day.</li>
<li style="text-align: left;">The tactics used left the developer community with the impression that GAE is concentrating on enterprise and abandoning developers and small business. This may be unfounded but if you take a look at the url pointing to the new pricing list you can see it written - loud and clear : “<a href="http://www.google.com/enterprise/appengine/appengine_pricing.html"><span class="Apple-style-span" style="color: #cc0000;"><b>enterprise</b></span>/appengine/appengine_pricing</a>”.</li>
<li>It looks like, after abandoning App Engine for Business, Google tried to accommodate that project within the existing App Engine platform by squeezing some of the breathing space used by existing developers.</li>
<li>The new pricing model looks more like IaaS than the PaaS service GAE claims to be.</li>
</ol><br />
So what do the <a href="http://googleappengine.blogspot.com/2011/05/year-ahead-for-google-app-engine.html">Years Ahead for Google App Engine</a> really look like? <br />
I have no simple answer to that, and I do not want to jump to premature conclusions until the dust raised by the latest announcements settles down and more concrete pricing policy details emerge. Unfortunately this is going to take some time; meanwhile I feel that, unless there are some changes to the policies just announced and some pleasant surprises from the GAE team when all this gets finalized, GAE’s future looks grim.<br />
Do not read me wrong: I am an early adopter and advocate of App Engine and I want it to succeed, but I am a grown-up man and can’t be turned into a fan boy applauding everything that comes out of the Googleplex.<br />
I am disappointed, but I hope things will turn out better than they look now, and I do see some signs on the horizon that tell me this is happening already, as engineers are trying to take back their baby from the short-sighted accountants and the GAE4B group who hijacked the <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZlgxdsIPrkdUrga5QTveX78Uixwlffi50D6YI6zP7gNfqhKOlY3DMtxOCi3l1QFwjZu-PP0BdfwMHi3E79ruETuhNTzUHze-CSJIspHq_M0OSVnHtrsZUgW6jg-ZceN6zibkTTDx8V9A/s175/appengine_lowres.png">plane</a>.<br />
I fully understand that accountants do have a place in managing this business and helping make it sustainable and profitable, something that will benefit Google’s shareholders and developers alike. My objection is that they do not really understand the product, its strengths and virtues, so it is up to the engineers to communicate those to them and only then finalize a pricing policy. From what I read in the <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/8a0de279f3efbf37/394a62031625b3e9#394a62031625b3e9">groups</a> and social media, developers are ready to support the product and willing to pay double or even triple the price they are paying now; what they really do not like are new policies that ruin their work and the time they spent trying hard to optimize their code. <br />
App Engine's main attractions are:<br />
<br />
<ul><li>Automatic unlimited scalability. Do not spoil this by introducing policies that fight it. I am referring to reserved instances and to passing the responsibility of fine-tuning the scheduler to developers.</li>
<li>Pay as you go. Billing by the instance, especially at such coarse granularity, is against this principle; furthermore it violently repositions App Engine as more of a VPS kind of thing, closer to an <a href="http://www.katescomment.com/iaas-paas-saas-definition/">IaaS</a> than a <a href="http://silverlighthack.com/post/2011/02/27/IaaS-PaaS-and-SaaS-Terms-Explained-and-Defined.aspx">PaaS</a> service. I believe this is also bad marketing policy, because App Engine can never compete with IaaS offerings like AWS and the vast ecosystem that exists around those products. Of course App Engine’s people argue that a managed environment can’t be compared to an unmanaged one, but their actions make this differentiation a very difficult thing.</li>
<li>Start-up and small business friendly. That used to mean a smooth, gradual transition path between the free and the pay-per-use system. The new policies destroy this by drastically lowering the quota of the free package and steeply raising the entrance fee of a paying account. This gap has to be bridged somehow. I am not talking about the $9 per month fee, which is reasonable, but about the accumulating costs of running instances, the datastore operations quota, etc. Perhaps a step in this direction that could be considered is the introduction of an intermediate pricing level, between free and fully paying applications, for developers who are not really ready for prime time and need neither an SSL certificate nor an SLA contract.</li>
</ul><br />
Steering GAE’s ship toward the enterprise is not inherently wrong for most developers, since it provides opportunities for them. But putting most of the effort into the enterprise while individual developers feel - rightly or wrongly, it does not really matter - abandoned is wrong and is not going to work in the long run. Not many Fortune 500 type customers will join unless they know there is a healthy and growing ecosystem around the product. I do not have the numbers, but Google says there are around 100k active developers; although this is not a small number, it still cannot be considered a game changer. So even if the product development strategy is looking to big enterprise customers, the timing is wrong: priority at this point in time should be given to developing the ecosystem. <br />
<br />
I want to believe all this is a nightmare that will pass soon, as GAE’s team starts to understand what is happening to their ecosystem and steers clear of trouble, and my <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZlgxdsIPrkdUrga5QTveX78Uixwlffi50D6YI6zP7gNfqhKOlY3DMtxOCi3l1QFwjZu-PP0BdfwMHi3E79ruETuhNTzUHze-CSJIspHq_M0OSVnHtrsZUgW6jg-ZceN6zibkTTDx8V9A/s175/appengine_lowres.png">plane</a><b> keeps on "flying into the clouds"</b>.<br />
<hr /><b>P.S. </b><br />
<b>Update May 18, 2011</b><br />
<div style="text-align: left;">Google's Gregory D'alesandre has posted a <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/a1bfa432e0c002a7/739169f799d8e69a">"FAQ for out of preview pricing changes"</a> where he tries to answer some of the questions, and clear up some of the mesh new policy has created. Also there are some definitions there of what consists what in new App Engine speak, looks like we got to study a new science and a brand new terminology before we can proceed.</div>IMHO this is a sisyphean task, new policy has opened Pandora's box with questions popping up from it in a much faster rate that can be answered.<br />
<b>Update May 19, 2011</b><br />
Lots of talk and fighting in the forums with developers comparing App Engine vs AWS vs Rackspace vs any_other_VPS_service_on_earth.<br />
There was no such talk before, because App Engine looked different from those products, both in terms of pricing and in terms of features.<br />
Now, thanks to the latest news, it has managed to be transformed into Yet_Another_VPS overnight, and I cannot blame developers for those comparisons.<br />
To tell you the truth, I have seen this coming ever since I saw SQL databases on the roadmap.<br />
IMHO these are signs that we are on the wrong path, but .... then again who am I to give advice ? <br />
or ... if I can quote <a href="http://twitter.com/#!/saidimu">@saidimu</a> : <i>"A true mark of a dysfunctional platform: in-fighting among developers who formerly only sang praises of the platform. #AppEngine"</i><br />
<b>Update May 20, 2011</b><br />
Plenty of new questions waiting for replies.<br />
A really interesting one by <a href="http://groups.google.com/groups/profile?enc_user=Ut4cQRIAAACxYKOAdObw8GunOrEi2q3f8rhlH0Pnl47z4AZhN98BFg">Raymond C</a>: <i>"<a href="http://groups.google.com/group/google-appengine/browse_thread/thread/d74be76b02238e9e">Is MapReduce still a flexible solution on AppEngine under the new pricing model ?</a>"</i><br />
My answer: probably not; the new pricing model makes mapreduce operations a no-no. The price will be prohibitive for such operations, especially ones that depend on many instances to run a job fast, unless those jobs used to take hours rather than minutes to complete. So I guess the team can drop the "reduce" part and the query-based mapreduce things from the roadmap, since the new model renders those irrelevant for most use cases. Also, drawing a "danger - high $$$" icon as a precaution next to the copy/delete model buttons on the control panel would be a good idea.<br />
You can read a great summary of what the new changes bring to App Engine by <a href="http://groups.google.com/groups/profile?enc_user=IzqpDhIAAAAPahlszaHM5147L9N2wnLd8rhlH0Pnl47z4AZhN98BFg">johnP</a> <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/d74be76b02238e9e/f6d5ba65cd595d58#f6d5ba65cd595d58">here</a><br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com8tag:blogger.com,1999:blog-2293041036010004622.post-46903744573093792982011-05-18T01:51:00.001+03:002017-08-03T12:37:30.119+03:00Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-77430455479913764212011-04-09T01:29:00.000+03:002011-04-09T01:29:29.808+03:00Google Maps API rate limiting and Google App Engine again<div style="text-align: justify;">@bFlood : Maps Premium is an expensive service usually used by non-public-facing, password-protected web sites, while what we are talking about here is the free Google Maps service, where G is penalizing all apps running on top of GAE, since they have to share the pool of GAE IP addresses. <br />
@Ikai : what you write applies to the old Maps (V2) API, where you can obtain an application authorization key and applications are rate-limited based on that key, although I feel that some kind of IP-based limitation exists there as well.<br />
Maps V3, which is the way to go especially for mobile apps, does not require an application key; instead there are rate limitations based solely on originating IP addresses. This puts GAE-based apps at a disadvantage, since they have to share those limits with all other GAE-based apps using the service.<br />
(see: post <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/8d4183f8488177a9">published in the App Engine group</a>)<br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-20559223389128215482011-03-25T02:51:00.003+02:002011-03-25T03:15:21.417+02:00App Engine's Email service future<div style="text-align: justify;">From time to time I have seen posts in App Engine's groups from people complaining about Email service glitches - some of them real, some not so much. Sometimes they come from people who did not bother to read the documentation, or who are simply ignorant of what an Email service is all about.<br />
The App Engine team's response alarmed me a little.<br />
IMHO it is not a good policy for GAE to abandon(?) services midway <br />
instead of improving/enhancing them. <br />
So I wrote the following <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/6b3857fc12a23621">post</a>, and now I am waiting to see how the conversation evolves.<br />
<br />
<i>@Ikai <br />
I agree with most of what you write above, and I understand that you prefer to focus on more important things; also, having run Email services for enterprises in the past, I do know it is not trivial.<br />
But ....<br />
still I believe Email service is a major asset for GAE and dropping it (or anything to that effect) will constitute a major blow to App Engine.<br />
GAE offers a limited subset of services compared to what a LAMP box or an IaaS box can offer, but being a PaaS it provides trouble-free operation and automatic scalability.<br />
An Email service is usually part of any web operation, so by dropping it you further limit the number of potential applications that fit well into what GAE offers.<br />
Of course, developers can look into alternative options, but this makes our life difficult, since we have to integrate several other third-party services in order to make a working web solution, i.e. setting up multiple accounts, feeding traffic back and forth to other services, and having to monitor and deal with one more possible point of failure. All this defies to some extent the benefits of GAE as a PaaS. <br />
Also, dropping a service at a time when the competition is adding services will send the wrong signal to App Engine's developers/users ecosystem, and, bearing in mind that G is associated with the best email service around, it can possibly turn into a PR disaster.<br />
<br />
Furthermore, being a regular reader of the groups and having followed App Engine since the very early days, I do not see that the Email service has raised a lot of issues. I believe that for most people who know what they are doing and do not abuse the service it works quite smoothly. Some of the issues raised (mainly spam flagging) <br />
a) happen to the best of Email services b) are addressed by well known techniques and practices described by others here and elsewhere.<br />
In conclusion:<br />
I would welcome any measure taken to fight service abuse like using GAE primarily as a mail server - we all understand that this is not what GAE is all about. <br />
Instead of dropping the service I would prefer to consider: <br />
a) put false positive spam flagging issues under the responsibility of developers.<br />
b) exclude the service or part of it (like delivery assurances) from the future SLA offer. <br />
c) think about the technical possibility to integrate it to gmail which is the *most* reliable email service in town.</i><br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-77522494123687688252011-02-15T01:53:00.003+02:002011-02-15T02:01:17.468+02:00Selecting distinct entities across a large table<div style="text-align: justify;">I faced <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/c3025e6cc32e29e4">this</a> kind of problem some time ago.<br />
I tried some of the solutions suggested below (in-memory sorting and filtering, encoding things into keys, etc.) and benchmarked them for both latency and CPU cycles using some test data of around 100K entities.<br />
Another approach I have taken is encoding the date as an integer (days since the start of the epoch, or days since the start of the year; the same goes for hour of day or month, depending on how much detail you need in your output) and saving this into a property. This way you turn your date query filter into an equality-only filter (which does not even need a custom index), and then you can sort or filter on other properties (see the sketch below).<br />
Benchmarking this last solution, I found that when the filtered result set is a small fraction of the unfiltered original set, it is an order of magnitude (or more) faster and more CPU-efficient. In the worst case, when filtering does not reduce the result set at all, latency and CPU usage were comparable to the previous solutions.<br />
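The original context is the App Engine datastore, but the encoding itself is a one-liner in any language; here is a sketch in JavaScript (names illustrative, assumptions mine):<br />
<pre>
// Encode a Date as a whole number of days since the Unix epoch.
// Stored as an indexed integer property, a single-day range filter
// like "start >= X && start < Y" collapses to the equality "day == N",
// freeing the inequality slot for sorting/filtering on other properties.
function dayNumber(date) {
  return Math.floor(date.getTime() / 86400000); // 86,400,000 ms per day
}

// finer granularity when needed: hours since the epoch
function hourNumber(date) {
  return Math.floor(date.getTime() / 3600000);
}
</pre>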
<br />
Hope this helps - or did I miss something?<br />
Happy coding :-)<br />
</div><a name='more'></a> <br />
<p><br />
On Feb 14, 11:51 pm, Stephen Johnson <onepagewo...@gmail.com> wrote:<br />
> Okay I think I got something that might work. Reverse the StartDate and<br />
> CarId for the key from what I said above so the key would look like this:<br />
> 2011:02:14:17:13:33:123 and the KEYS ONLY query then is:<br />
> <br />
> select __key__ where __key__ >= MAKEKEY(StartDate + CarId) && __key__ <=
> MAKEKEY(EndDate + CarId) order by __key__ DESC<br />
> <br />
> Now, you can use the Async query to start processing. You're going to get<br />
> entries that you're not interested in but you're only getting the key field<br />
> back and not the whole CarJourney entry and this key/id has the Date and Car<br />
> ID, so the first time you hit a Car ID for each Car then you have the ID for<br />
> the latest CarJourney for that car. Now, once you've found all car ID's your<br />
> looking for you can abort the query or you'll reach the end of the query<br />
> results. Now, as you're looping, store the KEYs of the entries your looking<br />
> for and then do a batch GET on memcache to retrieve as many Car (you've got<br />
> the car id) and CarJourney (you've got the carjourney id) entries that might<br />
> be stored there and then for any that you didn't get from memcache, you can<br />
> do a batch GET on the datastore using the keys/ids that you have.<br />
> <br />
> I think that if you memcache things appropriately and use the batch gets for<br />
> memcache and datastore then this might just work for you.<br />
> <br />
> Let me know what you think. It's an interestng problem,<br />
> Stephen<br />
> <br />
> On Mon, Feb 14, 2011 at 2:12 PM, Stephen Johnson <onepagewo...@gmail.com>wrote:<br />
> <br />
> <br />
> <br />
> <br />
> <br />
> <br />
> <br />
> > Or maybe it blocks on different result sets just not on getting the next<br />
> > fetch block?? Hmmm. Sounds like a tough problem.<br />
> <br />
> > On Mon, Feb 14, 2011 at 2:09 PM, Stephen Johnson <onepagewo...@gmail.com>wrote:<br />
> <br />
> >> Are you using .asList (which I think blocks like you describe), but I<br />
> >> thought asIterable or asIterator wasn't suppose to. (if you're using Java).<br />
> <br />
> >> On Mon, Feb 14, 2011 at 12:38 PM, Edward Hartwell Goose <
> >> ed.go...@gmail.com> wrote:<br />
> <br />
> >>> Hi Calvin & Stephen,<br />
> <br />
> >>> Thanks for the ideas.<br />
> <br />
> >>> Calvin:<br />
> >>> We can't do the filtering in memory. We potentially have a car making<br />
> >>> a journey (the car analogy isn't so good...) making a journey every 3<br />
> >>> seconds, and we could have up to 2,000 cars.<br />
> <br />
> >>> We need to be able to look back up to 2 months, so it could be up to<br />
> >>> 1.8 billion rows in this table.<br />
> <br />
> >>> Stephen:<br />
> >>> That's an interesting idea. However the Asynchronous api actually<br />
> >>> fires the requests synchronously, it just doesn't block. (Or at least,<br />
> >>> that's my experience).<br />
> <br />
> >>> So, at the moment we fire off 1 query (which Google turns into 2) for<br />
> >>> each site. And although the method call returns instantly, it still<br />
> >>> takes ~5 seconds in total with basic test data. If each call takes<br />
> >>> 12ms, we still have to wait 24 seconds for 2,000 sites.<br />
> <br />
> >>> So, the first call starts at time 0, the second call starts at 0+12,<br />
> >>> the third at 0+12+12... etc. With 2,000 sites, this works out about 24<br />
> >>> seconds. Once you've added in the overheads and getting the list of<br />
> >>> Cars in the first place, it's too long.<br />
> <br />
> >>> If we could start even 100 queries at the same time of time 0, that'd<br />
> >>> be superb. We thought we could do it with multithreading, but that's<br />
> >>> not allowed on App Engine.<br />
> <br />
> >>> Finally - I've also posted this on StackOverflow -<br />
> <br />
> >>>http://stackoverflow.com/questions/4993744/selecting-distinct-entitie...<br />
> <br />
> >>> I'll try and keep both updated.<br />
> <br />
> >>> Any more thoughts welcome!<br />
> >>> Ed<br />
> <br />
> >>> On Feb 14, 6:47 pm, Calvin <calvin.r...@gmail.com> wrote:<br />
> >>> > Can you do filtering in memory?<br />
> <br />
> >>> > This query would give you all of the journeys for a list of cars within<br />
> >>> the<br />
> >>> > date range:<br />
> >>> > carlist = ['123','333','543','753','963','1236']<br />
> >>> > start_date = datetime.datetime(2011, 1, 30)<br />
> >>> > end_date = datetime(2011, 2, 10)<br />
> <br />
> >>> > journeys = Journey.all().filter('start >', start_date).filter('start<br />
> >>> <',
> >>> > end_date).filter('car IN', carlist).order('-start').fetch(100)<br />
> >>> > len(journeys)<br />
> >>> > 43 # <- since it's less than 100 I know I've gotten them all
> <br />
> >>> > then since the list is sorted I know the first entry per car is the<br />
> >>> most<br />
> >>> > recent journey:<br />
> <br />
> >>> > results = {}<br />
> >>> > for journey in journeys:<br />
> >>> > ... if journey.car in results:<br />
> >>> > ... continue<br />
> >>> > ... results[journey.car] = journey<br />
> <br />
> >>> > len(results)<br />
> >>> > 6<br />
> <br />
> >>> > for result in results.values():<br />
> >>> > ... print("%s : %s" % (result.car, result.start))<br />
> >>> > 753 : 2011-02-09 12:38:48.887976<br />
> >>> > 1236 : 2011-02-06 13:59:35.221003<br />
> >>> > 963 : 2011-02-08 14:03:54.587609<br />
> >>> > 333 : 2011-02-09 10:40:09.466700<br />
> >>> > 543 : 2011-02-09 15:28:53.197123<br />
> >>> > 123 : 2011-02-09 14:09:02.680870<br />
> <br />
</p>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-83336543677911804102010-11-24T01:07:00.000+02:002010-11-24T01:07:40.158+02:00Re: Policy for instance startup<div style="text-align: justify;">Not being a Googler, I can't help much with this.<br />
Having said that, I suspect there is a kind of built-in algorithm that does some kind of application profiling - taking into account QPS, response times, and other parameters - and adjusts instance lifetime, the number of instances to start, etc.<br />
This could possibly explain the difference in behaviour between your staging and production apps.<br />
<br />
happy coding;-)<br />
</div><a name='more'></a><br />
<a href="http://groups.google.com/group/google-appengine/browse_thread/thread/7e6757f5cc4f370e"></a><br />
<p>On Nov 23, 11:58 am, Tomas Alaeus <<a href="mailto:tala...@gmail.com">tala...@gmail.com</a>> wrote:<br />
> I'm curious when exactly instances are started. I have two<br />
> applications running on GAE, one of them have billing enabled. The one<br />
> with billing enabled have been stress tested and have at most started<br />
> 100 simultaneous instances. The other is just for testing and staging<br />
> purposes and have never handeled much traffic.<br />
><br />
> What I experience is that the staging server never starts more<br />
> instances than needed. If a single person views pages it will never<br />
> load more than a single instance. The other one however seems to start<br />
> about 5 instances before anyone can get hot responses, and it will<br />
> continue to start up to about 10 before realizing that ~1 QPS isn't<br />
> that much traffic (the requests finish in about 100ms each).<br />
><br />
> So, why does GAE boot up lots of instances even though 1 instance can<br />
> serve the incoming traffic without a problem (the requests doesn't<br />
> even overlap, so no waiting is needed)?<br />
><br />
> I realize that this isn't a very big issue, since when it gets lots of<br />
> traffic it will indeed need all the instances. I'm just curious why it<br />
> happens.Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-27716274759167643252010-11-14T23:32:00.000+02:002010-11-14T23:32:19.328+02:00World Countries and IP geocoding API for App Engine<div style="text-align: justify;">I have been writing here <a href="http://gaengine.blogspot.com/2010/07/app-engine-google-geocoding-service-ii.html">again</a> and <a href="http://gaengine.blogspot.com/2010/05/google-maps-api-quotas-and-app-engine.html">again</a> about the inherent problem that App Engine-based applications have in using third-party APIs with quota limits based on IP addresses, since all of them are served from the same block of IPs allocated by Google © and therefore have to share those quotas with other applications hosted on App Engine that use the same service.<br />
<br />
Today I am happy to offer a free IP geocoding and world country information service API, and I hope this solves, to some extent, the problem of server-side IP geocoding for fellow App Engine developers.<br />
<br />
A detailed service description and more information are provided here: <a href="http://www.geognos.com/geo/en/world-countries-API.html">"World countries API"</a><br />
<br />
Happy coding:-)<br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-53733088664380229982010-10-27T17:59:00.000+03:002010-10-27T17:59:01.752+03:00maximum number of requests that may be handled in a single process lifetime<div style="text-align: justify;">I was doing some load tests on app engine today when I noticed a new Info message in the logs: <b><i>"After handling this request, the process that handled this request reached the maximum number of requests that may be handled in a single process' lifetime, and exited normally."</i></b><br />
<br />
So what is that supposed to mean?<br />
Up to now we knew that application instances are automatically terminated after some inactivity timeout. If I understand this message correctly, we now know that a process can also be terminated after handling a certain number of requests. How many exactly? Is this a new magic number? Let's hope we will get some definite answers from the always helpful App Engine team.<br />
<br />
I am doing some latency optimization (lazy imports) in this app based on the assumption that an instance will stay alive as long as more requests keep coming in, and I was scared this new number would spoil my optimization logic. After some more testing I found out that this is not the case, since whatever this number is, it must be in the order of thousands of requests - so no big trouble in my use case. Still, I think App Engine's developers deserve to know more about such parameters; it helps both developers and the platform. </div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-80109268714638975322010-10-23T22:43:00.000+03:002010-10-23T22:43:54.199+03:00Download app code feature or bug ? - download_app<div style="text-align: justify;">I think the App Engine developer community has, by a vast majority, rejected the idea of code downloading, at least as a default (opt-out) option.<br />
I also do not like the idea of a payable service, since it :<br />
<ul><li>will complicate the pricing model</li>
<li>will attract criticism against the platform and </li>
<li>make guys who are in the business of doing unreasonable GAE vs S3 vs whatever_looks_like_cloud comparisons happy.</li>
</ul>But ... then again who am I to tell mother G what to do ? -:)<br />
<a href="http://groups.google.com/group/google-appengine/browse_thread/thread/b6d2bb7c23eadddd">( see my post at google-appengine group )</a><br />
<a name='more'></a><br />
On Oct 23, 10:00 pm, "A. Stevko" <<a href="mailto:andy.ste...@gmail.com">andy.ste...@gmail.com</a>> wrote:<br />
> IMO, I think source code download is a great disaster recovery option that<br />
> should have a $$$ price tag associated with it.<br />
> On Sat, Oct 23, 2010 at 5:08 AM, Tim Hoffman <<a href="mailto:zutes...@gmail.com">zutes...@gmail.com</a>> wrote:<br />
> > Hi<br />
><br />
> > This was nearly introduced, and the community overwhelmingly rejected<br />
> > the proposal.<br />
> > There are a number of issues that such a facility introduces.<br />
><br />
> > Using a shared fileservice or source code control (actually a much<br />
> > better strategy) is what<br />
> > you should be using.<br />
><br />
> > I don't think Ikai was being humorous. It might be worth reviewing<br />
> > this thread to see just how negative the facility was received.<br />
><br />
> > Rgds<br />
><br />
> > Tim Hoffman<br />
><br />
> > On Oct 23, 9:43 am, mykhal <<a href="mailto:michal.bo...@gmail.com">michal.bo...@gmail.com</a>> wrote:<br />
> > > On Oct 18, 10:26 pm, "Ikai Lan (Google)" <<a href="mailto:ikai.l%2Bgro...@google.com">ikai.l+gro...@google.com</a><<a href="mailto:ikai.l%252Bgro...@google.com">ikai.l%2Bgro...@google.com</a>><br />
><br />
> > > wrote:<br />
><br />
> > > > Have you looked into Dropbox?<br />
><br />
> > > ><a href="https://www.dropbox.com/">https://www.dropbox.com/</a><br />
><br />
> > > > There is a free offering. </div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-48614065683755585362010-10-15T15:07:00.000+03:002010-10-15T15:07:58.440+03:00Instances console in App engine's Admin Tools<div style="text-align: justify;">It seems a new feature has just been rolled out in App Engine's production Admin tools - instances - it shows a view of total running instances along with QPS, Latency and memory used Memory as well as averages for the above values.<br />
There is also a summary of the above in application's dashboard.<br />
It is a very useful feature that will help a lot with application performance monitoring and resolving issues on when and how many new instances are started and killed.<br />
Well done dev team. :-)<br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-55966520131571522382010-09-17T04:22:00.000+03:002010-09-17T04:22:05.589+03:00Yahoo claims about spam filtering<div style="text-align: justify;"><a href="http://techcrunch.com/2010/09/16/live-from-yahoos-product-runway-event/">From Techcrunch :</a><br />
Yahoo claims 55% less spam than Gmail, 40% less than hotmail. <br />
I am sure that if Yahoo had not been an MS partner, the numbers would have been reversed.<br />
Something like <b>55% less spam than hotmail. 40% less than Gmail.</b><br />
Anyway... let's wait and hear what MS has to say about those figures.<br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com1tag:blogger.com,1999:blog-2293041036010004622.post-88672785012075759262010-09-16T01:06:00.000+03:002010-09-16T01:06:18.499+03:00App Engine scalability issues<div style="text-align: justify;">There is an issue that has been going on for some time now regarding App Engine's scaling capability.<br />
We know that all of App Engine's user-facing HTTP requests should be processed within a time frame of 30 seconds, otherwise an exception is thrown. That is a rule that started with App Engine's introduction some two and a half years ago, and until some time ago it was the only applicable law, as far as I know. Then, some months ago, a new magic number was introduced in group posts. It started as 1000ms, within which an application has to respond, otherwise it cannot scale properly (no new threads will be started to cope with increasing demand); later this number was lowered to 800ms. <br />
I raised this issue a month ago <a href="http://groups.google.com/group/google-appengine-python/browse_thread/thread/6353b6232e8851aa/3f907598c0826ca0?q=#3f907598c0826ca0">here</a>: <br />
" This "800ms" rule started as 1000ms some time ago, now it moved to 800ms and 400ms enters the scene. I am afraid it has became a moving target approaching 0ms too fast. Somebody <b>must stop</b> the bar somewhere."<br />
<b>But I got no reply.</b><br />
In that thread even "sub-400ms" is mentioned as the optimal number by <a href="http://groups.google.com/groups/profile?enc_user=GuejVxEAAABpq3LE9LlvXst-Z8GVcBllkdEasx1kiYTQavV7mdW13Q">Ikai L</a>.<br />
Today it turns out nobody stopped the bar; instead this magic number has decreased even further, to <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/e56772cb5c2a3060">700ms</a>, which brings my above-mentioned prophecy of approaching 0ms a step closer to coming true. I agree with all the people in this group who complain that this is troublesome and concerning. The main advantage of App Engine is <b>scalability</b>, but this number kills scalability in most practical application scenarios.<br />
It is understandable to raise the bar a little to reflect advances in datastore latency, but talking about 700 and sub-400 ms renders App Engine's scalability irrelevant for most, if not all, practical use cases.<br />
There are other issues here that I am sure concern a lot of us poor App Engine developers:<br />
<br />
<b>1) The ever-changing numbers -</b> I think it is reasonable to expect that an application put into production a year ago should have the same or better scalability behaviour today or a year from now. I know we operate on a beta platform, so we have to take some risk and follow up and improve our applications when Google changes the parameters under which we operate, but we will never be able to keep up with this kind of drastic change.<br />
<b>2) Transparency - </b>We should be informed about those limits, so we know in advance whether what we have in mind is doable with App Engine. I understand there can be some trade secrets for engineering and/or marketing reasons, but we must still be kept in the loop and at least know the basic things that affect our apps.<br />
Having complained about this, I also want to be fair to App Engine and its team. Up to now the usual practice has been that most of the limits that affect our applications (quotas, etc.) improve over time, not vice versa. This is an exception, and I think it must be rectified. <br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-78956948235196963182010-09-09T18:31:00.004+03:002010-09-09T18:49:52.634+03:00Bing search returns a stack trace<div class="separator" style="clear: both; text-align: left;"><a href="http://3.bp.blogspot.com/_kzCpvbAy3Rc/TIj_qe0PBiI/AAAAAAAAAOo/oYYxkxnXU80/s1600/Screenshot-where+are+highest+mountains+in+the+world+-+Bing+-+Google+Chrome.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="151" src="http://3.bp.blogspot.com/_kzCpvbAy3Rc/TIj_qe0PBiI/AAAAAAAAAOo/oYYxkxnXU80/s200/Screenshot-where+are+highest+mountains+in+the+world+-+Bing+-+Google+Chrome.png" width="200" /></a></div>Haven't seen that before! Bing search returned (just once) the following stack trace after a manual search for: <b>where are highest mountains in the world</b>. So you see, it is not only us small individual developers who can force a stupid piece of code to execute; <b><i>bi(n)g</i></b> guys are quite capable of making it happen too.<br />
<a name='more'></a><br />
Server Error in '/search' Application.<br />
<br />
An invalid reloadable resource was passed.<br />
<br />
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. <br />
<br />
Exception Details: System.ArgumentException: An invalid reloadable resource was passed.<br />
<br />
Source Error: <br />
<br />
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.<br />
<br />
Stack Trace: <br />
<br />
<br />
[ArgumentException: An invalid reloadable resource was passed.]<br />
Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer..ctor(IConfigStore configStore, ReloadableResourceStore reloadableResourceStore) in e:\bt\986564\private\frontend\snr\Core\Configuration\CompositeConfigStoreRequestInitializer.cs:24<br />
BuildUp_Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer(IBuilderContext ) +609<br />
Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\BuildPlan\BuildPlanStrategy.cs:40<br />
Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\StrategyChain.cs:86<br />
<br />
[BuildFailedException: The current build operation (build key Build Key[Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer, null]) failed: An invalid reloadable resource was passed. (Strategy type BuildPlanStrategy, index 3)]<br />
Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\StrategyChain.cs:112<br />
Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Builder.cs:61<br />
Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:463<br />
<br />
[ResolutionFailedException: Resolution of the dependency failed, type = "Microsoft.Search.Frontend.Configuration.IConfigStoreRequestInitializer", name = "". Exception message is: The current build operation (build key Build Key[Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer, null]) failed: An invalid reloadable resource was passed. (Strategy type BuildPlanStrategy, index 3)]<br />
Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:475<br />
Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:155<br />
Microsoft.Practices.Unity.UnityContainerBase.Resolve() in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainerBase.cs:466<br />
Microsoft.Search.Frontend.CoreUX.MvcApplication.<configurecontainer>b__a(IUnityContainer c) in e:\bt\986564\private\frontend\Serp\app\Global.asax.cs:775<br />
Microsoft.Practices.Unity.StaticFactory.<>c__DisplayClass1`1.<registerfactory>b__0() in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity.StaticFactory\StaticFactoryExtension.cs:42<br />
Microsoft.Practices.Unity.StaticFactory.FactoryDelegateBuildPlanPolicy.BuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity.StaticFactory\FactoryDelegateBuildPlanPolicy.cs:36<br />
Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\BuildPlan\BuildPlanStrategy.cs:40<br />
Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\StrategyChain.cs:86<br />
<br />
[BuildFailedException: The current build operation (build key Build Key[ConfigStoreService.IConfigStoreRequest, null]) failed: Resolution of the dependency failed, type = "Microsoft.Search.Frontend.Configuration.IConfigStoreRequestInitializer", name = "". Exception message is: The current build operation (build key Build Key[Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer, null]) failed: An invalid reloadable resource was passed. (Strategy type BuildPlanStrategy, index 3) (Strategy type BuildPlanStrategy, index 3)]<br />
Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Strategies\StrategyChain.cs:112<br />
Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\ObjectBuilder\Builder.cs:61<br />
Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:463<br />
<br />
[ResolutionFailedException: Resolution of the dependency failed, type = "ConfigStoreService.IConfigStoreRequest", name = "". Exception message is: The current build operation (build key Build Key[ConfigStoreService.IConfigStoreRequest, null]) failed: Resolution of the dependency failed, type = "Microsoft.Search.Frontend.Configuration.IConfigStoreRequestInitializer", name = "". Exception message is: The current build operation (build key Build Key[Microsoft.Search.Frontend.Configuration.CompositeConfigStoreRequestInitializer, null]) failed: An invalid reloadable resource was passed. (Strategy type BuildPlanStrategy, index 3) (Strategy type BuildPlanStrategy, index 3)]<br />
Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:475<br />
Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainer.cs:155<br />
Microsoft.Practices.Unity.UnityContainerBase.Resolve() in e:\Builds\Unity\UnityTemp\Compile\Unity\Src\Unity\UnityContainerBase.cs:466<br />
Microsoft.Search.Frontend.CoreUX.MvcApplication.InitializeRms() in e:\bt\986564\private\frontend\Serp\app\Global.asax.cs:650<br />
System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80<br />
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171<br />
<br />
Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927<br />
</registerfactory></configurecontainer>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-86720985254535558322010-09-08T03:47:00.000+03:002010-09-08T03:47:33.947+03:00Want to build voice, SMS, IM and Twitter apps in python? Tropo WebAPI library now available<a href="http://blog.tropo.com/2010/09/07/want-to-build-voice-sms-im-and-twitter-apps-in-python-tropo-webapi-library-now-available/">Want to build voice, SMS, IM and Twitter apps in python? Tropo WebAPI library now available</a>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-63314919322459373602010-09-05T01:11:00.001+03:002010-09-05T01:19:13.025+03:00Best practice to import libraries or frameworks in App engine<div style="text-align: justify;">The usual procedure is to put the library in your application's path and then upload it with your application using appcfg update.<br />
This works, but sometimes we have to deal with huge libraries consisting of hundreds of Python files, which may exceed the 1000 files per application limit, or at least eat into the number of files available to your own code. Maintaining all those files also becomes a nightmare.<br />
So what is the proper way?<a name='more'></a><br />
Well, Python's <b>import works straight out of the box with a zip file</b> as well: if it finds one on your sys.path it treats it as a directory (for more complicated situations you can look at the zipimport module).<br />
Compress the library into a zip file.<br />
You can optionally include an empty "__init__.py" in each directory (if it is not there already) so it can be treated as a normal Python package. <br />
Put just the zip file in your application - or better, keep the file in one place and only put a link to it in your application's directory (this way you can maintain the library in a single place and use it in many applications).<br />
Then whenever you want to import from that library you can use something like:<br />
<div class="nm-code">import sys<br />
import os<br />
myLibrary= os.path.dirname(__file__)+'/mylibrary.zip'<br />
#(os.path.dirname(__file__) gives you the directory of currently executing script)<br />
if not mylibrary in sys.path:sys.path.insert(0, mylibrary) <br />
#(or sys.path.append(mylibrary) <br />
import mylibrary</div><b>Thats it !</b><br />
</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-28896084367615012792010-09-03T02:43:00.005+03:002010-09-03T03:51:01.932+03:00Allegro Non Troppo - Using Tropo API in App Engine<div style="text-align: justify;">I was looking for an SMS/voice gateway for an application of mine, and after thoroughly looking into Clickatell, Twilio etc. I opted for Voxeo's Tropo.<br />
Don't ask too much about the reasons; I am not ready for a war over which one is better or cheaper to use. <br />
I just felt that Tropo's "develop free - pay as you go once in production" business model fits well with App Engine's model, which is along the same lines; the same applies to application scaling, which is analogous to App Engine's.<br />
<br />
So I decided to give it a try, and here is my preliminary assessment of the service:<br />
<a name='more'></a>Registration was a 30-second job, after which you can proceed and create an application.<br />
Tropo provides two different APIs: a scripting API, where you host your scripts on their servers, and a Web API.<br />
Although I knew scripting would be much easier, I chose the Web API for the flexibility it provides.<br />
Setting up an application is also very easy: you just provide URLs that will handle JSON requests from Tropo, and then you get two keys (outbound tokens), one for voice and one for messaging applications, as well as some telephone, SIP, iNum and other numbers you can use.<br />
You can test those tokens right from that page: click on one, select "launch" in the pop-up, and your server, if set up properly, will receive an HTTP GET on the URL you specified.<br />
<div style="text-align: left;">One note about the URLS initially i specified a URL of the form "http://www.myapp.appspot.com/trop/SMS/" it seems this form was not working then I tried something in the form "http://www.myapp.appspot.com/trop/SMS/foo.json" and it worked - but can't be sure if it is really so or I have missed something.</div>Now the part that puzzled me for half a day "how you initiate an action request from tropo ?" . i was very unlucky since I could not find a proper example or framework in python since this was only published just a few days ago and was not indexed by google yet, and I did not bother to ask on the tropo's forum because I was thinking I will sort it out my self in a matter of minutes. Well I did but it was a matter of many hours.<br />
What was the problem? I didn't quite get the handshaking process, and I thought I would initiate an action (i.e. sending an SMS) by sending the requesting JSON object to Tropo. I did not even bother to check the online debugging utility, as I thought it would work only for the scripting API; by the way, after I discovered it works for the Web API as well, I find it a very good tool - you can find a link to it at the bottom of any (?) Tropo page. Well, to make a long story short: you initiate a request by issuing a GET to Tropo's API URL, the API then POSTs a JSON object to your application's URL to start a session, and you issue your action command by responding to this POST with a JSON object.<br />
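To make that handshake concrete, here is a minimal hand-rolled sketch of such a handler (webapp on the Python 2.5 runtime). The JSON field names here are illustrative only - double-check them against Tropo's WebAPI docs, or let the framework mentioned below generate them for you - and the destination number is a made-up placeholder:<br />
<div class="nm-code"><pre>from django.utils import simplejson as json  # no json module on Python 2.5
from google.appengine.ext import webapp

class TropoSMSHandler(webapp.RequestHandler):
    def post(self):
        # Tropo starts the session by POSTing a JSON object to our URL
        session = json.loads(self.request.body)
        # we answer that POST with the action we want performed;
        # field names are illustrative, '+301234567890' is hypothetical
        answer = {'tropo': [{'message': {
            'say': {'value': 'Hello from App Engine'},
            'to': '+301234567890',
            'network': 'SMS'}}]}
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps(answer))
</pre></div>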
By the time I was close enough to the solution I received an email from Adam Kalsey of Tropo (a nice fellow who tries to be helpful all the time), who was checking how I was doing with the API. I mentioned something like "not much documentation for Python" and he replied right away, pointing me to the framework ( <a rel="nofollow" href="http://github.com/tropo/tropo-webapi-python">http://github.com/tropo/tropo-webapi-python</a> ). Believe me, it is a nice small MIT-licensed framework by Dan York, with tests and examples, that will get you up and running on App Engine in no time. Pity I discovered it a little too late, but I can only blame myself for that. <br />
<br />
Lessons learned: after we have dealt with a problem for some time (how long?) and before we start scratching our heads, better reach out to the community; more probably than not somebody has been there before and is willing to give a hand. I know, I know, I mentioned this same lesson some posts ago when I was dealing with the Panoramio API; let's hope this time I will really learn it. <br />
<br />
So far so good with Tropo's Web API, but I have not done much development and testing yet. So stay tuned for more about Tropo and App Engine/Python when I have more to say about the service.</div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com2tag:blogger.com,1999:blog-2293041036010004622.post-72437985178992631162010-08-18T00:47:00.000+03:002010-08-18T00:47:29.360+03:00Understanding the concurrent requests limitThis "800ms" rule started as 1000ms some time ago; now it has moved to<br />
800ms, and 400ms enters the scene. I am afraid it has become a moving<br />
target, approaching 0ms too fast. Somebody must set the bar<br />
somewhere.<p>Happy coding ;-)<br />
(From my post in : <a href="http://groups.google.com/group/google-appengine-python/browse_thread/thread/6353b6232e8851aa">http://groups.google.com/group/google-appengine-python/browse_thread/thread/6353b6232e8851aa</a> )Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com0tag:blogger.com,1999:blog-2293041036010004622.post-45445548192828991052010-07-17T20:25:00.000+03:002010-07-17T20:25:49.780+03:00A bug in tweepy streaming APII found a bug in the <a href="http://github.com/joshthecoder/tweepy">tweepy</a> streaming API.<br />
It can't filter by tags containing non-Latin characters. <br />
I was astonished, because this was working when I used it some months ago (version 1.3 if I remember correctly). <br />
So here is my small contribution to the project. This:<br />
<div class="nm-code"><pre>def filter(self, follow=None, track=None, async=False):
    params = {}
    self.headers['Content-type'] = "application/x-www-form-urlencoded"</pre></div>should be:<br />
<div class="nm-code"><pre>def filter(self, follow=None, track=None, async=False):
    params = {}
    self.headers['Content-type'] = "application/x-www-form-urlencoded; charset=utf-8"</pre></div>
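For reference, here is a sketch of the kind of call that exposed the bug for me (the Stream constructor follows the basic-auth style of tweepy's 1.x examples; the credentials and the Greek track term are placeholders):<br />
<div class="nm-code"><pre># -*- coding: utf-8 -*-
from tweepy.streaming import Stream, StreamListener

class PrintListener(StreamListener):
    def on_status(self, status):
        print status.text

# tweepy 1.x era streams used basic auth
stream = Stream('username', 'password', PrintListener())
# without "charset=utf-8" in the Content-type header this
# non-Latin track term never matches anything
stream.filter(track=['Αθήνα'])
</pre></div>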
I have only tested streaming and only tag filtering - probably this applies to other parts but I am not sure.Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com9tag:blogger.com,1999:blog-2293041036010004622.post-73649698351058803182010-07-09T01:46:00.006+03:002010-11-16T13:30:54.734+02:00App Engine & Google Geocoding Service IIThis issue comes and goes in this thread all the time.<br />
I also suggested in the past delegating the job to the client, since I<br />
couldn't see a scenario where we use a server-side request and stay within<br />
the ToS.<br />
see here : <a href="http://gaengine.blogspot.com/2010/05/google-maps-api-quotas-and-app-engine.html">http://gaengine.blogspot.com/2010/05/google-maps-api-quotas-and-app-engine.html</a><br />
I was wrong. Recently I ran into a case where I had to do this and<br />
am still sure I am within the legal limits: it is when you need the<br />
geocoding service in order to publish a static map served from<br />
Google.<br />
So this is a real problem and the issue has to be resolved somehow,<br />
although I think we are discussing it in the wrong group. IMO<br />
this is an issue for the Maps group and we should address it there.<br />
Happy coding:-)<br />
<br />
<b>Update:</b> For an alternative solution you can take a look <a href="http://gaengine.blogspot.com/2010/05/google-maps-api-quotas-and-app-engine.html">here</a>.<br />
<a name='more'></a><br />
<a href="http://groups.google.com/group/google-appengine/browse_thread/thread/28a0a39a479838da"> http://groups.google.com/group/google-appengine/browse_thread/thread/28a0a39a479838da</a><br />
<br />
On Jul 8, 10:29 pm, Zarko <<a href="mailto:eladza...@gmail.com">eladza...@gmail.com</a>> wrote:<br />
> Yes, it is in use with a Google map, actually I am trying to save<br />
> requests to Google from clients...<br />
><br />
> On Jul 8, 6:43 pm, Barry Hunter <<a href="mailto:barrybhun...@gmail.com">barrybhun...@gmail.com</a>> wrote:<br />
><br />
><br />
><br />
> > On 8 July 2010 16:08, Zarko <<a href="mailto:eladza...@gmail.com">eladza...@gmail.com</a>> wrote:<br />
><br />
> > > By the way I can't transfer the job to the client (it's not a browser<br />
> > > app).<br />
><br />
> > Are you sure your app is using the API legally then?<br />
><br />
> > *Note: **the Geocoding API may only be used in conjunction with a Google<br />
> > map; geocoding results without displaying them on a map is prohibited.*<br />
> > *<br />
> > *<br />
> > fromhttp://<a href="http://code.google.com/apis/maps/documentation/geocoding/">code.google.com/apis/maps/documentation/geocoding/</a>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com4tag:blogger.com,1999:blog-2293041036010004622.post-15674390619018068512010-06-25T23:56:00.000+03:002010-06-25T23:56:21.058+03:00Re: What is the best way to convert a dictionary of data into a datastore entity?If you are going to use your dictionaries as dictionaries back in your<br />
program, then why not save them into a Blob or Text datastore property<br />
(by pickling, repr/eval, or other means)?<br />
<a name='more'></a><br />
There was a long thread here about the most efficient way to do this:<br />
<a href="http://groups.google.com/group/google-appengine-python/browse_thread/thread/8b07e7c24cb434f2/035f1b4e247deef9">http://groups.google.com/group/google-appengine-python/browse_thread/thread/8b07e7c24cb434f2/035f1b4e247deef9</a><br />
<a href="http://groups.google.com/group/google-appengine-python/browse_thread/thread/b929234f093f355c/806aa0c3de0d2528">http://groups.google.com/group/google-appengine-python/browse_thread/thread/b929234f093f355c/806aa0c3de0d2528</a><br />
An example where I use repr/eval:<br />
<div class="nm-code"><pre>from google.appengine.ext import db
class DicPropertyEval(db.Property):
data_type = dict
def get_value_for_datastore(self, model_instance):
return db.Text(repr(super(DicPropertyEval,self).get_value_for_datastore(model_instance) ) )
def make_value_from_datastore(self, value):
if value is None:
return dict()
return eval(value)
def default_value(self):
if self.default is None:return dict()
else:return super(DicPropertyEval,self).default_value().copy()
class DataStoreDic(db.Model):
dicVal = DicPropertyEval(indexed=False)
</pre></div>The above method is not the fastest one, but I sometimes prefer it because<br />
the dictionary stays readable and editable by a human while in the datastore.<br />
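For completeness, a quick usage sketch of the property defined above (remember that eval assumes you only ever read back data your own code wrote):<br />
<div class="nm-code"><pre>entity = DataStoreDic(dicVal={'name': u'Nick', 'visits': 42})
key = entity.put()
# the dict round-trips through repr/eval transparently
restored = DataStoreDic.get(key).dicVal
assert restored['visits'] == 42
</pre></div>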
<br />
happy coding ;-)<br />
<br />
On 25 Ιούν, 20:57, Barry Hunter <<a href="mailto:barrybhun...@gmail.com">barrybhun...@gmail.com</a>> wrote:<br />
> I'm no python expert, but I think Expando Class is designed for this<br />
> sort of situation<br />
><br />
> <a href="http://code.google.com/appengine/docs/python/datastore/expandoclass.html">http://code.google.com/appengine/docs/python/datastore/expandoclass.html</a><br />
><br />
> On 25 June 2010 17:52, richardhenry <<a href="mailto:richardhe...@me.com">richardhe...@me.com</a>> wrote:<br />
><br />
> > I have millions of Python dictionaries that have some common fields<br />
> > (clearer for me to call them "fields" than "keys" here), but the<br />
> > fields will vary a great deal from one dict to another.<br />
><br />
> > Since I don't know in advance the fields that a dictionary will<br />
> > contain, I was thinking of iterating through the values in the dict,<br />
> > picking a data store Property (IntProperty(), StringProperty(), etc.)<br />
> > class to use, and then creating a class using type('Content',<br />
> > (db.Model,), {"name": StringProperty() ...}). Then I can store the<br />
> > data from the dictionary in this class and save() it.<br />
><br />
> > Am I headed down the right route? Does this make sense, or am I<br />
> > missing some built-in feature that makes this much easier?<br />
><br />
> > To summarize: I have millions of dictionaries that I want to convert<br />
> > to datastore entities. All of the dict are related, and they do have<br />
> > *some* common fields, but the majority of the data varies from dict to<br />
> > dict. What's the best way to convert these dicts into entities in the<br />
> > datastore?<br />
><br />
> > RichardNickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com2tag:blogger.com,1999:blog-2293041036010004622.post-686874970415438202010-06-17T01:36:00.002+03:002010-06-21T11:40:18.034+03:00SDK version 1.3.5 PrereleaseThe App Engine team announced the <a href="http://groups.google.com/group/google-appengine/browse_thread/thread/6a481b951118b4f2">V 1.3.5</a> prerelease; I do like the release-early, release-often attitude.<br />
From a quick source code reading, what I see as the most important new features in this release are support for Content-Range headers for blobs and a stream-like interface to the blobstore, which will extend blobstore usage to some new application scenarios.<br />
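For instance, here is a minimal sketch of what that stream-like interface looks like in the SDK source (the class is called BlobReader there; treat the exact API as provisional until the final release notes land, and blob_key here is a hypothetical key from an earlier upload):<br />
<div class="nm-code"><pre>from google.appengine.ext import blobstore

# blob_key is a hypothetical BlobKey obtained from an earlier upload
reader = blobstore.BlobReader(blob_key)
first_kb = reader.read(1024)   # read just the first KB of a huge blob
reader.seek(0)                 # file-like: seek(), tell() and iteration work
for line in reader:
    pass  # process each line without loading the whole blob into memory
</pre></div>Nickhttp://www.blogger.com/profile/06695334163724983162noreply@blogger.com2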