Scribd.Net.Service.User.Documents - After Upload

Nov 5, 2009 at 8:58 PM
Edited Nov 5, 2009 at 8:59 PM

I am encountering an error when accessing Scribd.Net.Service.User.Documents after uploading a document with Scribd.Net.Document.Upload, but only immediately after the upload. A few seconds later everything works.

I am calling Scribd.Net.Service.User.ReloadDocuments() before accessing the documents, but it has no effect (the error still occurs).

I tried two different approaches to rule out an unrelated error, but both fail as shown below:

// Method A
// The repeater binding is of type Scribd.Net.Document

DocumentRepeater.DataSource = Scribd.Net.Service.User.Documents;
DocumentRepeater.DataBind();

// Method B
// Skip the UI entirely to see exactly what is going on

foreach (Scribd.Net.Document doc in Scribd.Net.Service.User.Documents)
{
    Response.Write("<img src='" + doc.ThumbnailUrl + "'>");
    Response.Write(doc.Title);
    Response.Write("<hr>");
}


Both methods throw the same error:

System.FormatException was unhandled by user code
Input string was not in a correct format.

 

// below is the exception detail

 

System.FormatException was unhandled by user code

  Message="Input string was not in a correct format."

  Source="mscorlib"

  StackTrace:

       at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)

       at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info)

       at System.Int32.Parse(String s)

       at Scribd.Net.Document.Download(Int32 documentId) in C:\Projects\Scribd.Net\Document.cs:line 1188

       at Scribd.Net.User.get_Documents() in C:\Projects\Scribd.Net\User.cs:line 345

       at FiledBy.modules.creator.scribd.Page_Load(Object sender, EventArgs e) in *[deleted for security purposes - internal file]*

       at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)

       at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)

       at System.Web.UI.Control.OnLoad(EventArgs e)

       at System.Web.UI.Control.LoadRecursive()

       at System.Web.UI.Control.LoadRecursive()

       at System.Web.UI.Control.LoadRecursive()

       at System.Web.UI.Control.LoadRecursive()

       at System.Web.UI.Control.LoadRecursive()

       at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

  InnerException: 


 

 

As stated above, this only happens during the first few seconds after the upload has occurred; after that, no error is thrown.
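In case it helps anyone hitting the same timing window, one possible client-side workaround is to retry the access with a short delay until Scribd has finished processing the upload. This is just a sketch of a generic retry helper; it is not part of Scribd.Net, and the assumption is that the documents list becomes valid once indexing completes:

```csharp
using System;
using System.Threading;

class RetrySketch
{
    // Calls the supplied function up to maxAttempts times, sleeping
    // delayMs between failed attempts, and rethrows the last failure.
    public static T WithRetries<T>(Func<T> action, int maxAttempts, int delayMs)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (FormatException)
            {
                if (attempt >= maxAttempts)
                {
                    throw;
                }
                Thread.Sleep(delayMs);
            }
        }
    }

    static void Main()
    {
        int calls = 0;
        // Simulate the forum scenario: the first two calls fail the way
        // the documents list does right after upload, the third succeeds.
        int value = WithRetries(() =>
        {
            calls++;
            if (calls < 3) { throw new FormatException(); }
            return 42;
        }, 5, 10);
        Console.WriteLine(value); // 42
    }
}
```

On the page this would wrap the list access, e.g. WithRetries(() => Scribd.Net.Service.User.Documents, 5, 2000).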

 

HELP!!!

 

Thanks 


Nov 13, 2009 at 2:47 PM

Still have not heard anything back on this issue. Any ideas?

Coordinator
Nov 13, 2009 at 7:55 PM

It looks like the problem is that the "page_count" value coming back from Scribd.com isn't a number, since line 1188 in Document.cs is:

if (_node.SelectSingleNode("page_count") != null)
    int.Parse(_node.SelectSingleNode("page_count").InnerText);

So I'm going to have to check that it's a number before calling int.Parse(). This might be happening (the value not being a number) because Scribd may not have completed its indexing of the document before you try to reload it. Not really your fault (I should be checking before the parse), but that's likely the root cause.
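For what it's worth, a minimal sketch of that defensive parse using int.TryParse; the SafePageCount helper name is hypothetical, not part of Scribd.Net:

```csharp
using System;
using System.Xml;

class PageCountSketch
{
    // Hypothetical helper: returns the parsed page_count, or 0 when the
    // node is missing or Scribd has not produced a numeric value yet.
    public static int SafePageCount(XmlNode node)
    {
        XmlNode pageCount = node.SelectSingleNode("page_count");
        int result;
        if (pageCount != null && int.TryParse(pageCount.InnerText, out result))
        {
            return result;
        }
        return 0;
    }

    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<rsp><page_count>12</page_count></rsp>");
        Console.WriteLine(SafePageCount(doc.DocumentElement)); // 12

        // Still-indexing case: the element exists but is empty.
        doc.LoadXml("<rsp><page_count></page_count></rsp>");
        Console.WriteLine(SafePageCount(doc.DocumentElement)); // 0
    }
}
```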

 

- Paul

 

Nov 13, 2009 at 8:22 PM

I downloaded the source code and started debugging from there; I had not realized that I could access the full source.

You are correct about the page count not being available. I placed a try/catch and set the page count to zero when the parse failed (a TryParse would be better, but this was a quick fix). However, it got even more interesting after that: for some of these documents the server was returning an "unauthorized access" error, and there were about four documents in the list that I did not upload.

I did not have much time to dig any deeper, so for now I just check the first property; if it is null, I do not add the document to the collection.

// Get all the properties.
Document _temp = Document.Download(_item.DocumentId);
if (_temp.AccessKey != null)
{
    _result.Add(_temp);
}

 

Not a good fix, but it works for now. I would like to stay on the current branch so I can pick up future updates. Any chance of this getting fixed? If you need some help, I might be able to come up with a better workaround than this hack, though I am sure no one knows the code better than you.
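For comparison, a sketch of what that guard could look like if it also tolerated the server-side "unauthorized access" failures. This assumes Document.Download throws on such responses (it may instead return a partially filled Document, which the AccessKey check already covers), and _myDocuments is a placeholder name for the internal list being iterated:

```csharp
// Sketch of the loop body (names other than those in the snippet above
// are placeholders; the broad catch is an assumption for illustration).
foreach (Document _item in _myDocuments)
{
    try
    {
        // Get all the properties.
        Document _temp = Document.Download(_item.DocumentId);
        if (_temp != null && _temp.AccessKey != null)
        {
            _result.Add(_temp);
        }
    }
    catch (Exception)
    {
        // Server rejected this document (e.g. unauthorized access, or
        // still indexing); skip it rather than failing the whole list.
    }
}
```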

 

Coordinator
Nov 13, 2009 at 9:51 PM
Yes. I plan on resolving it.



